Dec 13 02:07:28.903735 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 13 02:07:28.903760 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024 Dec 13 02:07:28.903770 kernel: KASLR enabled Dec 13 02:07:28.903776 kernel: efi: EFI v2.7 by EDK II Dec 13 02:07:28.903782 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x1347a1018 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x13232ed18 Dec 13 02:07:28.903787 kernel: random: crng init done Dec 13 02:07:28.903794 kernel: ACPI: Early table checksum verification disabled Dec 13 02:07:28.903800 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS ) Dec 13 02:07:28.903806 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Dec 13 02:07:28.903812 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:28.903820 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:28.903825 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:28.903831 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:28.903837 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:28.903851 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:28.903859 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:28.903866 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:28.903872 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:07:28.903878 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013) Dec 13 02:07:28.903885 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Dec 13 02:07:28.903891 kernel: NUMA: Failed to initialise from firmware Dec 13 02:07:28.903897 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Dec 13 02:07:28.903903 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff] Dec 13 02:07:28.903909 kernel: Zone ranges: Dec 13 02:07:28.903916 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Dec 13 02:07:28.903922 kernel: DMA32 empty Dec 13 02:07:28.903930 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Dec 13 02:07:28.903936 kernel: Movable zone start for each node Dec 13 02:07:28.903942 kernel: Early memory node ranges Dec 13 02:07:28.903948 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff] Dec 13 02:07:28.903955 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff] Dec 13 02:07:28.903961 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff] Dec 13 02:07:28.903967 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff] Dec 13 02:07:28.903974 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff] Dec 13 02:07:28.903980 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Dec 13 02:07:28.903986 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Dec 13 02:07:28.903992 kernel: psci: probing for conduit method from ACPI. Dec 13 02:07:28.904000 kernel: psci: PSCIv1.1 detected in firmware. 
Dec 13 02:07:28.904006 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 02:07:28.904013 kernel: psci: Trusted OS migration not required Dec 13 02:07:28.904022 kernel: psci: SMC Calling Convention v1.1 Dec 13 02:07:28.904028 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Dec 13 02:07:28.904035 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 02:07:28.904043 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 02:07:28.904050 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 13 02:07:28.904068 kernel: Detected PIPT I-cache on CPU0 Dec 13 02:07:28.904076 kernel: CPU features: detected: GIC system register CPU interface Dec 13 02:07:28.904099 kernel: CPU features: detected: Hardware dirty bit management Dec 13 02:07:28.904106 kernel: CPU features: detected: Spectre-v4 Dec 13 02:07:28.904113 kernel: CPU features: detected: Spectre-BHB Dec 13 02:07:28.904119 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 13 02:07:28.904126 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 13 02:07:28.904133 kernel: CPU features: detected: ARM erratum 1418040 Dec 13 02:07:28.904139 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 13 02:07:28.904149 kernel: alternatives: applying boot alternatives Dec 13 02:07:28.904157 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 02:07:28.904164 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 02:07:28.904171 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 02:07:28.904178 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 02:07:28.904184 kernel: Fallback order for Node 0: 0 Dec 13 02:07:28.904191 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Dec 13 02:07:28.904197 kernel: Policy zone: Normal Dec 13 02:07:28.904204 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 02:07:28.904211 kernel: software IO TLB: area num 2. Dec 13 02:07:28.904218 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Dec 13 02:07:28.904226 kernel: Memory: 3881592K/4096000K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 214408K reserved, 0K cma-reserved) Dec 13 02:07:28.904233 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 02:07:28.904240 kernel: trace event string verifier disabled Dec 13 02:07:28.904246 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 02:07:28.904254 kernel: rcu: RCU event tracing is enabled. Dec 13 02:07:28.904261 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 02:07:28.904268 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 02:07:28.904274 kernel: Tracing variant of Tasks RCU enabled. Dec 13 02:07:28.904281 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 02:07:28.904288 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 02:07:28.904294 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 02:07:28.904302 kernel: GICv3: 256 SPIs implemented Dec 13 02:07:28.904309 kernel: GICv3: 0 Extended SPIs implemented Dec 13 02:07:28.904316 kernel: Root IRQ handler: gic_handle_irq Dec 13 02:07:28.904322 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 13 02:07:28.904329 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Dec 13 02:07:28.904336 kernel: ITS [mem 0x08080000-0x0809ffff] Dec 13 02:07:28.904343 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 02:07:28.904350 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Dec 13 02:07:28.904357 kernel: GICv3: using LPI property table @0x00000001000e0000 Dec 13 02:07:28.904364 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Dec 13 02:07:28.904370 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 02:07:28.904378 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 02:07:28.904385 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 13 02:07:28.904392 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 02:07:28.904399 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 02:07:28.904405 kernel: Console: colour dummy device 80x25 Dec 13 02:07:28.904413 kernel: ACPI: Core revision 20230628 Dec 13 02:07:28.904420 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 02:07:28.904427 kernel: pid_max: default: 32768 minimum: 301 Dec 13 02:07:28.904434 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 02:07:28.904441 kernel: landlock: Up and running. Dec 13 02:07:28.904449 kernel: SELinux: Initializing. Dec 13 02:07:28.904456 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 02:07:28.904463 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 02:07:28.904470 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 02:07:28.904477 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 02:07:28.904484 kernel: rcu: Hierarchical SRCU implementation. Dec 13 02:07:28.904491 kernel: rcu: Max phase no-delay instances is 400. Dec 13 02:07:28.904498 kernel: Platform MSI: ITS@0x8080000 domain created Dec 13 02:07:28.904505 kernel: PCI/MSI: ITS@0x8080000 domain created Dec 13 02:07:28.904513 kernel: Remapping and enabling EFI services. Dec 13 02:07:28.904520 kernel: smp: Bringing up secondary CPUs ... Dec 13 02:07:28.904527 kernel: Detected PIPT I-cache on CPU1 Dec 13 02:07:28.904534 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Dec 13 02:07:28.904541 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Dec 13 02:07:28.904548 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 02:07:28.904555 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 13 02:07:28.904562 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 02:07:28.904569 kernel: SMP: Total of 2 processors activated. 
Dec 13 02:07:28.904575 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 02:07:28.904584 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 02:07:28.904591 kernel: CPU features: detected: Common not Private translations Dec 13 02:07:28.904603 kernel: CPU features: detected: CRC32 instructions Dec 13 02:07:28.904612 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 13 02:07:28.904619 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 02:07:28.904626 kernel: CPU features: detected: LSE atomic instructions Dec 13 02:07:28.904633 kernel: CPU features: detected: Privileged Access Never Dec 13 02:07:28.904641 kernel: CPU features: detected: RAS Extension Support Dec 13 02:07:28.904648 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 13 02:07:28.904657 kernel: CPU: All CPU(s) started at EL1 Dec 13 02:07:28.904664 kernel: alternatives: applying system-wide alternatives Dec 13 02:07:28.904671 kernel: devtmpfs: initialized Dec 13 02:07:28.904678 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 02:07:28.904686 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 02:07:28.904693 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 02:07:28.904700 kernel: SMBIOS 3.0.0 present. Dec 13 02:07:28.904709 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Dec 13 02:07:28.904716 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 02:07:28.904723 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 02:07:28.904731 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 02:07:28.904738 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 02:07:28.904745 kernel: audit: initializing netlink subsys (disabled) Dec 13 02:07:28.904753 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1 Dec 13 02:07:28.904760 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 02:07:28.904767 kernel: cpuidle: using governor menu Dec 13 02:07:28.904776 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 13 02:07:28.904783 kernel: ASID allocator initialised with 32768 entries Dec 13 02:07:28.904790 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 02:07:28.904798 kernel: Serial: AMBA PL011 UART driver Dec 13 02:07:28.904805 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 13 02:07:28.904812 kernel: Modules: 0 pages in range for non-PLT usage Dec 13 02:07:28.904819 kernel: Modules: 509040 pages in range for PLT usage Dec 13 02:07:28.904827 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 02:07:28.904834 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 02:07:28.904842 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 02:07:28.904850 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 02:07:28.904857 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 02:07:28.904864 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 02:07:28.904872 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 02:07:28.904879 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 02:07:28.904886 kernel: ACPI: Added _OSI(Module Device) Dec 13 02:07:28.904893 kernel: ACPI: Added _OSI(Processor Device) Dec 13 02:07:28.904901 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 02:07:28.904909 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 02:07:28.904917 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 02:07:28.904924 kernel: ACPI: Interpreter enabled Dec 13 02:07:28.904931 kernel: ACPI: Using GIC for interrupt routing Dec 13 02:07:28.904938 kernel: ACPI: MCFG table detected, 1 entries Dec 13 02:07:28.904946 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Dec 13 02:07:28.904953 kernel: printk: console [ttyAMA0] enabled Dec 13 02:07:28.904961 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 02:07:28.905143 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 02:07:28.905232 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 13 02:07:28.905299 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 13 02:07:28.905366 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Dec 13 02:07:28.905430 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Dec 13 02:07:28.905440 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Dec 13 02:07:28.905448 kernel: PCI host bridge to bus 0000:00 Dec 13 02:07:28.905519 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Dec 13 02:07:28.905583 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 13 02:07:28.905643 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Dec 13 02:07:28.905703 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 02:07:28.905789 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Dec 13 02:07:28.905867 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Dec 13 02:07:28.905935 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Dec 13 02:07:28.906006 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Dec 13 02:07:28.906117 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Dec 13 02:07:28.906191 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Dec 13 02:07:28.906268 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Dec 13 02:07:28.906336 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Dec 13 02:07:28.906410 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Dec 13 02:07:28.906482 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Dec 13 02:07:28.906555 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Dec 13 02:07:28.906623 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Dec 13 02:07:28.906698 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Dec 13 02:07:28.906767 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Dec 13 02:07:28.906840 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Dec 13 02:07:28.906934 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Dec 13 02:07:28.907020 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Dec 13 02:07:28.907120 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Dec 13 02:07:28.907198 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Dec 13 02:07:28.907266 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Dec 13 02:07:28.907340 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Dec 13 02:07:28.907408 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Dec 13 02:07:28.907488 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Dec 13 02:07:28.907556 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207] Dec 13 02:07:28.907634 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Dec 13 02:07:28.907705 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Dec 13 02:07:28.907773 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Dec 13 02:07:28.907843 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Dec 13 02:07:28.907922 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Dec 13 02:07:28.907992 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Dec 13 02:07:28.908116 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Dec 13 02:07:28.908194 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Dec 13 02:07:28.908263 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Dec 13 02:07:28.908337 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Dec 13 02:07:28.908405 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Dec 13 02:07:28.908493 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Dec 13 02:07:28.908562 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Dec 13 02:07:28.908638 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Dec 13 02:07:28.908706 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Dec 13 02:07:28.908773 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Dec 13 02:07:28.908847 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Dec 13 02:07:28.908921 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Dec 13 02:07:28.908990 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Dec 13 02:07:28.909080 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Dec 13 02:07:28.909157 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 02:07:28.909224 kernel: pci 
0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Dec 13 02:07:28.909292 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Dec 13 02:07:28.909365 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 13 02:07:28.909434 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 13 02:07:28.909502 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Dec 13 02:07:28.909572 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 02:07:28.909643 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Dec 13 02:07:28.909710 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Dec 13 02:07:28.909779 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 02:07:28.912291 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Dec 13 02:07:28.912381 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Dec 13 02:07:28.912455 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Dec 13 02:07:28.912527 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Dec 13 02:07:28.912596 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000 Dec 13 02:07:28.912667 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 13 02:07:28.912734 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Dec 13 02:07:28.912800 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Dec 13 02:07:28.912875 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 13 02:07:28.912941 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Dec 13 02:07:28.913007 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Dec 13 02:07:28.913115 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 13 02:07:28.913186 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Dec 13 02:07:28.913250 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Dec 13 02:07:28.913321 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 13 02:07:28.913386 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Dec 13 02:07:28.913456 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Dec 13 02:07:28.913528 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Dec 13 02:07:28.913596 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 
0x8000000000-0x80001fffff 64bit pref] Dec 13 02:07:28.913667 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Dec 13 02:07:28.913734 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Dec 13 02:07:28.913805 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Dec 13 02:07:28.913873 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Dec 13 02:07:28.913944 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Dec 13 02:07:28.914012 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Dec 13 02:07:28.914457 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Dec 13 02:07:28.914539 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Dec 13 02:07:28.914606 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Dec 13 02:07:28.914671 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Dec 13 02:07:28.914745 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Dec 13 02:07:28.914810 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Dec 13 02:07:28.914877 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Dec 13 02:07:28.914968 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Dec 13 02:07:28.915039 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Dec 13 02:07:28.915627 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Dec 13 02:07:28.915711 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Dec 13 02:07:28.915786 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Dec 13 02:07:28.915856 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Dec 13 02:07:28.915923 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Dec 13 02:07:28.915991 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Dec 13 02:07:28.916071 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Dec 13 02:07:28.916145 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Dec 13 02:07:28.916213 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Dec 13 02:07:28.916282 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Dec 13 02:07:28.916354 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Dec 13 02:07:28.916423 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Dec 13 02:07:28.916489 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Dec 13 02:07:28.916557 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Dec 13 02:07:28.916623 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Dec 13 02:07:28.916690 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Dec 13 02:07:28.916756 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Dec 13 02:07:28.916823 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Dec 13 02:07:28.916894 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Dec 13 02:07:28.916961 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Dec 13 02:07:28.917029 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Dec 13 02:07:28.918213 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Dec 
13 02:07:28.918305 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Dec 13 02:07:28.918374 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Dec 13 02:07:28.918441 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Dec 13 02:07:28.918508 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Dec 13 02:07:28.918581 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Dec 13 02:07:28.918647 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Dec 13 02:07:28.918712 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Dec 13 02:07:28.918787 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Dec 13 02:07:28.918855 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Dec 13 02:07:28.918951 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Dec 13 02:07:28.919021 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Dec 13 02:07:28.919124 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Dec 13 02:07:28.919204 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Dec 13 02:07:28.919275 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Dec 13 02:07:28.919344 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Dec 13 02:07:28.919412 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Dec 13 02:07:28.919483 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Dec 13 02:07:28.919549 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Dec 13 02:07:28.919623 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Dec 13 02:07:28.919695 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Dec 13 02:07:28.919760 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Dec 13 02:07:28.919837 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Dec 13 02:07:28.919904 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Dec 13 02:07:28.919979 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Dec 13 02:07:28.921095 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Dec 13 02:07:28.921231 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Dec 13 02:07:28.921300 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Dec 13 02:07:28.921366 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Dec 13 02:07:28.921442 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Dec 13 02:07:28.921511 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Dec 13 02:07:28.921581 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Dec 13 02:07:28.921648 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Dec 13 02:07:28.921720 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Dec 13 02:07:28.921787 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Dec 13 02:07:28.921862 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Dec 13 02:07:28.921931 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Dec 13 02:07:28.922001 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Dec 13 02:07:28.922451 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Dec 13 02:07:28.922540 kernel: pci 0000:00:02.6: bridge window 
[io 0x7000-0x7fff] Dec 13 02:07:28.922606 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Dec 13 02:07:28.922678 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Dec 13 02:07:28.922746 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Dec 13 02:07:28.922811 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Dec 13 02:07:28.922876 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Dec 13 02:07:28.922998 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Dec 13 02:07:28.923474 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Dec 13 02:07:28.923555 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Dec 13 02:07:28.923622 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Dec 13 02:07:28.923695 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Dec 13 02:07:28.923766 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 13 02:07:28.923827 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 13 02:07:28.923888 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 13 02:07:28.923962 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Dec 13 02:07:28.924026 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Dec 13 02:07:28.924115 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Dec 13 02:07:28.924195 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Dec 13 02:07:28.924262 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Dec 13 02:07:28.924325 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Dec 13 02:07:28.924397 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Dec 13 02:07:28.924460 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Dec 13 02:07:28.924524 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Dec 13 02:07:28.924599 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Dec 13 02:07:28.924663 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Dec 13 02:07:28.924729 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Dec 13 02:07:28.924810 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Dec 13 02:07:28.924876 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Dec 13 02:07:28.924940 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Dec 13 02:07:28.925019 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Dec 13 02:07:28.925169 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Dec 13 02:07:28.925242 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Dec 13 02:07:28.925342 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Dec 13 02:07:28.925410 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Dec 13 02:07:28.925479 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Dec 13 02:07:28.925553 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Dec 13 02:07:28.925623 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Dec 13 02:07:28.925703 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Dec 13 02:07:28.925789 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Dec 13 02:07:28.925853 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Dec 13 02:07:28.925947 kernel: pci_bus 0000:09: resource 2 
[mem 0x8001000000-0x80011fffff 64bit pref] Dec 13 02:07:28.925965 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 13 02:07:28.925975 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 13 02:07:28.925985 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 13 02:07:28.925993 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 13 02:07:28.926001 kernel: iommu: Default domain type: Translated Dec 13 02:07:28.926009 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 02:07:28.926021 kernel: efivars: Registered efivars operations Dec 13 02:07:28.926029 kernel: vgaarb: loaded Dec 13 02:07:28.926039 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 02:07:28.932099 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 02:07:28.932143 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 02:07:28.932153 kernel: pnp: PnP ACPI init Dec 13 02:07:28.932318 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 13 02:07:28.932333 kernel: pnp: PnP ACPI: found 1 devices Dec 13 02:07:28.932341 kernel: NET: Registered PF_INET protocol family Dec 13 02:07:28.932350 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 02:07:28.932358 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 02:07:28.932375 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 02:07:28.932383 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 02:07:28.932391 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 02:07:28.932399 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 02:07:28.932407 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 02:07:28.932415 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 02:07:28.932422 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 02:07:28.932508 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Dec 13 02:07:28.932521 kernel: PCI: CLS 0 bytes, default 64 Dec 13 02:07:28.932531 kernel: kvm [1]: HYP mode not available Dec 13 02:07:28.932539 kernel: Initialise system trusted keyrings Dec 13 02:07:28.932547 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 02:07:28.932554 kernel: Key type asymmetric registered Dec 13 02:07:28.932562 kernel: Asymmetric key parser 'x509' registered Dec 13 02:07:28.932570 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 02:07:28.932578 kernel: io scheduler mq-deadline registered Dec 13 02:07:28.932586 kernel: io scheduler kyber registered Dec 13 02:07:28.932593 kernel: io scheduler bfq registered Dec 13 02:07:28.932604 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Dec 13 02:07:28.932675 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Dec 13 02:07:28.932744 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Dec 13 02:07:28.932812 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 02:07:28.932893 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Dec 13 02:07:28.932970 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Dec 13 02:07:28.933099 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- 
IbPresDis- LLActRep+ Dec 13 02:07:28.933200 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Dec 13 02:07:28.933286 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Dec 13 02:07:28.933365 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 02:07:28.933450 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Dec 13 02:07:28.933530 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Dec 13 02:07:28.933617 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 02:07:28.933700 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Dec 13 02:07:28.933787 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Dec 13 02:07:28.933857 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 02:07:28.933928 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Dec 13 02:07:28.933996 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Dec 13 02:07:28.935869 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 02:07:28.935987 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Dec 13 02:07:28.936120 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Dec 13 02:07:28.936200 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 02:07:28.936273 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Dec 13 02:07:28.936342 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Dec 13 02:07:28.936417 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 02:07:28.936429 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Dec 13 02:07:28.936501 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Dec 13 02:07:28.936568 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Dec 13 02:07:28.936635 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 02:07:28.936647 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 02:07:28.936655 kernel: ACPI: button: Power Button [PWRB] Dec 13 02:07:28.936665 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 13 02:07:28.936739 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002) Dec 13 02:07:28.936815 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Dec 13 02:07:28.936888 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Dec 13 02:07:28.936900 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 02:07:28.936908 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 13 02:07:28.936976 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Dec 13 02:07:28.936987 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Dec 13 02:07:28.936995 kernel: thunder_xcv, ver 1.0 Dec 13 02:07:28.937006 kernel: thunder_bgx, ver 1.0 Dec 13 02:07:28.937014 kernel: nicpf, ver 1.0 Dec 13 02:07:28.937021 kernel: nicvf, ver 1.0 Dec 13 02:07:28.937119 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 02:07:28.937187 
kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T02:07:28 UTC (1734055648) Dec 13 02:07:28.937198 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 02:07:28.937206 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Dec 13 02:07:28.937214 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 02:07:28.937225 kernel: watchdog: Hard watchdog permanently disabled Dec 13 02:07:28.937233 kernel: NET: Registered PF_INET6 protocol family Dec 13 02:07:28.937241 kernel: Segment Routing with IPv6 Dec 13 02:07:28.937248 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 02:07:28.937256 kernel: NET: Registered PF_PACKET protocol family Dec 13 02:07:28.937264 kernel: Key type dns_resolver registered Dec 13 02:07:28.937272 kernel: registered taskstats version 1 Dec 13 02:07:28.937280 kernel: Loading compiled-in X.509 certificates Dec 13 02:07:28.937288 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb' Dec 13 02:07:28.937297 kernel: Key type .fscrypt registered Dec 13 02:07:28.937305 kernel: Key type fscrypt-provisioning registered Dec 13 02:07:28.937312 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 02:07:28.937320 kernel: ima: Allocated hash algorithm: sha1 Dec 13 02:07:28.937328 kernel: ima: No architecture policies found Dec 13 02:07:28.937335 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 02:07:28.937343 kernel: clk: Disabling unused clocks Dec 13 02:07:28.937351 kernel: Freeing unused kernel memory: 39360K Dec 13 02:07:28.937359 kernel: Run /init as init process Dec 13 02:07:28.937368 kernel: with arguments: Dec 13 02:07:28.937376 kernel: /init Dec 13 02:07:28.937384 kernel: with environment: Dec 13 02:07:28.937391 kernel: HOME=/ Dec 13 02:07:28.937399 kernel: TERM=linux Dec 13 02:07:28.937406 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 02:07:28.937416 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 02:07:28.937426 systemd[1]: Detected virtualization kvm. Dec 13 02:07:28.937436 systemd[1]: Detected architecture arm64. Dec 13 02:07:28.937443 systemd[1]: Running in initrd. Dec 13 02:07:28.937452 systemd[1]: No hostname configured, using default hostname. Dec 13 02:07:28.937460 systemd[1]: Hostname set to <localhost>. Dec 13 02:07:28.937468 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:07:28.937476 systemd[1]: Queued start job for default target initrd.target. Dec 13 02:07:28.937485 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 02:07:28.937495 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 02:07:28.937503 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 02:07:28.937512 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 02:07:28.937520 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 02:07:28.937531 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 02:07:28.937541 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 02:07:28.937549 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 02:07:28.937559 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 02:07:28.937568 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 02:07:28.937576 systemd[1]: Reached target paths.target - Path Units. Dec 13 02:07:28.937584 systemd[1]: Reached target slices.target - Slice Units. Dec 13 02:07:28.937592 systemd[1]: Reached target swap.target - Swaps. Dec 13 02:07:28.937600 systemd[1]: Reached target timers.target - Timer Units. Dec 13 02:07:28.937609 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 02:07:28.937617 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 02:07:28.937625 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 02:07:28.937635 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 02:07:28.937643 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 02:07:28.937651 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 02:07:28.937660 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 02:07:28.937668 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 02:07:28.937676 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 02:07:28.937684 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 02:07:28.937692 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 02:07:28.937702 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 02:07:28.937710 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 02:07:28.937718 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 02:07:28.937727 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 02:07:28.937735 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 02:07:28.937744 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 02:07:28.937773 systemd-journald[236]: Collecting audit messages is disabled. Dec 13 02:07:28.937797 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 02:07:28.937806 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 02:07:28.937817 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 02:07:28.937826 systemd-journald[236]: Journal started Dec 13 02:07:28.937845 systemd-journald[236]: Runtime Journal (/run/log/journal/b389746876f54895a4bfcd60b9ebe5ba) is 8.0M, max 76.5M, 68.5M free. Dec 13 02:07:28.926111 systemd-modules-load[237]: Inserted module 'overlay' Dec 13 02:07:28.943074 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 02:07:28.945654 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Dec 13 02:07:28.945693 kernel: Bridge firewalling registered Dec 13 02:07:28.945777 systemd-modules-load[237]: Inserted module 'br_netfilter' Dec 13 02:07:28.947495 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 02:07:28.948562 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 02:07:28.950233 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:07:28.959274 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 02:07:28.962221 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 02:07:28.964251 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 02:07:28.966103 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 02:07:28.979387 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 02:07:28.985817 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 02:07:28.990240 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 02:07:28.991000 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 02:07:28.997386 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 02:07:29.018739 dracut-cmdline[274]: dracut-dracut-053 Dec 13 02:07:29.022437 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 02:07:29.038683 systemd-resolved[276]: Positive Trust Anchors: Dec 13 02:07:29.038698 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:07:29.038729 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 02:07:29.044245 systemd-resolved[276]: Defaulting to hostname 'linux'. Dec 13 02:07:29.045246 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 02:07:29.046312 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 02:07:29.117100 kernel: SCSI subsystem initialized Dec 13 02:07:29.122091 kernel: Loading iSCSI transport class v2.0-870. Dec 13 02:07:29.129476 kernel: iscsi: registered transport (tcp) Dec 13 02:07:29.145146 kernel: iscsi: registered transport (qla4xxx) Dec 13 02:07:29.145298 kernel: QLogic iSCSI HBA Driver Dec 13 02:07:29.189702 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 02:07:29.196307 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 02:07:29.220790 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 02:07:29.220860 kernel: device-mapper: uevent: version 1.0.3 Dec 13 02:07:29.220874 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 02:07:29.274111 kernel: raid6: neonx8 gen() 15682 MB/s Dec 13 02:07:29.291114 kernel: raid6: neonx4 gen() 15539 MB/s Dec 13 02:07:29.308124 kernel: raid6: neonx2 gen() 13182 MB/s Dec 13 02:07:29.325100 kernel: raid6: neonx1 gen() 10325 MB/s Dec 13 02:07:29.342115 kernel: raid6: int64x8 gen() 6874 MB/s Dec 13 02:07:29.359107 kernel: raid6: int64x4 gen() 7229 MB/s Dec 13 02:07:29.376120 kernel: raid6: int64x2 gen() 6039 MB/s Dec 13 02:07:29.393118 kernel: raid6: int64x1 gen() 5008 MB/s Dec 13 02:07:29.393195 kernel: raid6: using algorithm neonx8 gen() 15682 MB/s Dec 13 02:07:29.410143 kernel: raid6: .... xor() 11790 MB/s, rmw enabled Dec 13 02:07:29.410230 kernel: raid6: using neon recovery algorithm Dec 13 02:07:29.415096 kernel: xor: measuring software checksum speed Dec 13 02:07:29.415150 kernel: 8regs : 19797 MB/sec Dec 13 02:07:29.415176 kernel: 32regs : 19313 MB/sec Dec 13 02:07:29.415198 kernel: arm64_neon : 24074 MB/sec Dec 13 02:07:29.416079 kernel: xor: using function: arm64_neon (24074 MB/sec) Dec 13 02:07:29.467105 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 02:07:29.481230 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 02:07:29.489289 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 02:07:29.502540 systemd-udevd[458]: Using default interface naming scheme 'v255'. Dec 13 02:07:29.505909 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 02:07:29.518222 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 02:07:29.536961 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation Dec 13 02:07:29.568873 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 02:07:29.573247 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 02:07:29.625300 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 02:07:29.634236 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 02:07:29.652389 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 02:07:29.653573 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 02:07:29.656016 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 02:07:29.656588 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 02:07:29.663262 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 02:07:29.687451 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 02:07:29.722079 kernel: scsi host0: Virtio SCSI HBA Dec 13 02:07:29.782173 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 02:07:29.782293 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Dec 13 02:07:29.790519 kernel: ACPI: bus type USB registered Dec 13 02:07:29.790615 kernel: usbcore: registered new interface driver usbfs Dec 13 02:07:29.795386 kernel: usbcore: registered new interface driver hub Dec 13 02:07:29.801074 kernel: usbcore: registered new device driver usb Dec 13 02:07:29.828461 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Dec 13 02:07:29.828579 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 02:07:29.830712 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 02:07:29.831272 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:07:29.831417 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:07:29.836326 kernel: sr 0:0:0:0: Power-on or device reset occurred Dec 13 02:07:29.839463 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Dec 13 02:07:29.839580 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 02:07:29.839591 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Dec 13 02:07:29.832670 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 02:07:29.838311 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 02:07:29.859078 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 13 02:07:29.864980 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Dec 13 02:07:29.866276 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 02:07:29.866386 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 13 02:07:29.866480 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Dec 13 02:07:29.866572 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Dec 13 02:07:29.866665 kernel: hub 1-0:1.0: USB hub found Dec 13 02:07:29.866787 kernel: hub 1-0:1.0: 4 ports detected Dec 13 02:07:29.866880 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 13 02:07:29.867076 kernel: hub 2-0:1.0: USB hub found Dec 13 02:07:29.867243 kernel: hub 2-0:1.0: 4 ports detected Dec 13 02:07:29.864117 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:07:29.871444 kernel: sd 0:0:0:1: Power-on or device reset occurred Dec 13 02:07:29.879035 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Dec 13 02:07:29.879179 kernel: sd 0:0:0:1: [sda] Write Protect is off Dec 13 02:07:29.879263 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Dec 13 02:07:29.879347 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 02:07:29.879426 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 02:07:29.879437 kernel: GPT:17805311 != 80003071 Dec 13 02:07:29.879446 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 02:07:29.879462 kernel: GPT:17805311 != 80003071 Dec 13 02:07:29.879471 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 02:07:29.879480 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:07:29.879490 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Dec 13 02:07:29.870387 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 02:07:29.898413 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 02:07:29.925157 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/sda3 scanned by (udev-worker) (506) Dec 13 02:07:29.925222 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (522) Dec 13 02:07:29.933042 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Dec 13 02:07:29.944147 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Dec 13 02:07:29.949143 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Dec 13 02:07:29.949754 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Dec 13 02:07:29.957418 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 13 02:07:29.967317 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 02:07:29.975125 disk-uuid[575]: Primary Header is updated. Dec 13 02:07:29.975125 disk-uuid[575]: Secondary Entries is updated. Dec 13 02:07:29.975125 disk-uuid[575]: Secondary Header is updated. Dec 13 02:07:29.981041 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:07:30.106836 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 02:07:30.347177 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Dec 13 02:07:30.484668 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Dec 13 02:07:30.484782 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 13 02:07:30.486117 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Dec 13 02:07:30.540347 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Dec 13 02:07:30.541927 kernel: usbcore: registered new interface driver usbhid Dec 13 02:07:30.541966 kernel: usbhid: USB HID core driver Dec 13 02:07:30.994222 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:07:30.998572 disk-uuid[576]: The operation has completed successfully. Dec 13 02:07:31.052194 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 02:07:31.052292 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 02:07:31.062948 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 02:07:31.066646 sh[593]: Success Dec 13 02:07:31.086157 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 02:07:31.137619 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 02:07:31.146913 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 02:07:31.151514 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 02:07:31.170540 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 02:07:31.170617 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 02:07:31.170636 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 02:07:31.172878 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 02:07:31.172924 kernel: BTRFS info (device dm-0): using free space tree Dec 13 02:07:31.180087 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 02:07:31.183222 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 02:07:31.183866 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Dec 13 02:07:31.194351 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 02:07:31.197369 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 02:07:31.209094 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 02:07:31.209158 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 02:07:31.209171 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:07:31.212215 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 02:07:31.212267 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 02:07:31.223256 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 02:07:31.224085 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 02:07:31.230524 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 02:07:31.238350 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 02:07:31.332001 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 02:07:31.344330 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 02:07:31.372864 systemd-networkd[779]: lo: Link UP Dec 13 02:07:31.373512 systemd-networkd[779]: lo: Gained carrier Dec 13 02:07:31.375236 ignition[676]: Ignition 2.19.0 Dec 13 02:07:31.375249 ignition[676]: Stage: fetch-offline Dec 13 02:07:31.375284 ignition[676]: no configs at "/usr/lib/ignition/base.d" Dec 13 02:07:31.375293 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 02:07:31.377309 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 02:07:31.375449 ignition[676]: parsed url from cmdline: "" Dec 13 02:07:31.377761 systemd-networkd[779]: Enumeration completed Dec 13 02:07:31.375452 ignition[676]: no config URL provided Dec 13 02:07:31.379169 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 02:07:31.375456 ignition[676]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:07:31.379474 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:07:31.375462 ignition[676]: no config at "/usr/lib/ignition/user.ign" Dec 13 02:07:31.379476 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:07:31.375469 ignition[676]: failed to fetch config: resource requires networking Dec 13 02:07:31.381303 systemd[1]: Reached target network.target - Network. Dec 13 02:07:31.375646 ignition[676]: Ignition finished successfully Dec 13 02:07:31.381581 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:07:31.381583 systemd-networkd[779]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:07:31.382201 systemd-networkd[779]: eth0: Link UP Dec 13 02:07:31.382205 systemd-networkd[779]: eth0: Gained carrier Dec 13 02:07:31.382212 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 02:07:31.388445 systemd-networkd[779]: eth1: Link UP Dec 13 02:07:31.388448 systemd-networkd[779]: eth1: Gained carrier Dec 13 02:07:31.388458 systemd-networkd[779]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:07:31.390850 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 02:07:31.404774 ignition[783]: Ignition 2.19.0 Dec 13 02:07:31.404784 ignition[783]: Stage: fetch Dec 13 02:07:31.404960 ignition[783]: no configs at "/usr/lib/ignition/base.d" Dec 13 02:07:31.404971 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 02:07:31.405085 ignition[783]: parsed url from cmdline: "" Dec 13 02:07:31.405089 ignition[783]: no config URL provided Dec 13 02:07:31.405093 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:07:31.405101 ignition[783]: no config at "/usr/lib/ignition/user.ign" Dec 13 02:07:31.405121 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Dec 13 02:07:31.405760 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Dec 13 02:07:31.422193 systemd-networkd[779]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 02:07:31.553188 systemd-networkd[779]: eth0: DHCPv4 address 78.47.95.53/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 13 02:07:31.605928 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Dec 13 02:07:31.613420 ignition[783]: GET result: OK Dec 13 02:07:31.613565 ignition[783]: parsing config with SHA512: e0d380fe2ad923639e07faad5c2c253c8b414a907dea242a9f446a9abd8247ae63d176fe76e794a14858e4f30e8e3ae2c4969bbd75772969d9c7130b635eabe4 Dec 13 02:07:31.619195 unknown[783]: fetched base config from "system" Dec 13 02:07:31.619213 unknown[783]: fetched base config from "system" Dec 13 02:07:31.619704 ignition[783]: fetch: fetch complete Dec 13 02:07:31.619221 unknown[783]: fetched user config from "hetzner" Dec 13 02:07:31.619710 ignition[783]: fetch: fetch passed Dec 13 02:07:31.621714 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 02:07:31.619759 ignition[783]: Ignition finished successfully Dec 13 02:07:31.630280 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 02:07:31.652451 ignition[790]: Ignition 2.19.0 Dec 13 02:07:31.652469 ignition[790]: Stage: kargs Dec 13 02:07:31.652731 ignition[790]: no configs at "/usr/lib/ignition/base.d" Dec 13 02:07:31.652746 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 02:07:31.655877 ignition[790]: kargs: kargs passed Dec 13 02:07:31.655933 ignition[790]: Ignition finished successfully Dec 13 02:07:31.658229 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 02:07:31.663435 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 02:07:31.676509 ignition[796]: Ignition 2.19.0 Dec 13 02:07:31.676519 ignition[796]: Stage: disks Dec 13 02:07:31.676695 ignition[796]: no configs at "/usr/lib/ignition/base.d" Dec 13 02:07:31.676706 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 02:07:31.677618 ignition[796]: disks: disks passed Dec 13 02:07:31.677664 ignition[796]: Ignition finished successfully Dec 13 02:07:31.679840 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Dec 13 02:07:31.680707 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 02:07:31.681394 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 02:07:31.682339 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 02:07:31.683300 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 02:07:31.684335 systemd[1]: Reached target basic.target - Basic System. Dec 13 02:07:31.690255 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 02:07:31.710181 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 02:07:31.714514 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 02:07:31.728295 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 02:07:31.780079 kernel: EXT4-fs (sda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none. Dec 13 02:07:31.780343 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 02:07:31.781283 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 02:07:31.795203 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 02:07:31.798704 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 02:07:31.800882 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 02:07:31.803724 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 02:07:31.803761 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 02:07:31.810084 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (812) Dec 13 02:07:31.811618 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 02:07:31.811654 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 02:07:31.812767 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 02:07:31.814161 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:07:31.815234 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 02:07:31.824089 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 02:07:31.824136 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 02:07:31.823466 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 02:07:31.892340 coreos-metadata[814]: Dec 13 02:07:31.892 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Dec 13 02:07:31.894546 coreos-metadata[814]: Dec 13 02:07:31.894 INFO Fetch successful Dec 13 02:07:31.894546 coreos-metadata[814]: Dec 13 02:07:31.894 INFO wrote hostname ci-4081-2-1-6-b597ddf835 to /sysroot/etc/hostname Dec 13 02:07:31.897115 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Dec 13 02:07:31.898362 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 02:07:31.903376 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Dec 13 02:07:31.909002 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 02:07:31.913642 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 02:07:32.015211 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 02:07:32.022217 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 02:07:32.024374 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 02:07:32.038111 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 02:07:32.065927 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 02:07:32.068896 ignition[930]: INFO : Ignition 2.19.0 Dec 13 02:07:32.070138 ignition[930]: INFO : Stage: mount Dec 13 02:07:32.070138 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 02:07:32.070138 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 02:07:32.071573 ignition[930]: INFO : mount: mount passed Dec 13 02:07:32.071573 ignition[930]: INFO : Ignition finished successfully Dec 13 02:07:32.073692 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 02:07:32.079194 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 02:07:32.169893 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 02:07:32.177292 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 02:07:32.188617 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (941) Dec 13 02:07:32.188692 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 02:07:32.188711 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 02:07:32.189379 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:07:32.192074 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 02:07:32.192114 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 02:07:32.194776 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 02:07:32.219094 ignition[958]: INFO : Ignition 2.19.0 Dec 13 02:07:32.219094 ignition[958]: INFO : Stage: files Dec 13 02:07:32.219094 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 02:07:32.219094 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 02:07:32.221995 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Dec 13 02:07:32.221995 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 02:07:32.221995 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 02:07:32.225153 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 02:07:32.226106 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 02:07:32.227027 unknown[958]: wrote ssh authorized keys file for user: core Dec 13 02:07:32.227826 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 02:07:32.230268 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 02:07:32.230268 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 02:07:32.315976 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 02:07:32.517451 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 02:07:32.517451 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 02:07:32.522083 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 02:07:32.522083 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 02:07:32.522083 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 02:07:32.522083 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:07:32.522083 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:07:32.522083 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:07:32.522083 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:07:32.522083 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:07:32.522083 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:07:32.522083 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 02:07:32.522083 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 02:07:32.522083 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 02:07:32.522083 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Dec 13 02:07:32.605272 systemd-networkd[779]: eth1: Gained IPv6LL Dec 13 02:07:32.797333 systemd-networkd[779]: eth0: Gained IPv6LL Dec 13 02:07:33.098823 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 02:07:33.339345 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 02:07:33.339345 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 02:07:33.342167 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 02:07:33.345024 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 02:07:33.345024 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 02:07:33.345024 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Dec 13 02:07:33.345024 ignition[958]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 13 02:07:33.345024 ignition[958]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 13 02:07:33.345024 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Dec 13 02:07:33.345024 ignition[958]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Dec 13 02:07:33.345024 ignition[958]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 02:07:33.345024 ignition[958]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:07:33.345024 ignition[958]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:07:33.345024 ignition[958]: INFO : files: files passed Dec 13 02:07:33.345024 ignition[958]: INFO : Ignition finished successfully Dec 13 02:07:33.345788 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 02:07:33.351598 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 02:07:33.356401 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 02:07:33.359817 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 02:07:33.360386 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Dec 13 02:07:33.374558 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:07:33.374558 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:07:33.377730 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:07:33.379526 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 02:07:33.381318 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 02:07:33.390322 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 02:07:33.417173 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 02:07:33.417315 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 02:07:33.419617 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 02:07:33.421667 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 02:07:33.422553 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 02:07:33.430303 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 02:07:33.444446 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 02:07:33.450320 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 02:07:33.466923 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 02:07:33.467719 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 02:07:33.468821 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 02:07:33.469880 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 02:07:33.470007 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 02:07:33.471440 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 02:07:33.472000 systemd[1]: Stopped target basic.target - Basic System. Dec 13 02:07:33.473120 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 02:07:33.474180 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 02:07:33.475192 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 02:07:33.476268 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 02:07:33.477349 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 02:07:33.478518 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 02:07:33.479621 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 02:07:33.480616 systemd[1]: Stopped target swap.target - Swaps. Dec 13 02:07:33.481467 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 02:07:33.481586 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 02:07:33.482767 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 02:07:33.483402 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 02:07:33.484365 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 02:07:33.484439 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 13 02:07:33.485420 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 02:07:33.485534 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 02:07:33.486982 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 02:07:33.487115 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 02:07:33.488180 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 02:07:33.488268 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 02:07:33.489326 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 02:07:33.489417 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 02:07:33.496299 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 02:07:33.496779 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 02:07:33.496902 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 02:07:33.501284 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 02:07:33.504580 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 02:07:33.504729 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 02:07:33.505889 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 02:07:33.505983 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 02:07:33.515991 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 02:07:33.517134 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 02:07:33.523697 ignition[1011]: INFO : Ignition 2.19.0 Dec 13 02:07:33.526636 ignition[1011]: INFO : Stage: umount Dec 13 02:07:33.526636 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 02:07:33.526636 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 02:07:33.526636 ignition[1011]: INFO : umount: umount passed Dec 13 02:07:33.526636 ignition[1011]: INFO : Ignition finished successfully Dec 13 02:07:33.526404 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 02:07:33.528388 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 02:07:33.530122 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 02:07:33.532247 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 02:07:33.532297 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 02:07:33.533665 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 02:07:33.533708 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 02:07:33.537041 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 02:07:33.537106 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 02:07:33.538030 systemd[1]: Stopped target network.target - Network. Dec 13 02:07:33.539007 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 02:07:33.539138 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 02:07:33.540221 systemd[1]: Stopped target paths.target - Path Units. Dec 13 02:07:33.542846 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 02:07:33.542920 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 13 02:07:33.543932 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 02:07:33.544935 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 02:07:33.546029 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 02:07:33.546103 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 02:07:33.546634 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 02:07:33.546667 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 02:07:33.549226 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 02:07:33.549295 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 02:07:33.550593 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 02:07:33.550634 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 02:07:33.552784 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 02:07:33.557265 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 02:07:33.563177 systemd-networkd[779]: eth0: DHCPv6 lease lost Dec 13 02:07:33.565978 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 02:07:33.566122 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 02:07:33.568156 systemd-networkd[779]: eth1: DHCPv6 lease lost Dec 13 02:07:33.571144 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 02:07:33.571217 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 02:07:33.572788 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 02:07:33.574192 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 02:07:33.575516 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 02:07:33.575589 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 02:07:33.583421 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 02:07:33.584532 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 02:07:33.584648 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 02:07:33.587204 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:07:33.587295 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 02:07:33.590372 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 02:07:33.590464 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 02:07:33.592468 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 02:07:33.596813 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 02:07:33.596990 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 02:07:33.605159 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 02:07:33.605266 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 02:07:33.618527 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 02:07:33.618708 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 02:07:33.620345 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 02:07:33.620427 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 02:07:33.621477 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Dec 13 02:07:33.621517 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 02:07:33.623241 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 02:07:33.623301 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 02:07:33.625473 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 02:07:33.625537 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 02:07:33.627514 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 02:07:33.627597 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 02:07:33.638331 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 02:07:33.638852 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 02:07:33.638963 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 02:07:33.640800 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:07:33.640854 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:07:33.641989 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 02:07:33.642128 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 02:07:33.649233 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 02:07:33.649900 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 02:07:33.652394 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 02:07:33.659275 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 02:07:33.666747 systemd[1]: Switching root. Dec 13 02:07:33.695647 systemd-journald[236]: Journal stopped Dec 13 02:07:34.547352 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Dec 13 02:07:34.547423 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 02:07:34.547436 kernel: SELinux: policy capability open_perms=1 Dec 13 02:07:34.547446 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 02:07:34.547458 kernel: SELinux: policy capability always_check_network=0 Dec 13 02:07:34.547468 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 02:07:34.547477 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 02:07:34.547486 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 02:07:34.547496 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 02:07:34.547506 kernel: audit: type=1403 audit(1734055653.826:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 02:07:34.547516 systemd[1]: Successfully loaded SELinux policy in 35.331ms. Dec 13 02:07:34.547536 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.324ms. Dec 13 02:07:34.547548 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 02:07:34.547561 systemd[1]: Detected virtualization kvm. Dec 13 02:07:34.547571 systemd[1]: Detected architecture arm64. Dec 13 02:07:34.547581 systemd[1]: Detected first boot. Dec 13 02:07:34.547591 systemd[1]: Hostname set to <ci-4081-2-1-6-b597ddf835>. Dec 13 02:07:34.547601 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:07:34.547612 zram_generator::config[1054]: No configuration found. Dec 13 02:07:34.547623 systemd[1]: Populated /etc with preset unit settings. Dec 13 02:07:34.547633 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 02:07:34.547648 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 02:07:34.547659 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 02:07:34.547670 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 02:07:34.547680 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 02:07:34.547691 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 02:07:34.547701 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 02:07:34.547712 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 02:07:34.547723 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 02:07:34.547738 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 02:07:34.547749 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 02:07:34.547759 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 02:07:34.547769 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 02:07:34.547779 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 02:07:34.547790 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 02:07:34.547800 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 02:07:34.547811 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 02:07:34.547821 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 02:07:34.547832 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 02:07:34.547842 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 02:07:34.547853 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 02:07:34.547864 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 02:07:34.547874 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 02:07:34.547884 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 02:07:34.547899 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 02:07:34.547910 systemd[1]: Reached target slices.target - Slice Units. Dec 13 02:07:34.547921 systemd[1]: Reached target swap.target - Swaps. Dec 13 02:07:34.547931 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 02:07:34.547941 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 02:07:34.547952 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 02:07:34.547966 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 02:07:34.547976 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 02:07:34.547987 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Dec 13 02:07:34.547997 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 02:07:34.548011 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 02:07:34.548022 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 02:07:34.548032 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 02:07:34.548042 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 02:07:34.548065 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 02:07:34.548080 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 02:07:34.548090 systemd[1]: Reached target machines.target - Containers. Dec 13 02:07:34.548101 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 02:07:34.548113 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 02:07:34.548124 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 02:07:34.548134 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 02:07:34.548145 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 02:07:34.548158 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 02:07:34.548171 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 02:07:34.548183 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 02:07:34.548194 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 02:07:34.548204 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 02:07:34.548214 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 02:07:34.548225 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 02:07:34.548235 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 02:07:34.548246 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 02:07:34.548257 kernel: fuse: init (API version 7.39) Dec 13 02:07:34.548268 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 02:07:34.548279 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 02:07:34.548289 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 02:07:34.548300 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 02:07:34.548310 kernel: ACPI: bus type drm_connector registered Dec 13 02:07:34.548320 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 02:07:34.548330 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 02:07:34.548341 systemd[1]: Stopped verity-setup.service. Dec 13 02:07:34.548351 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 02:07:34.548363 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 02:07:34.548373 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 02:07:34.548384 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Dec 13 02:07:34.548394 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 02:07:34.548404 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 02:07:34.548416 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 02:07:34.548426 kernel: loop: module loaded Dec 13 02:07:34.548436 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 02:07:34.548447 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 02:07:34.548482 systemd-journald[1124]: Collecting audit messages is disabled. Dec 13 02:07:34.548508 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:07:34.548520 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 02:07:34.548533 systemd-journald[1124]: Journal started Dec 13 02:07:34.548555 systemd-journald[1124]: Runtime Journal (/run/log/journal/b389746876f54895a4bfcd60b9ebe5ba) is 8.0M, max 76.5M, 68.5M free. Dec 13 02:07:34.291949 systemd[1]: Queued start job for default target multi-user.target. Dec 13 02:07:34.313724 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 13 02:07:34.314312 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 02:07:34.551838 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 02:07:34.552598 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:07:34.554082 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 02:07:34.554965 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:07:34.555186 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 02:07:34.556144 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 02:07:34.556333 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 02:07:34.558335 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:07:34.558481 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 02:07:34.559872 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 02:07:34.561398 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 02:07:34.562368 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 02:07:34.580544 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 02:07:34.589262 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 02:07:34.597262 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 02:07:34.599187 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 02:07:34.599228 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 02:07:34.603409 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 02:07:34.606341 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 02:07:34.616608 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 02:07:34.617495 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 02:07:34.623591 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Dec 13 02:07:34.629854 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 02:07:34.632262 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:07:34.638651 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 02:07:34.641021 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 02:07:34.644344 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 02:07:34.648634 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 02:07:34.653386 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 02:07:34.655869 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 02:07:34.657300 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 02:07:34.661085 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 02:07:34.677671 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 02:07:34.678610 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 02:07:34.680165 systemd-journald[1124]: Time spent on flushing to /var/log/journal/b389746876f54895a4bfcd60b9ebe5ba is 25.279ms for 1124 entries. Dec 13 02:07:34.680165 systemd-journald[1124]: System Journal (/var/log/journal/b389746876f54895a4bfcd60b9ebe5ba) is 8.0M, max 584.8M, 576.8M free. Dec 13 02:07:34.735070 systemd-journald[1124]: Received client request to flush runtime journal. Dec 13 02:07:34.735120 kernel: loop0: detected capacity change from 0 to 194096 Dec 13 02:07:34.690311 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 02:07:34.714152 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 02:07:34.716384 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 02:07:34.726401 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 02:07:34.745136 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 02:07:34.760756 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 02:07:34.771383 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 02:07:34.775073 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 02:07:34.789173 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:07:34.792402 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 02:07:34.794458 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 02:07:34.796277 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 02:07:34.826080 kernel: loop1: detected capacity change from 0 to 8 Dec 13 02:07:34.833786 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Dec 13 02:07:34.833803 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Dec 13 02:07:34.842098 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Dec 13 02:07:34.853177 kernel: loop2: detected capacity change from 0 to 114432 Dec 13 02:07:34.898629 kernel: loop3: detected capacity change from 0 to 114328 Dec 13 02:07:34.931087 kernel: loop4: detected capacity change from 0 to 194096 Dec 13 02:07:34.950191 kernel: loop5: detected capacity change from 0 to 8 Dec 13 02:07:34.954081 kernel: loop6: detected capacity change from 0 to 114432 Dec 13 02:07:34.969708 kernel: loop7: detected capacity change from 0 to 114328 Dec 13 02:07:34.987434 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Dec 13 02:07:34.987886 (sd-merge)[1194]: Merged extensions into '/usr'. Dec 13 02:07:34.995005 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 02:07:34.995027 systemd[1]: Reloading... Dec 13 02:07:35.098135 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 02:07:35.136081 zram_generator::config[1216]: No configuration found. Dec 13 02:07:35.240210 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:07:35.285920 systemd[1]: Reloading finished in 289 ms. Dec 13 02:07:35.316701 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 02:07:35.317693 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 02:07:35.329246 systemd[1]: Starting ensure-sysext.service... Dec 13 02:07:35.333205 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 02:07:35.355217 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)... Dec 13 02:07:35.355242 systemd[1]: Reloading... Dec 13 02:07:35.371563 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 02:07:35.371866 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 02:07:35.372616 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 02:07:35.372843 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Dec 13 02:07:35.372887 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Dec 13 02:07:35.377398 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 02:07:35.377412 systemd-tmpfiles[1258]: Skipping /boot Dec 13 02:07:35.384908 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 02:07:35.384931 systemd-tmpfiles[1258]: Skipping /boot Dec 13 02:07:35.428084 zram_generator::config[1284]: No configuration found. Dec 13 02:07:35.531397 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:07:35.577293 systemd[1]: Reloading finished in 221 ms. Dec 13 02:07:35.604969 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 02:07:35.610547 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 02:07:35.624287 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Dec 13 02:07:35.632291 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 02:07:35.637023 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 02:07:35.649310 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 02:07:35.653321 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 02:07:35.662340 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 02:07:35.672430 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 02:07:35.677404 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 02:07:35.686715 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 02:07:35.691604 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 02:07:35.692203 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 02:07:35.695139 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 02:07:35.695300 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 02:07:35.697300 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 02:07:35.705901 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 02:07:35.707270 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 02:07:35.719257 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 02:07:35.733098 systemd[1]: Finished ensure-sysext.service. Dec 13 02:07:35.733933 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:07:35.735247 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 02:07:35.749531 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 02:07:35.752813 systemd-udevd[1331]: Using default interface naming scheme 'v255'. Dec 13 02:07:35.756804 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 02:07:35.764676 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 02:07:35.769202 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 02:07:35.770465 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 02:07:35.771329 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:07:35.771819 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 02:07:35.773984 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:07:35.774149 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 02:07:35.779700 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:07:35.779869 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Dec 13 02:07:35.782022 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:07:35.783464 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 02:07:35.783507 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:07:35.796222 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 02:07:35.802304 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 02:07:35.821653 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 02:07:35.838686 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 02:07:35.850784 augenrules[1380]: No rules Dec 13 02:07:35.855516 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 02:07:35.923013 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 02:07:35.923883 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 02:07:35.931342 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 13 02:07:35.977114 systemd-networkd[1361]: lo: Link UP Dec 13 02:07:35.977125 systemd-networkd[1361]: lo: Gained carrier Dec 13 02:07:35.977837 systemd-networkd[1361]: Enumeration completed Dec 13 02:07:35.978194 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 02:07:35.982272 systemd-resolved[1328]: Positive Trust Anchors: Dec 13 02:07:35.982571 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:07:35.982663 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 02:07:35.987220 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 02:07:35.994520 systemd-resolved[1328]: Using system hostname 'ci-4081-2-1-6-b597ddf835'. Dec 13 02:07:35.997549 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 02:07:35.998229 systemd[1]: Reached target network.target - Network. Dec 13 02:07:35.999300 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 02:07:36.000073 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1392) Dec 13 02:07:36.020132 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1392) Dec 13 02:07:36.046462 systemd-networkd[1361]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:07:36.047745 systemd-networkd[1361]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 02:07:36.048536 systemd-networkd[1361]: eth1: Link UP Dec 13 02:07:36.048539 systemd-networkd[1361]: eth1: Gained carrier Dec 13 02:07:36.048556 systemd-networkd[1361]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:07:36.055748 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:07:36.055856 systemd-networkd[1361]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:07:36.057179 systemd-networkd[1361]: eth0: Link UP Dec 13 02:07:36.057422 systemd-networkd[1361]: eth0: Gained carrier Dec 13 02:07:36.057444 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 02:07:36.083843 systemd-networkd[1361]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 02:07:36.084638 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection. Dec 13 02:07:36.095095 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1391) Dec 13 02:07:36.119093 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:07:36.129247 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Dec 13 02:07:36.129372 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 02:07:36.133254 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 02:07:36.136647 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 02:07:36.140884 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 02:07:36.141584 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 02:07:36.141627 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:07:36.141956 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:07:36.144177 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 02:07:36.152315 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Dec 13 02:07:36.152398 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 02:07:36.152411 kernel: [drm] features: -context_init Dec 13 02:07:36.160364 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:07:36.160887 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 02:07:36.165078 kernel: [drm] number of scanouts: 1 Dec 13 02:07:36.165149 kernel: [drm] number of cap sets: 0 Dec 13 02:07:36.172134 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:07:36.172331 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 02:07:36.173802 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 13 02:07:36.173888 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 02:07:36.181077 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Dec 13 02:07:36.186061 systemd-networkd[1361]: eth0: DHCPv4 address 78.47.95.53/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 13 02:07:36.186672 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection. Dec 13 02:07:36.198354 kernel: Console: switching to colour frame buffer device 160x50 Dec 13 02:07:36.200812 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 13 02:07:36.203124 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 02:07:36.218894 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 02:07:36.237390 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 02:07:36.243258 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 02:07:36.244434 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:07:36.244720 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:07:36.251383 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 02:07:36.312808 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 02:07:36.368874 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 02:07:36.376319 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 02:07:36.393124 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:07:36.423311 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 02:07:36.424678 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 02:07:36.425690 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 02:07:36.426720 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 02:07:36.427935 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 02:07:36.429467 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 02:07:36.430502 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 02:07:36.431534 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 02:07:36.432266 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:07:36.432299 systemd[1]: Reached target paths.target - Path Units. Dec 13 02:07:36.432808 systemd[1]: Reached target timers.target - Timer Units. Dec 13 02:07:36.434964 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 02:07:36.437291 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 02:07:36.443115 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 02:07:36.444990 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
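The user-cloudinit path unit started above watches /var/lib/flatcar-install/user_data for a cloud-config. That file's contents are not shown anywhere in this log, so purely as an orientation sketch (every value below is an illustrative assumption, not recovered from this host's user_data), such a coreos-cloudinit cloud-config typically looks like:

#cloud-config
# Illustrative sketch only; this machine's real user_data is not in the log.
hostname: ci-4081-2-1-6-b597ddf835      # hostname later reported by systemd-resolved
ssh_authorized_keys:
  - "ssh-rsa AAAA... core@example"      # placeholder key, not the key used on this host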
Dec 13 02:07:36.446191 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 02:07:36.446786 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 02:07:36.447336 systemd[1]: Reached target basic.target - Basic System. Dec 13 02:07:36.447824 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 02:07:36.447850 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 02:07:36.453417 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 02:07:36.457320 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 02:07:36.460916 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:07:36.467321 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 02:07:36.473247 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 02:07:36.479388 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 02:07:36.479927 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 02:07:36.482150 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 02:07:36.484398 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 02:07:36.490312 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Dec 13 02:07:36.497308 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 02:07:36.504103 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 02:07:36.510638 dbus-daemon[1444]: [system] SELinux support is enabled Dec 13 02:07:36.511165 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 02:07:36.512422 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 02:07:36.512897 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 02:07:36.516294 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 02:07:36.520213 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 02:07:36.521578 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 02:07:36.527148 coreos-metadata[1443]: Dec 13 02:07:36.526 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Dec 13 02:07:36.529121 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 02:07:36.529873 coreos-metadata[1443]: Dec 13 02:07:36.529 INFO Fetch successful Dec 13 02:07:36.530026 coreos-metadata[1443]: Dec 13 02:07:36.529 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Dec 13 02:07:36.532342 coreos-metadata[1443]: Dec 13 02:07:36.531 INFO Fetch successful Dec 13 02:07:36.532409 jq[1445]: false Dec 13 02:07:36.548364 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:07:36.548578 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 02:07:36.548893 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Dec 13 02:07:36.549048 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 02:07:36.560364 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:07:36.560425 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 02:07:36.563154 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:07:36.563178 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 02:07:36.576166 jq[1456]: true Dec 13 02:07:36.593562 extend-filesystems[1446]: Found loop4 Dec 13 02:07:36.594666 extend-filesystems[1446]: Found loop5 Dec 13 02:07:36.594666 extend-filesystems[1446]: Found loop6 Dec 13 02:07:36.594666 extend-filesystems[1446]: Found loop7 Dec 13 02:07:36.594666 extend-filesystems[1446]: Found sda Dec 13 02:07:36.594666 extend-filesystems[1446]: Found sda1 Dec 13 02:07:36.594666 extend-filesystems[1446]: Found sda2 Dec 13 02:07:36.594666 extend-filesystems[1446]: Found sda3 Dec 13 02:07:36.594666 extend-filesystems[1446]: Found usr Dec 13 02:07:36.594666 extend-filesystems[1446]: Found sda4 Dec 13 02:07:36.594666 extend-filesystems[1446]: Found sda6 Dec 13 02:07:36.594666 extend-filesystems[1446]: Found sda7 Dec 13 02:07:36.594666 extend-filesystems[1446]: Found sda9 Dec 13 02:07:36.594666 extend-filesystems[1446]: Checking size of /dev/sda9 Dec 13 02:07:36.622547 tar[1460]: linux-arm64/helm Dec 13 02:07:36.624686 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:07:36.624887 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 02:07:36.627983 (ntainerd)[1477]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 02:07:36.634810 jq[1474]: true Dec 13 02:07:36.668371 update_engine[1455]: I20241213 02:07:36.665187 1455 main.cc:92] Flatcar Update Engine starting Dec 13 02:07:36.679873 systemd[1]: Started update-engine.service - Update Engine. Dec 13 02:07:36.684539 update_engine[1455]: I20241213 02:07:36.680153 1455 update_check_scheduler.cc:74] Next update check in 2m48s Dec 13 02:07:36.682231 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 02:07:36.684494 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 02:07:36.685170 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 02:07:36.686828 extend-filesystems[1446]: Resized partition /dev/sda9 Dec 13 02:07:36.699741 systemd-logind[1454]: New seat seat0. Dec 13 02:07:36.703473 extend-filesystems[1499]: resize2fs 1.47.1 (20-May-2024) Dec 13 02:07:36.708796 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 02:07:36.708816 systemd-logind[1454]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Dec 13 02:07:36.709035 systemd[1]: Started systemd-logind.service - User Login Management. 
Dec 13 02:07:36.711590 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Dec 13 02:07:36.747330 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1358) Dec 13 02:07:36.815146 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:07:36.819109 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 02:07:36.830496 systemd[1]: Starting sshkeys.service... Dec 13 02:07:36.890501 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Dec 13 02:07:36.907268 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 02:07:36.913411 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 02:07:36.914571 extend-filesystems[1499]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 02:07:36.914571 extend-filesystems[1499]: old_desc_blocks = 1, new_desc_blocks = 5 Dec 13 02:07:36.914571 extend-filesystems[1499]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Dec 13 02:07:36.916701 extend-filesystems[1446]: Resized filesystem in /dev/sda9 Dec 13 02:07:36.916701 extend-filesystems[1446]: Found sr0 Dec 13 02:07:36.915321 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 02:07:36.917136 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 02:07:36.944502 locksmithd[1496]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:07:36.968066 coreos-metadata[1525]: Dec 13 02:07:36.966 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Dec 13 02:07:36.974386 coreos-metadata[1525]: Dec 13 02:07:36.974 INFO Fetch successful Dec 13 02:07:36.982175 unknown[1525]: wrote ssh authorized keys file for user: core Dec 13 02:07:37.013062 update-ssh-keys[1531]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:07:37.017098 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 02:07:37.021993 systemd[1]: Finished sshkeys.service. Dec 13 02:07:37.057085 containerd[1477]: time="2024-12-13T02:07:37.055488040Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 02:07:37.104364 containerd[1477]: time="2024-12-13T02:07:37.104253120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:37.106126 containerd[1477]: time="2024-12-13T02:07:37.106083040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:07:37.106234 containerd[1477]: time="2024-12-13T02:07:37.106220640Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:07:37.106313 containerd[1477]: time="2024-12-13T02:07:37.106299800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:07:37.106513 containerd[1477]: time="2024-12-13T02:07:37.106494280Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Dec 13 02:07:37.106583 containerd[1477]: time="2024-12-13T02:07:37.106570120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:37.106695 containerd[1477]: time="2024-12-13T02:07:37.106678080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:07:37.106745 containerd[1477]: time="2024-12-13T02:07:37.106732800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:37.107044 containerd[1477]: time="2024-12-13T02:07:37.107021200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:07:37.107131 containerd[1477]: time="2024-12-13T02:07:37.107118000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:37.107249 containerd[1477]: time="2024-12-13T02:07:37.107232440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:07:37.107312 containerd[1477]: time="2024-12-13T02:07:37.107299120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:37.107686 containerd[1477]: time="2024-12-13T02:07:37.107485760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:37.107808 containerd[1477]: time="2024-12-13T02:07:37.107790520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:07:37.108083 containerd[1477]: time="2024-12-13T02:07:37.108035880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:07:37.108155 containerd[1477]: time="2024-12-13T02:07:37.108141720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:07:37.108346 containerd[1477]: time="2024-12-13T02:07:37.108272840Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 02:07:37.108346 containerd[1477]: time="2024-12-13T02:07:37.108316920Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:07:37.112688 containerd[1477]: time="2024-12-13T02:07:37.112650000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:07:37.113197 containerd[1477]: time="2024-12-13T02:07:37.112832080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:07:37.113197 containerd[1477]: time="2024-12-13T02:07:37.112924480Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 02:07:37.113197 containerd[1477]: time="2024-12-13T02:07:37.112951600Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Dec 13 02:07:37.113197 containerd[1477]: time="2024-12-13T02:07:37.112969360Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 02:07:37.113197 containerd[1477]: time="2024-12-13T02:07:37.113144960Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 02:07:37.113602 containerd[1477]: time="2024-12-13T02:07:37.113578040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:07:37.113771 containerd[1477]: time="2024-12-13T02:07:37.113753680Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 02:07:37.113846 containerd[1477]: time="2024-12-13T02:07:37.113832560Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 02:07:37.114642 containerd[1477]: time="2024-12-13T02:07:37.114126360Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 02:07:37.114642 containerd[1477]: time="2024-12-13T02:07:37.114154080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 02:07:37.114642 containerd[1477]: time="2024-12-13T02:07:37.114174040Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 02:07:37.114642 containerd[1477]: time="2024-12-13T02:07:37.114186960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:07:37.114642 containerd[1477]: time="2024-12-13T02:07:37.114201720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 02:07:37.114642 containerd[1477]: time="2024-12-13T02:07:37.114216800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:07:37.114642 containerd[1477]: time="2024-12-13T02:07:37.114229320Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 02:07:37.114642 containerd[1477]: time="2024-12-13T02:07:37.114242000Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:07:37.114642 containerd[1477]: time="2024-12-13T02:07:37.114254240Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 02:07:37.114642 containerd[1477]: time="2024-12-13T02:07:37.114276800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:07:37.114642 containerd[1477]: time="2024-12-13T02:07:37.114291720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:07:37.114642 containerd[1477]: time="2024-12-13T02:07:37.114305480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 02:07:37.114642 containerd[1477]: time="2024-12-13T02:07:37.114323320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:07:37.114642 containerd[1477]: time="2024-12-13T02:07:37.114336880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Dec 13 02:07:37.114927 containerd[1477]: time="2024-12-13T02:07:37.114350920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 02:07:37.114927 containerd[1477]: time="2024-12-13T02:07:37.114363480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:07:37.114927 containerd[1477]: time="2024-12-13T02:07:37.114377880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 02:07:37.114927 containerd[1477]: time="2024-12-13T02:07:37.114391560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 02:07:37.114927 containerd[1477]: time="2024-12-13T02:07:37.114408360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 02:07:37.114927 containerd[1477]: time="2024-12-13T02:07:37.114420080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 02:07:37.114927 containerd[1477]: time="2024-12-13T02:07:37.114432000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 02:07:37.114927 containerd[1477]: time="2024-12-13T02:07:37.114444920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 02:07:37.114927 containerd[1477]: time="2024-12-13T02:07:37.114462040Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 02:07:37.114927 containerd[1477]: time="2024-12-13T02:07:37.114483360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 02:07:37.114927 containerd[1477]: time="2024-12-13T02:07:37.114495280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:07:37.114927 containerd[1477]: time="2024-12-13T02:07:37.114524480Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:07:37.115353 containerd[1477]: time="2024-12-13T02:07:37.115175600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:07:37.116475 containerd[1477]: time="2024-12-13T02:07:37.115207160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 02:07:37.116475 containerd[1477]: time="2024-12-13T02:07:37.115460720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:07:37.116475 containerd[1477]: time="2024-12-13T02:07:37.115478200Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 02:07:37.116475 containerd[1477]: time="2024-12-13T02:07:37.115488280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:07:37.116475 containerd[1477]: time="2024-12-13T02:07:37.115501440Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 02:07:37.116475 containerd[1477]: time="2024-12-13T02:07:37.115512120Z" level=info msg="NRI interface is disabled by configuration." 
Dec 13 02:07:37.116475 containerd[1477]: time="2024-12-13T02:07:37.115522480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 02:07:37.116640 containerd[1477]: time="2024-12-13T02:07:37.115893200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:07:37.116640 containerd[1477]: time="2024-12-13T02:07:37.115952200Z" level=info msg="Connect containerd service" Dec 13 02:07:37.116640 containerd[1477]: time="2024-12-13T02:07:37.115982440Z" level=info msg="using legacy CRI server" Dec 13 02:07:37.116640 containerd[1477]: time="2024-12-13T02:07:37.115989080Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 02:07:37.116640 containerd[1477]: time="2024-12-13T02:07:37.116104160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 02:07:37.117462 containerd[1477]: time="2024-12-13T02:07:37.117436680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" 
error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:07:37.117782 containerd[1477]: time="2024-12-13T02:07:37.117741600Z" level=info msg="Start subscribing containerd event" Dec 13 02:07:37.117971 containerd[1477]: time="2024-12-13T02:07:37.117954120Z" level=info msg="Start recovering state" Dec 13 02:07:37.118117 containerd[1477]: time="2024-12-13T02:07:37.118102000Z" level=info msg="Start event monitor" Dec 13 02:07:37.120049 containerd[1477]: time="2024-12-13T02:07:37.118179280Z" level=info msg="Start snapshots syncer" Dec 13 02:07:37.120049 containerd[1477]: time="2024-12-13T02:07:37.118195680Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:07:37.120049 containerd[1477]: time="2024-12-13T02:07:37.118208600Z" level=info msg="Start streaming server" Dec 13 02:07:37.120807 containerd[1477]: time="2024-12-13T02:07:37.120787360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 02:07:37.120979 containerd[1477]: time="2024-12-13T02:07:37.120955120Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:07:37.121252 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 02:07:37.122614 containerd[1477]: time="2024-12-13T02:07:37.122590360Z" level=info msg="containerd successfully booted in 0.069447s" Dec 13 02:07:37.277344 systemd-networkd[1361]: eth1: Gained IPv6LL Dec 13 02:07:37.278305 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection. Dec 13 02:07:37.282127 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 02:07:37.283842 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 02:07:37.294304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:07:37.298385 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 02:07:37.335641 sshd_keygen[1487]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 02:07:37.361100 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 02:07:37.366612 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 02:07:37.375367 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 02:07:37.397797 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:07:37.398035 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 02:07:37.408138 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 02:07:37.430671 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 02:07:37.439801 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 02:07:37.441882 tar[1460]: linux-arm64/LICENSE Dec 13 02:07:37.441989 tar[1460]: linux-arm64/README.md Dec 13 02:07:37.448454 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 02:07:37.449226 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 02:07:37.472112 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 02:07:37.983216 systemd-networkd[1361]: eth0: Gained IPv6LL Dec 13 02:07:37.983950 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection. Dec 13 02:07:38.000085 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:07:38.002396 systemd[1]: Reached target multi-user.target - Multi-User System. 
Dec 13 02:07:38.010300 systemd[1]: Startup finished in 762ms (kernel) + 5.127s (initrd) + 4.219s (userspace) = 10.109s. Dec 13 02:07:38.012419 (kubelet)[1573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:07:38.631813 kubelet[1573]: E1213 02:07:38.631718 1573 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:07:38.635827 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:07:38.636172 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:07:48.886574 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 02:07:48.895354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:07:49.002225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:07:49.009024 (kubelet)[1593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:07:49.050927 kubelet[1593]: E1213 02:07:49.050865 1593 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:07:49.053901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:07:49.054078 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:07:53.538238 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 02:07:53.543495 systemd[1]: Started sshd@0-78.47.95.53:22-159.223.178.117:43332.service - OpenSSH per-connection server daemon (159.223.178.117:43332). Dec 13 02:07:53.672300 sshd[1603]: Connection closed by 159.223.178.117 port 43332 Dec 13 02:07:53.673474 systemd[1]: sshd@0-78.47.95.53:22-159.223.178.117:43332.service: Deactivated successfully. Dec 13 02:07:59.305115 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 02:07:59.312372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:07:59.426285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:07:59.426935 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:07:59.478779 kubelet[1614]: E1213 02:07:59.478635 1614 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:07:59.482994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:07:59.483317 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:08:08.294037 systemd-timesyncd[1347]: Contacted time server 80.153.195.191:123 (2.flatcar.pool.ntp.org). 
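The kubelet crash-loop recorded above (and repeated at every scheduled restart that follows) comes down to the missing /var/lib/kubelet/config.yaml; that file is normally written during node bootstrap (for example by kubeadm) rather than shipped in the image, so the failures persist until the node is joined to a cluster. For orientation only, a minimal sketch of the kind of KubeletConfiguration that would eventually live at that path; the field values are illustrative assumptions, not taken from this host:

# Hypothetical minimal /var/lib/kubelet/config.yaml (illustrative sketch;
# on this node the file only appears after cluster bootstrap).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd            # consistent with SystemdCgroup:true in the containerd CRI config above
clusterDomain: cluster.local     # assumed default
clusterDNS:
  - 10.96.0.10                   # assumed kubeadm default, not confirmed by this log
authentication:
  anonymous:
    enabled: false
authorization:
  mode: Webhook
staticPodPath: /etc/kubernetes/manifests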
Dec 13 02:08:08.294149 systemd-timesyncd[1347]: Initial clock synchronization to Fri 2024-12-13 02:08:07.960225 UTC. Dec 13 02:08:09.647912 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 02:08:09.656422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:08:09.770377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:08:09.776004 (kubelet)[1630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:08:09.832300 kubelet[1630]: E1213 02:08:09.832240 1630 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:08:09.834811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:08:09.835080 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:08:19.897758 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 02:08:19.909304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:08:20.020715 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:08:20.031465 (kubelet)[1646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:08:20.073975 kubelet[1646]: E1213 02:08:20.073914 1646 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:08:20.076510 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:08:20.076685 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:08:21.902980 update_engine[1455]: I20241213 02:08:21.902855 1455 update_attempter.cc:509] Updating boot flags... Dec 13 02:08:21.944100 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1663) Dec 13 02:08:21.990229 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1667) Dec 13 02:08:22.039229 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1667) Dec 13 02:08:30.147753 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 02:08:30.155339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:08:30.264554 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 02:08:30.270471 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:08:30.316817 kubelet[1683]: E1213 02:08:30.316727 1683 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:08:30.320373 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:08:30.320745 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:08:40.397967 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 02:08:40.410332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:08:40.531307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:08:40.546605 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:08:40.595114 kubelet[1699]: E1213 02:08:40.594936 1699 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:08:40.598531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:08:40.598855 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:08:50.647441 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Dec 13 02:08:50.662759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:08:50.777301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:08:50.779312 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:08:50.821469 kubelet[1715]: E1213 02:08:50.821408 1715 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:08:50.824036 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:08:50.824355 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:09:00.897932 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Dec 13 02:09:00.904430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:09:01.039223 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 02:09:01.052498 (kubelet)[1731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:09:01.110759 kubelet[1731]: E1213 02:09:01.110710 1731 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:09:01.114656 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:09:01.114846 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:09:11.147636 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Dec 13 02:09:11.155403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:09:11.280757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:09:11.295500 (kubelet)[1747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:09:11.342540 kubelet[1747]: E1213 02:09:11.342453 1747 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:09:11.344670 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:09:11.344792 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:09:21.397968 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Dec 13 02:09:21.410899 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:09:21.522363 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:09:21.525317 (kubelet)[1763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:09:21.574863 kubelet[1763]: E1213 02:09:21.574559 1763 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:09:21.577488 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:09:21.577676 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:09:31.344773 systemd[1]: Started sshd@1-78.47.95.53:22-147.75.109.163:55204.service - OpenSSH per-connection server daemon (147.75.109.163:55204). Dec 13 02:09:31.647682 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Dec 13 02:09:31.657437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:09:31.785217 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 02:09:31.795388 (kubelet)[1781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:09:31.838286 kubelet[1781]: E1213 02:09:31.838228 1781 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:09:31.840623 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:09:31.840773 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:09:32.324253 sshd[1771]: Accepted publickey for core from 147.75.109.163 port 55204 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:09:32.328478 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:09:32.349883 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 02:09:32.356705 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 02:09:32.359709 systemd-logind[1454]: New session 1 of user core. Dec 13 02:09:32.368123 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 02:09:32.374006 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 02:09:32.389838 (systemd)[1791]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:09:32.502680 systemd[1791]: Queued start job for default target default.target. Dec 13 02:09:32.513331 systemd[1791]: Created slice app.slice - User Application Slice. Dec 13 02:09:32.513535 systemd[1791]: Reached target paths.target - Paths. Dec 13 02:09:32.513633 systemd[1791]: Reached target timers.target - Timers. Dec 13 02:09:32.515422 systemd[1791]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 02:09:32.536799 systemd[1791]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 02:09:32.537160 systemd[1791]: Reached target sockets.target - Sockets. Dec 13 02:09:32.537298 systemd[1791]: Reached target basic.target - Basic System. Dec 13 02:09:32.537453 systemd[1791]: Reached target default.target - Main User Target. Dec 13 02:09:32.537601 systemd[1791]: Startup finished in 140ms. Dec 13 02:09:32.537760 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 02:09:32.549416 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 02:09:33.248504 systemd[1]: Started sshd@2-78.47.95.53:22-147.75.109.163:55218.service - OpenSSH per-connection server daemon (147.75.109.163:55218). Dec 13 02:09:34.229142 sshd[1802]: Accepted publickey for core from 147.75.109.163 port 55218 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:09:34.231379 sshd[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:09:34.237217 systemd-logind[1454]: New session 2 of user core. Dec 13 02:09:34.248278 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 02:09:34.911352 sshd[1802]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:34.915179 systemd[1]: sshd@2-78.47.95.53:22-147.75.109.163:55218.service: Deactivated successfully. Dec 13 02:09:34.917369 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 02:09:34.919786 systemd-logind[1454]: Session 2 logged out. 
Waiting for processes to exit. Dec 13 02:09:34.921265 systemd-logind[1454]: Removed session 2. Dec 13 02:09:35.082546 systemd[1]: Started sshd@3-78.47.95.53:22-147.75.109.163:55226.service - OpenSSH per-connection server daemon (147.75.109.163:55226). Dec 13 02:09:36.075212 sshd[1809]: Accepted publickey for core from 147.75.109.163 port 55226 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:09:36.077620 sshd[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:09:36.083713 systemd-logind[1454]: New session 3 of user core. Dec 13 02:09:36.093425 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 02:09:36.755820 sshd[1809]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:36.761304 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit. Dec 13 02:09:36.761590 systemd[1]: sshd@3-78.47.95.53:22-147.75.109.163:55226.service: Deactivated successfully. Dec 13 02:09:36.763990 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 02:09:36.768597 systemd-logind[1454]: Removed session 3. Dec 13 02:09:36.933477 systemd[1]: Started sshd@4-78.47.95.53:22-147.75.109.163:49710.service - OpenSSH per-connection server daemon (147.75.109.163:49710). Dec 13 02:09:37.923950 sshd[1816]: Accepted publickey for core from 147.75.109.163 port 49710 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:09:37.925886 sshd[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:09:37.932286 systemd-logind[1454]: New session 4 of user core. Dec 13 02:09:37.940448 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 02:09:38.612822 sshd[1816]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:38.618536 systemd[1]: sshd@4-78.47.95.53:22-147.75.109.163:49710.service: Deactivated successfully. Dec 13 02:09:38.620688 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 02:09:38.622784 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit. Dec 13 02:09:38.624183 systemd-logind[1454]: Removed session 4. Dec 13 02:09:38.790461 systemd[1]: Started sshd@5-78.47.95.53:22-147.75.109.163:49712.service - OpenSSH per-connection server daemon (147.75.109.163:49712). Dec 13 02:09:39.765604 sshd[1823]: Accepted publickey for core from 147.75.109.163 port 49712 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:09:39.768385 sshd[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:09:39.774740 systemd-logind[1454]: New session 5 of user core. Dec 13 02:09:39.785386 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 02:09:40.323495 sudo[1826]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 02:09:40.323775 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 02:09:40.336809 sudo[1826]: pam_unix(sudo:session): session closed for user root Dec 13 02:09:40.498491 sshd[1823]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:40.504743 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit. Dec 13 02:09:40.505008 systemd[1]: sshd@5-78.47.95.53:22-147.75.109.163:49712.service: Deactivated successfully. Dec 13 02:09:40.507326 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 02:09:40.508353 systemd-logind[1454]: Removed session 5. 
Dec 13 02:09:40.674585 systemd[1]: Started sshd@6-78.47.95.53:22-147.75.109.163:49714.service - OpenSSH per-connection server daemon (147.75.109.163:49714). Dec 13 02:09:41.673997 sshd[1831]: Accepted publickey for core from 147.75.109.163 port 49714 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:09:41.676220 sshd[1831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:09:41.681998 systemd-logind[1454]: New session 6 of user core. Dec 13 02:09:41.688284 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 02:09:41.897798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Dec 13 02:09:41.905421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:09:42.034667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:09:42.042628 (kubelet)[1842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:09:42.087296 kubelet[1842]: E1213 02:09:42.087239 1842 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:09:42.089488 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:09:42.089625 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:09:42.200863 sudo[1852]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 02:09:42.202236 sudo[1852]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 02:09:42.207672 sudo[1852]: pam_unix(sudo:session): session closed for user root Dec 13 02:09:42.214989 sudo[1851]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 02:09:42.215458 sudo[1851]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 02:09:42.244571 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 02:09:42.247653 auditctl[1855]: No rules Dec 13 02:09:42.248207 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 02:09:42.248464 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 02:09:42.252747 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 02:09:42.296865 augenrules[1873]: No rules Dec 13 02:09:42.297462 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 02:09:42.300146 sudo[1851]: pam_unix(sudo:session): session closed for user root Dec 13 02:09:42.460420 sshd[1831]: pam_unix(sshd:session): session closed for user core Dec 13 02:09:42.465325 systemd-logind[1454]: Session 6 logged out. Waiting for processes to exit. Dec 13 02:09:42.467542 systemd[1]: sshd@6-78.47.95.53:22-147.75.109.163:49714.service: Deactivated successfully. Dec 13 02:09:42.470867 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 02:09:42.474142 systemd-logind[1454]: Removed session 6. Dec 13 02:09:42.640410 systemd[1]: Started sshd@7-78.47.95.53:22-147.75.109.163:49728.service - OpenSSH per-connection server daemon (147.75.109.163:49728). 
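Editor's note: the sudo entries above delete the default audit rule files and restart audit-rules.service, after which auditctl and augenrules both report "No rules". A hedged sketch of how the resulting audit state can be inspected and reloaded with the standard auditd tooling (the rule-file paths and unit name are the ones shown in the log):

    # Rules currently loaded in the kernel; expected to print "No rules" here
    sudo auditctl -l
    # Rule fragments are assembled from this directory by augenrules
    ls /etc/audit/rules.d/
    # Rebuild and load the combined rule set after editing the fragments
    sudo augenrules --load
    sudo systemctl restart audit-rules.service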
Dec 13 02:09:43.625676 sshd[1881]: Accepted publickey for core from 147.75.109.163 port 49728 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:09:43.627480 sshd[1881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:09:43.632826 systemd-logind[1454]: New session 7 of user core. Dec 13 02:09:43.640315 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 02:09:44.148847 sudo[1884]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 02:09:44.149700 sudo[1884]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 02:09:44.479445 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 02:09:44.479515 (dockerd)[1899]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 02:09:44.781540 dockerd[1899]: time="2024-12-13T02:09:44.781114112Z" level=info msg="Starting up" Dec 13 02:09:44.890695 dockerd[1899]: time="2024-12-13T02:09:44.890634588Z" level=info msg="Loading containers: start." Dec 13 02:09:45.012089 kernel: Initializing XFRM netlink socket Dec 13 02:09:45.088983 systemd-networkd[1361]: docker0: Link UP Dec 13 02:09:45.107291 dockerd[1899]: time="2024-12-13T02:09:45.107204933Z" level=info msg="Loading containers: done." Dec 13 02:09:45.126803 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2172247996-merged.mount: Deactivated successfully. Dec 13 02:09:45.129504 dockerd[1899]: time="2024-12-13T02:09:45.129434470Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 02:09:45.131193 dockerd[1899]: time="2024-12-13T02:09:45.129672400Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 02:09:45.131193 dockerd[1899]: time="2024-12-13T02:09:45.129829487Z" level=info msg="Daemon has completed initialization" Dec 13 02:09:45.164794 dockerd[1899]: time="2024-12-13T02:09:45.164413847Z" level=info msg="API listen on /run/docker.sock" Dec 13 02:09:45.164677 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 02:09:46.369643 containerd[1477]: time="2024-12-13T02:09:46.369377436Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 02:09:47.062245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3304721469.mount: Deactivated successfully. 
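Editor's note: from here the log is dominated by containerd pulling the v1.30.8 control-plane images (kube-apiserver, then controller-manager, scheduler, kube-proxy, coredns, pause and etcd). A hedged sketch of how the same set could be pre-pulled or inspected by hand; the version string comes from the log, and kubeadm plus crictl are assumed to be installed on the node:

    # Pre-pull everything kubeadm needs for this release in one step
    kubeadm config images pull --kubernetes-version v1.30.8
    # Or pull a single image directly through the CRI
    crictl pull registry.k8s.io/kube-apiserver:v1.30.8
    # Check what containerd already holds
    crictl images | grep registry.k8s.io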
Dec 13 02:09:50.409993 containerd[1477]: time="2024-12-13T02:09:50.409840025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:09:50.411002 containerd[1477]: time="2024-12-13T02:09:50.410971548Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=29864102" Dec 13 02:09:50.411711 containerd[1477]: time="2024-12-13T02:09:50.411662254Z" level=info msg="ImageCreate event name:\"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:09:50.414548 containerd[1477]: time="2024-12-13T02:09:50.414480162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:09:50.415960 containerd[1477]: time="2024-12-13T02:09:50.415696489Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"29860810\" in 4.04627517s" Dec 13 02:09:50.415960 containerd[1477]: time="2024-12-13T02:09:50.415765091Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\"" Dec 13 02:09:50.435505 containerd[1477]: time="2024-12-13T02:09:50.435465324Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 02:09:52.147571 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Dec 13 02:09:52.153437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:09:52.295312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:09:52.297307 (kubelet)[2110]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:09:52.355485 kubelet[2110]: E1213 02:09:52.355417 2110 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:09:52.358735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:09:52.358909 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
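Editor's note: the recurring kubelet.service failures ("open /var/lib/kubelet/config.yaml: no such file or directory") are expected on a node where kubeadm has not yet run, because that file is written by kubeadm init or kubeadm join; the unset KUBELET_EXTRA_ARGS/KUBELET_KUBEADM_ARGS warnings come from the same not-yet-bootstrapped state. Purely as an illustration of what eventually lands at that path, a minimal KubeletConfiguration might look like the sketch below (generic values, not the ones this node ends up using):

    # Illustration only - kubeadm normally writes this file during init/join
    cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    clusterDomain: cluster.local
    EOF
    sudo systemctl restart kubelet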
Dec 13 02:09:52.986827 containerd[1477]: time="2024-12-13T02:09:52.986781136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:09:52.989078 containerd[1477]: time="2024-12-13T02:09:52.989036658Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=26900714" Dec 13 02:09:52.990161 containerd[1477]: time="2024-12-13T02:09:52.990135978Z" level=info msg="ImageCreate event name:\"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:09:52.993970 containerd[1477]: time="2024-12-13T02:09:52.993938675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:09:52.995329 containerd[1477]: time="2024-12-13T02:09:52.994712943Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"28303015\" in 2.559207377s" Dec 13 02:09:52.995398 containerd[1477]: time="2024-12-13T02:09:52.995335126Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\"" Dec 13 02:09:53.021388 containerd[1477]: time="2024-12-13T02:09:53.021346407Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 02:09:54.739642 containerd[1477]: time="2024-12-13T02:09:54.739569198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:09:54.740705 containerd[1477]: time="2024-12-13T02:09:54.740673036Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=16164352" Dec 13 02:09:54.741435 containerd[1477]: time="2024-12-13T02:09:54.741380300Z" level=info msg="ImageCreate event name:\"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:09:54.744396 containerd[1477]: time="2024-12-13T02:09:54.744332882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:09:54.746588 containerd[1477]: time="2024-12-13T02:09:54.746087822Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"17566671\" in 1.724459004s" Dec 13 02:09:54.746588 containerd[1477]: time="2024-12-13T02:09:54.746139464Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\"" Dec 13 02:09:54.768197 
containerd[1477]: time="2024-12-13T02:09:54.768139178Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 02:09:55.812760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3323327661.mount: Deactivated successfully. Dec 13 02:09:56.465292 containerd[1477]: time="2024-12-13T02:09:56.465216837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:09:56.466861 containerd[1477]: time="2024-12-13T02:09:56.466434237Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662037" Dec 13 02:09:56.467984 containerd[1477]: time="2024-12-13T02:09:56.467936526Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:09:56.471766 containerd[1477]: time="2024-12-13T02:09:56.471672287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:09:56.472699 containerd[1477]: time="2024-12-13T02:09:56.472188424Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 1.703985644s" Dec 13 02:09:56.472699 containerd[1477]: time="2024-12-13T02:09:56.472223945Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Dec 13 02:09:56.494208 containerd[1477]: time="2024-12-13T02:09:56.493802126Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 02:09:57.087007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2914183341.mount: Deactivated successfully. 
Dec 13 02:09:57.841120 containerd[1477]: time="2024-12-13T02:09:57.841073119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:09:57.842988 containerd[1477]: time="2024-12-13T02:09:57.842939898Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Dec 13 02:09:57.844237 containerd[1477]: time="2024-12-13T02:09:57.844170297Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:09:57.847046 containerd[1477]: time="2024-12-13T02:09:57.846979426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:09:57.848577 containerd[1477]: time="2024-12-13T02:09:57.848140903Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.354273015s" Dec 13 02:09:57.848577 containerd[1477]: time="2024-12-13T02:09:57.848183264Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 02:09:57.871422 containerd[1477]: time="2024-12-13T02:09:57.871369757Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 02:09:58.372027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount424095270.mount: Deactivated successfully. 
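Editor's note: registry.k8s.io/pause:3.9 pulled above is the sandbox ("pause") image kubeadm expects, yet when the control-plane sandboxes are created later in this log containerd resolves pause:3.8, its built-in default, which suggests the CRI plugin's sandbox_image was left unset. A hedged sketch of the relevant containerd setting; the file path and TOML keys follow the stock containerd 1.7 layout and are not confirmed from this host:

    # /etc/containerd/config.toml (excerpt)
    #   [plugins."io.containerd.grpc.v1.cri"]
    #     sandbox_image = "registry.k8s.io/pause:3.9"
    # After editing, restart and confirm what the CRI reports:
    sudo systemctl restart containerd
    crictl info | grep -i sandbox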
Dec 13 02:09:58.379683 containerd[1477]: time="2024-12-13T02:09:58.379513356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:09:58.380701 containerd[1477]: time="2024-12-13T02:09:58.380661711Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Dec 13 02:09:58.381466 containerd[1477]: time="2024-12-13T02:09:58.381401694Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:09:58.383704 containerd[1477]: time="2024-12-13T02:09:58.383642363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:09:58.384647 containerd[1477]: time="2024-12-13T02:09:58.384509189Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 513.100551ms" Dec 13 02:09:58.384647 containerd[1477]: time="2024-12-13T02:09:58.384549591Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 02:09:58.409947 containerd[1477]: time="2024-12-13T02:09:58.409881891Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 02:09:59.027139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3427104024.mount: Deactivated successfully. Dec 13 02:10:02.398023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Dec 13 02:10:02.404455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:10:02.529759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:10:02.534528 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 02:10:02.580006 kubelet[2254]: E1213 02:10:02.579940 2254 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:10:02.583148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:10:02.583456 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 02:10:03.546170 containerd[1477]: time="2024-12-13T02:10:03.546119743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:03.547563 containerd[1477]: time="2024-12-13T02:10:03.547530701Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" Dec 13 02:10:03.548198 containerd[1477]: time="2024-12-13T02:10:03.547935272Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:03.551288 containerd[1477]: time="2024-12-13T02:10:03.551223920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:03.552664 containerd[1477]: time="2024-12-13T02:10:03.552508395Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 5.142582583s" Dec 13 02:10:03.552664 containerd[1477]: time="2024-12-13T02:10:03.552549356Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Dec 13 02:10:08.559889 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:10:08.569502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:10:08.599298 systemd[1]: Reloading requested from client PID 2332 ('systemctl') (unit session-7.scope)... Dec 13 02:10:08.599316 systemd[1]: Reloading... Dec 13 02:10:08.720208 zram_generator::config[2371]: No configuration found. Dec 13 02:10:08.815669 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:10:08.883695 systemd[1]: Reloading finished in 284 ms. Dec 13 02:10:08.947328 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 02:10:08.947458 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 02:10:08.948112 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:10:08.954471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:10:09.057855 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:10:09.067511 (kubelet)[2420]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 02:10:09.120490 kubelet[2420]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:10:09.122274 kubelet[2420]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
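Editor's note: with a config file finally in place the kubelet starts, but it warns that --container-runtime-endpoint (and, a line further on, --volume-plugin-dir) are deprecated flags that belong in the config file, and that --pod-infra-container-image is superseded by the CRI's own sandbox-image setting. A hedged sketch of moving the runtime endpoint into the config file; the field exists in recent KubeletConfiguration versions, and the socket path is the usual containerd default rather than something read from this host:

    # Prefer the config file over the deprecated flag
    cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF
    sudo systemctl restart kubelet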
Dec 13 02:10:09.122274 kubelet[2420]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:10:09.122274 kubelet[2420]: I1213 02:10:09.120885 2420 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:10:10.294588 kubelet[2420]: I1213 02:10:10.294534 2420 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 02:10:10.294588 kubelet[2420]: I1213 02:10:10.294567 2420 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:10:10.295004 kubelet[2420]: I1213 02:10:10.294766 2420 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 02:10:10.318522 kubelet[2420]: I1213 02:10:10.318364 2420 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:10:10.319253 kubelet[2420]: E1213 02:10:10.319230 2420 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://78.47.95.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 02:10:10.329285 kubelet[2420]: I1213 02:10:10.329162 2420 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 02:10:10.333113 kubelet[2420]: I1213 02:10:10.332499 2420 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:10:10.333113 kubelet[2420]: I1213 02:10:10.332555 2420 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-6-b597ddf835","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:10:10.333113 kubelet[2420]: I1213 02:10:10.332914 2420 topology_manager.go:138] "Creating 
topology manager with none policy" Dec 13 02:10:10.333113 kubelet[2420]: I1213 02:10:10.332924 2420 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:10:10.333371 kubelet[2420]: I1213 02:10:10.333157 2420 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:10:10.334733 kubelet[2420]: I1213 02:10:10.334694 2420 kubelet.go:400] "Attempting to sync node with API server" Dec 13 02:10:10.335017 kubelet[2420]: I1213 02:10:10.334992 2420 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:10:10.335365 kubelet[2420]: I1213 02:10:10.335345 2420 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:10:10.335495 kubelet[2420]: I1213 02:10:10.335475 2420 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:10:10.336140 kubelet[2420]: W1213 02:10:10.335610 2420 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.47.95.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-6-b597ddf835&limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 02:10:10.336140 kubelet[2420]: E1213 02:10:10.335686 2420 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://78.47.95.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-6-b597ddf835&limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 02:10:10.337179 kubelet[2420]: W1213 02:10:10.337113 2420 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.47.95.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 02:10:10.337366 kubelet[2420]: E1213 02:10:10.337345 2420 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://78.47.95.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 02:10:10.340096 kubelet[2420]: I1213 02:10:10.338151 2420 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 02:10:10.340096 kubelet[2420]: I1213 02:10:10.338633 2420 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:10:10.340096 kubelet[2420]: W1213 02:10:10.338761 2420 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
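Editor's note: the long nodeConfig dump above lists the kubelet's hard eviction thresholds in their internal form (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). The same values are normally expressed as the evictionHard map of the KubeletConfiguration; a sketch of that mapping, reproduced here only to make the JSON above easier to read:

    # KubeletConfiguration fragment equivalent to the HardEvictionThresholds above
    cat <<'EOF'
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"
    EOF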
Dec 13 02:10:10.340394 kubelet[2420]: I1213 02:10:10.340112 2420 server.go:1264] "Started kubelet" Dec 13 02:10:10.347797 kubelet[2420]: I1213 02:10:10.347230 2420 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:10:10.347797 kubelet[2420]: E1213 02:10:10.347393 2420 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://78.47.95.53:6443/api/v1/namespaces/default/events\": dial tcp 78.47.95.53:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-6-b597ddf835.18109a9274050e35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-6-b597ddf835,UID:ci-4081-2-1-6-b597ddf835,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-6-b597ddf835,},FirstTimestamp:2024-12-13 02:10:10.340032053 +0000 UTC m=+1.268065472,LastTimestamp:2024-12-13 02:10:10.340032053 +0000 UTC m=+1.268065472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-6-b597ddf835,}" Dec 13 02:10:10.352459 kubelet[2420]: I1213 02:10:10.352415 2420 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:10:10.353077 kubelet[2420]: I1213 02:10:10.353038 2420 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:10:10.353611 kubelet[2420]: I1213 02:10:10.353590 2420 server.go:455] "Adding debug handlers to kubelet server" Dec 13 02:10:10.354706 kubelet[2420]: I1213 02:10:10.354540 2420 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:10:10.354772 kubelet[2420]: I1213 02:10:10.354763 2420 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:10:10.355787 kubelet[2420]: E1213 02:10:10.355469 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.95.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-6-b597ddf835?timeout=10s\": dial tcp 78.47.95.53:6443: connect: connection refused" interval="200ms" Dec 13 02:10:10.357550 kubelet[2420]: I1213 02:10:10.357086 2420 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:10:10.357550 kubelet[2420]: I1213 02:10:10.357146 2420 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 02:10:10.357550 kubelet[2420]: W1213 02:10:10.357469 2420 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.47.95.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 02:10:10.357550 kubelet[2420]: E1213 02:10:10.357511 2420 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://78.47.95.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 02:10:10.358019 kubelet[2420]: I1213 02:10:10.357999 2420 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:10:10.358228 kubelet[2420]: I1213 02:10:10.358209 2420 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 
02:10:10.359657 kubelet[2420]: E1213 02:10:10.359632 2420 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:10:10.359943 kubelet[2420]: I1213 02:10:10.359927 2420 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:10:10.365099 kubelet[2420]: I1213 02:10:10.363828 2420 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:10:10.365099 kubelet[2420]: I1213 02:10:10.364764 2420 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 02:10:10.365099 kubelet[2420]: I1213 02:10:10.364915 2420 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:10:10.365099 kubelet[2420]: I1213 02:10:10.364932 2420 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 02:10:10.365099 kubelet[2420]: E1213 02:10:10.364968 2420 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:10:10.370880 kubelet[2420]: W1213 02:10:10.370808 2420 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.47.95.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 02:10:10.370880 kubelet[2420]: E1213 02:10:10.370874 2420 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://78.47.95.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 02:10:10.394711 kubelet[2420]: I1213 02:10:10.394687 2420 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:10:10.394711 kubelet[2420]: I1213 02:10:10.394706 2420 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:10:10.394868 kubelet[2420]: I1213 02:10:10.394728 2420 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:10:10.396428 kubelet[2420]: I1213 02:10:10.396402 2420 policy_none.go:49] "None policy: Start" Dec 13 02:10:10.397077 kubelet[2420]: I1213 02:10:10.397005 2420 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:10:10.397117 kubelet[2420]: I1213 02:10:10.397100 2420 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:10:10.404169 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 02:10:10.413321 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 02:10:10.418263 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
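Editor's note: the kubepods.slice, kubepods-burstable.slice and kubepods-besteffort.slice units created above are the pod QoS cgroups; they show up as systemd slices because the nodeConfig earlier reports CgroupDriver "systemd", so the kubelet delegates cgroup management to systemd. A hedged sketch of how the hierarchy can be inspected, plus the two settings that have to agree (the containerd key path follows the stock 1.7 config layout and is not verified on this host):

    # Show the pod cgroup hierarchy systemd just created
    systemd-cgls --no-pager | grep -A3 kubepods
    # kubelet side, in /var/lib/kubelet/config.yaml:   cgroupDriver: systemd
    # containerd side, in /etc/containerd/config.toml:
    #   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    #     SystemdCgroup = true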
Dec 13 02:10:10.432240 kubelet[2420]: I1213 02:10:10.431700 2420 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:10:10.432240 kubelet[2420]: I1213 02:10:10.432080 2420 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 02:10:10.432437 kubelet[2420]: I1213 02:10:10.432210 2420 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:10:10.436211 kubelet[2420]: E1213 02:10:10.436165 2420 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-6-b597ddf835\" not found" Dec 13 02:10:10.455546 kubelet[2420]: I1213 02:10:10.455503 2420 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-6-b597ddf835" Dec 13 02:10:10.457163 kubelet[2420]: E1213 02:10:10.457037 2420 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.95.53:6443/api/v1/nodes\": dial tcp 78.47.95.53:6443: connect: connection refused" node="ci-4081-2-1-6-b597ddf835" Dec 13 02:10:10.465415 kubelet[2420]: I1213 02:10:10.465361 2420 topology_manager.go:215] "Topology Admit Handler" podUID="e7acecebd0e41400069a473591a1dd87" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:10.468109 kubelet[2420]: I1213 02:10:10.467833 2420 topology_manager.go:215] "Topology Admit Handler" podUID="4f1f8b657b8bea88944bcdf775a0cd79" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:10.470347 kubelet[2420]: I1213 02:10:10.470290 2420 topology_manager.go:215] "Topology Admit Handler" podUID="efde51e1488c8772763049f44e4b0e14" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:10.482110 systemd[1]: Created slice kubepods-burstable-pode7acecebd0e41400069a473591a1dd87.slice - libcontainer container kubepods-burstable-pode7acecebd0e41400069a473591a1dd87.slice. Dec 13 02:10:10.493457 systemd[1]: Created slice kubepods-burstable-pod4f1f8b657b8bea88944bcdf775a0cd79.slice - libcontainer container kubepods-burstable-pod4f1f8b657b8bea88944bcdf775a0cd79.slice. Dec 13 02:10:10.509601 systemd[1]: Created slice kubepods-burstable-podefde51e1488c8772763049f44e4b0e14.slice - libcontainer container kubepods-burstable-podefde51e1488c8772763049f44e4b0e14.slice. 
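Editor's note: the three "Topology Admit Handler" entries above are the static pods the kubelet found under /etc/kubernetes/manifests (the staticPodPath logged earlier); each one gets its own kubepods-burstable-pod<uid>.slice. Purely as an illustration of the shape of such a manifest, a generic skeleton is sketched below; it is not the file kubeadm actually wrote on this node, and the cert path is only the usual kubeadm default:

    # /etc/kubernetes/manifests/kube-apiserver.yaml - illustrative skeleton only
    cat <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.30.8
        command: ["kube-apiserver", "--advertise-address=78.47.95.53"]
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
      volumes:
      - name: k8s-certs
        hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate
    EOF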
Dec 13 02:10:10.556934 kubelet[2420]: E1213 02:10:10.556752 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.95.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-6-b597ddf835?timeout=10s\": dial tcp 78.47.95.53:6443: connect: connection refused" interval="400ms" Dec 13 02:10:10.658693 kubelet[2420]: I1213 02:10:10.658251 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f1f8b657b8bea88944bcdf775a0cd79-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-6-b597ddf835\" (UID: \"4f1f8b657b8bea88944bcdf775a0cd79\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:10.658693 kubelet[2420]: I1213 02:10:10.658438 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f1f8b657b8bea88944bcdf775a0cd79-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-6-b597ddf835\" (UID: \"4f1f8b657b8bea88944bcdf775a0cd79\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:10.658693 kubelet[2420]: I1213 02:10:10.658541 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f1f8b657b8bea88944bcdf775a0cd79-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-6-b597ddf835\" (UID: \"4f1f8b657b8bea88944bcdf775a0cd79\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:10.658693 kubelet[2420]: I1213 02:10:10.658602 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/efde51e1488c8772763049f44e4b0e14-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-6-b597ddf835\" (UID: \"efde51e1488c8772763049f44e4b0e14\") " pod="kube-system/kube-scheduler-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:10.658693 kubelet[2420]: I1213 02:10:10.658637 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7acecebd0e41400069a473591a1dd87-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-6-b597ddf835\" (UID: \"e7acecebd0e41400069a473591a1dd87\") " pod="kube-system/kube-apiserver-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:10.658937 kubelet[2420]: I1213 02:10:10.658686 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f1f8b657b8bea88944bcdf775a0cd79-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-6-b597ddf835\" (UID: \"4f1f8b657b8bea88944bcdf775a0cd79\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:10.658937 kubelet[2420]: I1213 02:10:10.658720 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f1f8b657b8bea88944bcdf775a0cd79-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-6-b597ddf835\" (UID: \"4f1f8b657b8bea88944bcdf775a0cd79\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:10.658937 kubelet[2420]: I1213 02:10:10.658752 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e7acecebd0e41400069a473591a1dd87-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-6-b597ddf835\" (UID: \"e7acecebd0e41400069a473591a1dd87\") " pod="kube-system/kube-apiserver-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:10.658937 kubelet[2420]: I1213 02:10:10.658785 2420 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7acecebd0e41400069a473591a1dd87-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-6-b597ddf835\" (UID: \"e7acecebd0e41400069a473591a1dd87\") " pod="kube-system/kube-apiserver-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:10.660648 kubelet[2420]: I1213 02:10:10.660609 2420 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-6-b597ddf835" Dec 13 02:10:10.661147 kubelet[2420]: E1213 02:10:10.661111 2420 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.95.53:6443/api/v1/nodes\": dial tcp 78.47.95.53:6443: connect: connection refused" node="ci-4081-2-1-6-b597ddf835" Dec 13 02:10:10.793211 containerd[1477]: time="2024-12-13T02:10:10.793044067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-6-b597ddf835,Uid:e7acecebd0e41400069a473591a1dd87,Namespace:kube-system,Attempt:0,}" Dec 13 02:10:10.806008 containerd[1477]: time="2024-12-13T02:10:10.805872318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-6-b597ddf835,Uid:4f1f8b657b8bea88944bcdf775a0cd79,Namespace:kube-system,Attempt:0,}" Dec 13 02:10:10.815406 containerd[1477]: time="2024-12-13T02:10:10.814942403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-6-b597ddf835,Uid:efde51e1488c8772763049f44e4b0e14,Namespace:kube-system,Attempt:0,}" Dec 13 02:10:10.958035 kubelet[2420]: E1213 02:10:10.957974 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.95.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-6-b597ddf835?timeout=10s\": dial tcp 78.47.95.53:6443: connect: connection refused" interval="800ms" Dec 13 02:10:11.064146 kubelet[2420]: I1213 02:10:11.064105 2420 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-6-b597ddf835" Dec 13 02:10:11.064605 kubelet[2420]: E1213 02:10:11.064562 2420 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.95.53:6443/api/v1/nodes\": dial tcp 78.47.95.53:6443: connect: connection refused" node="ci-4081-2-1-6-b597ddf835" Dec 13 02:10:11.252987 kubelet[2420]: W1213 02:10:11.252887 2420 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://78.47.95.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 02:10:11.252987 kubelet[2420]: E1213 02:10:11.252996 2420 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://78.47.95.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 02:10:11.334427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1955224796.mount: Deactivated successfully. 
Dec 13 02:10:11.341125 containerd[1477]: time="2024-12-13T02:10:11.340587555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:10:11.341808 containerd[1477]: time="2024-12-13T02:10:11.341763141Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Dec 13 02:10:11.344822 containerd[1477]: time="2024-12-13T02:10:11.344632285Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:10:11.345780 containerd[1477]: time="2024-12-13T02:10:11.345742429Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:10:11.346636 containerd[1477]: time="2024-12-13T02:10:11.346600848Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 02:10:11.348024 containerd[1477]: time="2024-12-13T02:10:11.347984999Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:10:11.349014 containerd[1477]: time="2024-12-13T02:10:11.348826337Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 02:10:11.350083 containerd[1477]: time="2024-12-13T02:10:11.349985203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 02:10:11.351399 containerd[1477]: time="2024-12-13T02:10:11.351157309Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 536.113584ms" Dec 13 02:10:11.352262 containerd[1477]: time="2024-12-13T02:10:11.352215612Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 559.020621ms" Dec 13 02:10:11.375785 containerd[1477]: time="2024-12-13T02:10:11.375390844Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 569.402484ms" Dec 13 02:10:11.496075 kubelet[2420]: W1213 02:10:11.495897 2420 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://78.47.95.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 02:10:11.496075 
kubelet[2420]: E1213 02:10:11.496082 2420 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://78.47.95.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 02:10:11.497263 containerd[1477]: time="2024-12-13T02:10:11.497002490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:10:11.497401 containerd[1477]: time="2024-12-13T02:10:11.497219175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:10:11.497401 containerd[1477]: time="2024-12-13T02:10:11.497338898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:11.497489 containerd[1477]: time="2024-12-13T02:10:11.494930044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:10:11.497489 containerd[1477]: time="2024-12-13T02:10:11.497283936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:10:11.497489 containerd[1477]: time="2024-12-13T02:10:11.497393459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:11.497786 containerd[1477]: time="2024-12-13T02:10:11.497743146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:11.497864 containerd[1477]: time="2024-12-13T02:10:11.497811588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:11.499028 containerd[1477]: time="2024-12-13T02:10:11.498728128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:10:11.499028 containerd[1477]: time="2024-12-13T02:10:11.498799850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:10:11.499028 containerd[1477]: time="2024-12-13T02:10:11.498817930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:11.499028 containerd[1477]: time="2024-12-13T02:10:11.498945853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:11.513977 kubelet[2420]: E1213 02:10:11.513517 2420 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://78.47.95.53:6443/api/v1/namespaces/default/events\": dial tcp 78.47.95.53:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-6-b597ddf835.18109a9274050e35 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-6-b597ddf835,UID:ci-4081-2-1-6-b597ddf835,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-6-b597ddf835,},FirstTimestamp:2024-12-13 02:10:10.340032053 +0000 UTC m=+1.268065472,LastTimestamp:2024-12-13 02:10:10.340032053 +0000 UTC m=+1.268065472,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-6-b597ddf835,}" Dec 13 02:10:11.521307 systemd[1]: Started cri-containerd-3badae0f70c8dc8150fdd2355dae742f32ea33d20abfac97f9e5fb80a9079f5d.scope - libcontainer container 3badae0f70c8dc8150fdd2355dae742f32ea33d20abfac97f9e5fb80a9079f5d. Dec 13 02:10:11.524914 systemd[1]: Started cri-containerd-95dd8bb7471e66ec51f43a91edd8b5ea05256fad5a0e5b503796ec49e9b870b8.scope - libcontainer container 95dd8bb7471e66ec51f43a91edd8b5ea05256fad5a0e5b503796ec49e9b870b8. Dec 13 02:10:11.539261 systemd[1]: Started cri-containerd-20e6aafef9f6c66e1661e017f19a5e8ac4974c59d319426fc49eebfe6d5649cc.scope - libcontainer container 20e6aafef9f6c66e1661e017f19a5e8ac4974c59d319426fc49eebfe6d5649cc. Dec 13 02:10:11.582005 containerd[1477]: time="2024-12-13T02:10:11.581654560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-6-b597ddf835,Uid:4f1f8b657b8bea88944bcdf775a0cd79,Namespace:kube-system,Attempt:0,} returns sandbox id \"95dd8bb7471e66ec51f43a91edd8b5ea05256fad5a0e5b503796ec49e9b870b8\"" Dec 13 02:10:11.592534 containerd[1477]: time="2024-12-13T02:10:11.592389197Z" level=info msg="CreateContainer within sandbox \"95dd8bb7471e66ec51f43a91edd8b5ea05256fad5a0e5b503796ec49e9b870b8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 02:10:11.595341 containerd[1477]: time="2024-12-13T02:10:11.595237500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-6-b597ddf835,Uid:efde51e1488c8772763049f44e4b0e14,Namespace:kube-system,Attempt:0,} returns sandbox id \"3badae0f70c8dc8150fdd2355dae742f32ea33d20abfac97f9e5fb80a9079f5d\"" Dec 13 02:10:11.599842 containerd[1477]: time="2024-12-13T02:10:11.599707559Z" level=info msg="CreateContainer within sandbox \"3badae0f70c8dc8150fdd2355dae742f32ea33d20abfac97f9e5fb80a9079f5d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 02:10:11.609007 containerd[1477]: time="2024-12-13T02:10:11.608965243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-6-b597ddf835,Uid:e7acecebd0e41400069a473591a1dd87,Namespace:kube-system,Attempt:0,} returns sandbox id \"20e6aafef9f6c66e1661e017f19a5e8ac4974c59d319426fc49eebfe6d5649cc\"" Dec 13 02:10:11.615509 containerd[1477]: time="2024-12-13T02:10:11.615458626Z" level=info msg="CreateContainer within sandbox \"20e6aafef9f6c66e1661e017f19a5e8ac4974c59d319426fc49eebfe6d5649cc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 02:10:11.626475 containerd[1477]: 
time="2024-12-13T02:10:11.626304586Z" level=info msg="CreateContainer within sandbox \"95dd8bb7471e66ec51f43a91edd8b5ea05256fad5a0e5b503796ec49e9b870b8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d9ea98611d191f70f7a7e9086ce02a1334c83fb4a9ce965dd0139619c3e35d11\"" Dec 13 02:10:11.627570 containerd[1477]: time="2024-12-13T02:10:11.627377370Z" level=info msg="StartContainer for \"d9ea98611d191f70f7a7e9086ce02a1334c83fb4a9ce965dd0139619c3e35d11\"" Dec 13 02:10:11.631159 containerd[1477]: time="2024-12-13T02:10:11.630977529Z" level=info msg="CreateContainer within sandbox \"3badae0f70c8dc8150fdd2355dae742f32ea33d20abfac97f9e5fb80a9079f5d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2c6c5b44a653175e5500b9e9412166705f63991366a0d331575cbca5b0aa208a\"" Dec 13 02:10:11.632041 containerd[1477]: time="2024-12-13T02:10:11.631851148Z" level=info msg="StartContainer for \"2c6c5b44a653175e5500b9e9412166705f63991366a0d331575cbca5b0aa208a\"" Dec 13 02:10:11.643830 containerd[1477]: time="2024-12-13T02:10:11.643736131Z" level=info msg="CreateContainer within sandbox \"20e6aafef9f6c66e1661e017f19a5e8ac4974c59d319426fc49eebfe6d5649cc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e07246e07df2941da37eecbd431a6e20a2fe39cc24711311dc2b28916999a564\"" Dec 13 02:10:11.645090 containerd[1477]: time="2024-12-13T02:10:11.644582630Z" level=info msg="StartContainer for \"e07246e07df2941da37eecbd431a6e20a2fe39cc24711311dc2b28916999a564\"" Dec 13 02:10:11.665320 systemd[1]: Started cri-containerd-d9ea98611d191f70f7a7e9086ce02a1334c83fb4a9ce965dd0139619c3e35d11.scope - libcontainer container d9ea98611d191f70f7a7e9086ce02a1334c83fb4a9ce965dd0139619c3e35d11. Dec 13 02:10:11.692249 systemd[1]: Started cri-containerd-2c6c5b44a653175e5500b9e9412166705f63991366a0d331575cbca5b0aa208a.scope - libcontainer container 2c6c5b44a653175e5500b9e9412166705f63991366a0d331575cbca5b0aa208a. Dec 13 02:10:11.704314 systemd[1]: Started cri-containerd-e07246e07df2941da37eecbd431a6e20a2fe39cc24711311dc2b28916999a564.scope - libcontainer container e07246e07df2941da37eecbd431a6e20a2fe39cc24711311dc2b28916999a564. 
Dec 13 02:10:11.728243 containerd[1477]: time="2024-12-13T02:10:11.727679905Z" level=info msg="StartContainer for \"d9ea98611d191f70f7a7e9086ce02a1334c83fb4a9ce965dd0139619c3e35d11\" returns successfully" Dec 13 02:10:11.748457 containerd[1477]: time="2024-12-13T02:10:11.748413283Z" level=info msg="StartContainer for \"e07246e07df2941da37eecbd431a6e20a2fe39cc24711311dc2b28916999a564\" returns successfully" Dec 13 02:10:11.760376 kubelet[2420]: E1213 02:10:11.760314 2420 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.47.95.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-6-b597ddf835?timeout=10s\": dial tcp 78.47.95.53:6443: connect: connection refused" interval="1.6s" Dec 13 02:10:11.770809 kubelet[2420]: W1213 02:10:11.770563 2420 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://78.47.95.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-6-b597ddf835&limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 02:10:11.770809 kubelet[2420]: E1213 02:10:11.770628 2420 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://78.47.95.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-6-b597ddf835&limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 02:10:11.790344 containerd[1477]: time="2024-12-13T02:10:11.790290888Z" level=info msg="StartContainer for \"2c6c5b44a653175e5500b9e9412166705f63991366a0d331575cbca5b0aa208a\" returns successfully" Dec 13 02:10:11.867279 kubelet[2420]: I1213 02:10:11.867243 2420 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-6-b597ddf835" Dec 13 02:10:11.867678 kubelet[2420]: E1213 02:10:11.867641 2420 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.47.95.53:6443/api/v1/nodes\": dial tcp 78.47.95.53:6443: connect: connection refused" node="ci-4081-2-1-6-b597ddf835" Dec 13 02:10:11.902508 kubelet[2420]: W1213 02:10:11.902403 2420 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://78.47.95.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 02:10:11.902508 kubelet[2420]: E1213 02:10:11.902472 2420 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://78.47.95.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.47.95.53:6443: connect: connection refused Dec 13 02:10:13.470095 kubelet[2420]: I1213 02:10:13.469542 2420 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-6-b597ddf835" Dec 13 02:10:14.574629 kubelet[2420]: E1213 02:10:14.574577 2420 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-6-b597ddf835\" not found" node="ci-4081-2-1-6-b597ddf835" Dec 13 02:10:14.634297 kubelet[2420]: I1213 02:10:14.633983 2420 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-6-b597ddf835" Dec 13 02:10:15.002096 kubelet[2420]: E1213 02:10:15.002042 2420 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-2-1-6-b597ddf835\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:15.342210 kubelet[2420]: I1213 02:10:15.339946 2420 apiserver.go:52] "Watching apiserver" Dec 13 02:10:15.357749 kubelet[2420]: I1213 02:10:15.357691 2420 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 02:10:17.040034 systemd[1]: Reloading requested from client PID 2691 ('systemctl') (unit session-7.scope)... Dec 13 02:10:17.040373 systemd[1]: Reloading... Dec 13 02:10:17.145241 zram_generator::config[2734]: No configuration found. Dec 13 02:10:17.242376 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:10:17.321969 systemd[1]: Reloading finished in 281 ms. Dec 13 02:10:17.354746 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:10:17.355535 kubelet[2420]: I1213 02:10:17.355300 2420 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:10:17.367365 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:10:17.367658 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:10:17.367733 systemd[1]: kubelet.service: Consumed 1.703s CPU time, 110.0M memory peak, 0B memory swap peak. Dec 13 02:10:17.374742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 02:10:17.509168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 02:10:17.516273 (kubelet)[2776]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 02:10:17.570375 kubelet[2776]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:10:17.570713 kubelet[2776]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:10:17.570757 kubelet[2776]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:10:17.570881 kubelet[2776]: I1213 02:10:17.570846 2776 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:10:17.577420 kubelet[2776]: I1213 02:10:17.576883 2776 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 02:10:17.577420 kubelet[2776]: I1213 02:10:17.577350 2776 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:10:17.577589 kubelet[2776]: I1213 02:10:17.577571 2776 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 02:10:17.579189 kubelet[2776]: I1213 02:10:17.579158 2776 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 13 02:10:17.581011 kubelet[2776]: I1213 02:10:17.580880 2776 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:10:17.588674 kubelet[2776]: I1213 02:10:17.588639 2776 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 02:10:17.588876 kubelet[2776]: I1213 02:10:17.588818 2776 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:10:17.589033 kubelet[2776]: I1213 02:10:17.588844 2776 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-6-b597ddf835","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:10:17.589033 kubelet[2776]: I1213 02:10:17.589020 2776 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:10:17.589033 kubelet[2776]: I1213 02:10:17.589029 2776 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:10:17.590524 kubelet[2776]: I1213 02:10:17.589102 2776 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:10:17.590524 kubelet[2776]: I1213 02:10:17.589213 2776 kubelet.go:400] "Attempting to sync node with API server" Dec 13 02:10:17.590524 kubelet[2776]: I1213 02:10:17.589228 2776 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:10:17.590524 kubelet[2776]: I1213 02:10:17.589272 2776 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:10:17.590524 kubelet[2776]: I1213 02:10:17.589290 2776 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:10:17.591615 kubelet[2776]: I1213 02:10:17.591555 2776 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 02:10:17.592143 kubelet[2776]: I1213 02:10:17.592115 2776 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:10:17.593788 kubelet[2776]: I1213 02:10:17.593038 2776 server.go:1264] "Started kubelet" Dec 13 02:10:17.595617 
kubelet[2776]: I1213 02:10:17.595559 2776 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:10:17.597127 kubelet[2776]: I1213 02:10:17.596147 2776 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:10:17.599090 kubelet[2776]: I1213 02:10:17.597632 2776 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:10:17.600807 kubelet[2776]: I1213 02:10:17.598220 2776 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:10:17.602092 kubelet[2776]: I1213 02:10:17.601503 2776 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:10:17.602092 kubelet[2776]: I1213 02:10:17.596676 2776 server.go:455] "Adding debug handlers to kubelet server" Dec 13 02:10:17.605211 kubelet[2776]: I1213 02:10:17.604326 2776 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 02:10:17.605211 kubelet[2776]: I1213 02:10:17.604465 2776 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:10:17.623833 kubelet[2776]: I1213 02:10:17.623795 2776 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:10:17.625008 kubelet[2776]: I1213 02:10:17.624984 2776 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 02:10:17.625168 kubelet[2776]: I1213 02:10:17.625156 2776 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:10:17.625241 kubelet[2776]: I1213 02:10:17.625233 2776 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 02:10:17.625354 kubelet[2776]: E1213 02:10:17.625336 2776 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:10:17.641916 kubelet[2776]: I1213 02:10:17.640142 2776 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:10:17.641916 kubelet[2776]: I1213 02:10:17.640235 2776 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:10:17.642109 kubelet[2776]: I1213 02:10:17.642014 2776 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:10:17.686731 kubelet[2776]: I1213 02:10:17.686698 2776 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:10:17.686731 kubelet[2776]: I1213 02:10:17.686722 2776 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:10:17.686731 kubelet[2776]: I1213 02:10:17.686745 2776 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:10:17.687035 kubelet[2776]: I1213 02:10:17.686937 2776 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 02:10:17.687035 kubelet[2776]: I1213 02:10:17.686974 2776 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 02:10:17.687035 kubelet[2776]: I1213 02:10:17.687000 2776 policy_none.go:49] "None policy: Start" Dec 13 02:10:17.688402 kubelet[2776]: I1213 02:10:17.687740 2776 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:10:17.688402 kubelet[2776]: I1213 02:10:17.687765 2776 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:10:17.688402 kubelet[2776]: I1213 02:10:17.687900 2776 state_mem.go:75] "Updated machine memory state" Dec 13 02:10:17.693213 kubelet[2776]: I1213 02:10:17.693188 2776 manager.go:479] "Failed to read 
data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:10:17.694176 kubelet[2776]: I1213 02:10:17.693701 2776 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 02:10:17.694176 kubelet[2776]: I1213 02:10:17.693820 2776 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:10:17.714766 kubelet[2776]: I1213 02:10:17.714630 2776 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-6-b597ddf835" Dec 13 02:10:17.723378 kubelet[2776]: I1213 02:10:17.723335 2776 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-1-6-b597ddf835" Dec 13 02:10:17.725182 kubelet[2776]: I1213 02:10:17.723434 2776 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-6-b597ddf835" Dec 13 02:10:17.725873 kubelet[2776]: I1213 02:10:17.725723 2776 topology_manager.go:215] "Topology Admit Handler" podUID="e7acecebd0e41400069a473591a1dd87" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:17.725988 kubelet[2776]: I1213 02:10:17.725937 2776 topology_manager.go:215] "Topology Admit Handler" podUID="4f1f8b657b8bea88944bcdf775a0cd79" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:17.726078 kubelet[2776]: I1213 02:10:17.725989 2776 topology_manager.go:215] "Topology Admit Handler" podUID="efde51e1488c8772763049f44e4b0e14" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:17.740191 kubelet[2776]: E1213 02:10:17.740141 2776 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-2-1-6-b597ddf835\" already exists" pod="kube-system/kube-controller-manager-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:17.740297 kubelet[2776]: E1213 02:10:17.740277 2776 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-6-b597ddf835\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:17.806394 kubelet[2776]: I1213 02:10:17.806283 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f1f8b657b8bea88944bcdf775a0cd79-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-6-b597ddf835\" (UID: \"4f1f8b657b8bea88944bcdf775a0cd79\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:17.806394 kubelet[2776]: I1213 02:10:17.806365 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f1f8b657b8bea88944bcdf775a0cd79-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-6-b597ddf835\" (UID: \"4f1f8b657b8bea88944bcdf775a0cd79\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:17.806394 kubelet[2776]: I1213 02:10:17.806418 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f1f8b657b8bea88944bcdf775a0cd79-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-6-b597ddf835\" (UID: \"4f1f8b657b8bea88944bcdf775a0cd79\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:17.806864 kubelet[2776]: I1213 02:10:17.806497 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e7acecebd0e41400069a473591a1dd87-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-6-b597ddf835\" (UID: \"e7acecebd0e41400069a473591a1dd87\") " pod="kube-system/kube-apiserver-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:17.806864 kubelet[2776]: I1213 02:10:17.806569 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e7acecebd0e41400069a473591a1dd87-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-6-b597ddf835\" (UID: \"e7acecebd0e41400069a473591a1dd87\") " pod="kube-system/kube-apiserver-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:17.806864 kubelet[2776]: I1213 02:10:17.806626 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e7acecebd0e41400069a473591a1dd87-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-6-b597ddf835\" (UID: \"e7acecebd0e41400069a473591a1dd87\") " pod="kube-system/kube-apiserver-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:17.806864 kubelet[2776]: I1213 02:10:17.806665 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f1f8b657b8bea88944bcdf775a0cd79-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-6-b597ddf835\" (UID: \"4f1f8b657b8bea88944bcdf775a0cd79\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:17.806864 kubelet[2776]: I1213 02:10:17.806707 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f1f8b657b8bea88944bcdf775a0cd79-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-6-b597ddf835\" (UID: \"4f1f8b657b8bea88944bcdf775a0cd79\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:17.807181 kubelet[2776]: I1213 02:10:17.806745 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/efde51e1488c8772763049f44e4b0e14-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-6-b597ddf835\" (UID: \"efde51e1488c8772763049f44e4b0e14\") " pod="kube-system/kube-scheduler-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:18.591504 kubelet[2776]: I1213 02:10:18.590082 2776 apiserver.go:52] "Watching apiserver" Dec 13 02:10:18.605567 kubelet[2776]: I1213 02:10:18.605459 2776 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 02:10:18.693705 kubelet[2776]: E1213 02:10:18.693338 2776 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-6-b597ddf835\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-1-6-b597ddf835" Dec 13 02:10:18.761737 kubelet[2776]: I1213 02:10:18.761673 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-6-b597ddf835" podStartSLOduration=1.761655306 podStartE2EDuration="1.761655306s" podCreationTimestamp="2024-12-13 02:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:10:18.714109657 +0000 UTC m=+1.193151235" watchObservedRunningTime="2024-12-13 02:10:18.761655306 +0000 UTC m=+1.240696884" Dec 13 02:10:18.805538 kubelet[2776]: I1213 02:10:18.805386 2776 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-6-b597ddf835" podStartSLOduration=1.8053654030000001 podStartE2EDuration="1.805365403s" podCreationTimestamp="2024-12-13 02:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:10:18.762259797 +0000 UTC m=+1.241301375" watchObservedRunningTime="2024-12-13 02:10:18.805365403 +0000 UTC m=+1.284406981" Dec 13 02:10:18.832413 kubelet[2776]: I1213 02:10:18.832284 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-6-b597ddf835" podStartSLOduration=3.832264345 podStartE2EDuration="3.832264345s" podCreationTimestamp="2024-12-13 02:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:10:18.806674827 +0000 UTC m=+1.285716405" watchObservedRunningTime="2024-12-13 02:10:18.832264345 +0000 UTC m=+1.311305883" Dec 13 02:10:22.875631 sudo[1884]: pam_unix(sudo:session): session closed for user root Dec 13 02:10:23.036092 sshd[1881]: pam_unix(sshd:session): session closed for user core Dec 13 02:10:23.043428 systemd[1]: sshd@7-78.47.95.53:22-147.75.109.163:49728.service: Deactivated successfully. Dec 13 02:10:23.045524 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 02:10:23.045741 systemd[1]: session-7.scope: Consumed 6.901s CPU time, 189.0M memory peak, 0B memory swap peak. Dec 13 02:10:23.046516 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit. Dec 13 02:10:23.048037 systemd-logind[1454]: Removed session 7. Dec 13 02:10:24.903201 update_engine[1455]: I20241213 02:10:24.903105 1455 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 02:10:24.903201 update_engine[1455]: I20241213 02:10:24.903163 1455 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 02:10:24.903901 update_engine[1455]: I20241213 02:10:24.903394 1455 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 02:10:24.903901 update_engine[1455]: I20241213 02:10:24.903786 1455 omaha_request_params.cc:62] Current group set to stable Dec 13 02:10:24.903901 update_engine[1455]: I20241213 02:10:24.903892 1455 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 02:10:24.903901 update_engine[1455]: I20241213 02:10:24.903903 1455 update_attempter.cc:643] Scheduling an action processor start. 
Dec 13 02:10:24.904118 update_engine[1455]: I20241213 02:10:24.903919 1455 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 02:10:24.904118 update_engine[1455]: I20241213 02:10:24.903949 1455 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 02:10:24.904118 update_engine[1455]: I20241213 02:10:24.904002 1455 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 02:10:24.904118 update_engine[1455]: I20241213 02:10:24.904011 1455 omaha_request_action.cc:272] Request: Dec 13 02:10:24.904118 update_engine[1455]: Dec 13 02:10:24.904118 update_engine[1455]: Dec 13 02:10:24.904118 update_engine[1455]: Dec 13 02:10:24.904118 update_engine[1455]: Dec 13 02:10:24.904118 update_engine[1455]: Dec 13 02:10:24.904118 update_engine[1455]: Dec 13 02:10:24.904118 update_engine[1455]: Dec 13 02:10:24.904118 update_engine[1455]: Dec 13 02:10:24.904118 update_engine[1455]: I20241213 02:10:24.904017 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 02:10:24.905256 locksmithd[1496]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 02:10:24.905877 update_engine[1455]: I20241213 02:10:24.905822 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 02:10:24.906516 update_engine[1455]: I20241213 02:10:24.906426 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 02:10:24.907371 update_engine[1455]: E20241213 02:10:24.907323 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 02:10:24.907481 update_engine[1455]: I20241213 02:10:24.907418 1455 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 02:10:31.616586 kubelet[2776]: I1213 02:10:31.616547 2776 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 02:10:31.616961 containerd[1477]: time="2024-12-13T02:10:31.616886605Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 02:10:31.617338 kubelet[2776]: I1213 02:10:31.617310 2776 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 02:10:32.482164 kubelet[2776]: I1213 02:10:32.482098 2776 topology_manager.go:215] "Topology Admit Handler" podUID="9f20b879-0d1c-4179-a665-d74d3706432c" podNamespace="kube-system" podName="kube-proxy-pvqmr" Dec 13 02:10:32.494274 systemd[1]: Created slice kubepods-besteffort-pod9f20b879_0d1c_4179_a665_d74d3706432c.slice - libcontainer container kubepods-besteffort-pod9f20b879_0d1c_4179_a665_d74d3706432c.slice. 
Dec 13 02:10:32.498536 kubelet[2776]: I1213 02:10:32.498381 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f20b879-0d1c-4179-a665-d74d3706432c-xtables-lock\") pod \"kube-proxy-pvqmr\" (UID: \"9f20b879-0d1c-4179-a665-d74d3706432c\") " pod="kube-system/kube-proxy-pvqmr" Dec 13 02:10:32.498536 kubelet[2776]: I1213 02:10:32.498416 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f20b879-0d1c-4179-a665-d74d3706432c-lib-modules\") pod \"kube-proxy-pvqmr\" (UID: \"9f20b879-0d1c-4179-a665-d74d3706432c\") " pod="kube-system/kube-proxy-pvqmr" Dec 13 02:10:32.498536 kubelet[2776]: I1213 02:10:32.498441 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztzgh\" (UniqueName: \"kubernetes.io/projected/9f20b879-0d1c-4179-a665-d74d3706432c-kube-api-access-ztzgh\") pod \"kube-proxy-pvqmr\" (UID: \"9f20b879-0d1c-4179-a665-d74d3706432c\") " pod="kube-system/kube-proxy-pvqmr" Dec 13 02:10:32.498536 kubelet[2776]: I1213 02:10:32.498473 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9f20b879-0d1c-4179-a665-d74d3706432c-kube-proxy\") pod \"kube-proxy-pvqmr\" (UID: \"9f20b879-0d1c-4179-a665-d74d3706432c\") " pod="kube-system/kube-proxy-pvqmr" Dec 13 02:10:32.760922 kubelet[2776]: I1213 02:10:32.760751 2776 topology_manager.go:215] "Topology Admit Handler" podUID="03671b8d-753c-4450-a3db-86e39179fd19" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-hdvvm" Dec 13 02:10:32.783006 systemd[1]: Created slice kubepods-besteffort-pod03671b8d_753c_4450_a3db_86e39179fd19.slice - libcontainer container kubepods-besteffort-pod03671b8d_753c_4450_a3db_86e39179fd19.slice. Dec 13 02:10:32.800755 kubelet[2776]: I1213 02:10:32.800704 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd5zw\" (UniqueName: \"kubernetes.io/projected/03671b8d-753c-4450-a3db-86e39179fd19-kube-api-access-fd5zw\") pod \"tigera-operator-7bc55997bb-hdvvm\" (UID: \"03671b8d-753c-4450-a3db-86e39179fd19\") " pod="tigera-operator/tigera-operator-7bc55997bb-hdvvm" Dec 13 02:10:32.800755 kubelet[2776]: I1213 02:10:32.800823 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/03671b8d-753c-4450-a3db-86e39179fd19-var-lib-calico\") pod \"tigera-operator-7bc55997bb-hdvvm\" (UID: \"03671b8d-753c-4450-a3db-86e39179fd19\") " pod="tigera-operator/tigera-operator-7bc55997bb-hdvvm" Dec 13 02:10:32.801869 containerd[1477]: time="2024-12-13T02:10:32.801825504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pvqmr,Uid:9f20b879-0d1c-4179-a665-d74d3706432c,Namespace:kube-system,Attempt:0,}" Dec 13 02:10:32.829276 containerd[1477]: time="2024-12-13T02:10:32.829098920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:10:32.829276 containerd[1477]: time="2024-12-13T02:10:32.829173601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:10:32.829276 containerd[1477]: time="2024-12-13T02:10:32.829190681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:32.829983 containerd[1477]: time="2024-12-13T02:10:32.829757849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:32.851264 systemd[1]: Started cri-containerd-f403bd385083707fcc2e160ffa071651b7073c4088229d6acdfdbb6c9b712141.scope - libcontainer container f403bd385083707fcc2e160ffa071651b7073c4088229d6acdfdbb6c9b712141. Dec 13 02:10:32.876794 containerd[1477]: time="2024-12-13T02:10:32.876752177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pvqmr,Uid:9f20b879-0d1c-4179-a665-d74d3706432c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f403bd385083707fcc2e160ffa071651b7073c4088229d6acdfdbb6c9b712141\"" Dec 13 02:10:32.881873 containerd[1477]: time="2024-12-13T02:10:32.881832408Z" level=info msg="CreateContainer within sandbox \"f403bd385083707fcc2e160ffa071651b7073c4088229d6acdfdbb6c9b712141\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:10:32.901876 containerd[1477]: time="2024-12-13T02:10:32.901763403Z" level=info msg="CreateContainer within sandbox \"f403bd385083707fcc2e160ffa071651b7073c4088229d6acdfdbb6c9b712141\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a8c370bbfcd1041f9b711985ddbd6ff662aeb604c08ceb065404fd4fa22c0298\"" Dec 13 02:10:32.903022 containerd[1477]: time="2024-12-13T02:10:32.902631494Z" level=info msg="StartContainer for \"a8c370bbfcd1041f9b711985ddbd6ff662aeb604c08ceb065404fd4fa22c0298\"" Dec 13 02:10:32.935352 systemd[1]: Started cri-containerd-a8c370bbfcd1041f9b711985ddbd6ff662aeb604c08ceb065404fd4fa22c0298.scope - libcontainer container a8c370bbfcd1041f9b711985ddbd6ff662aeb604c08ceb065404fd4fa22c0298. Dec 13 02:10:32.974795 containerd[1477]: time="2024-12-13T02:10:32.974745970Z" level=info msg="StartContainer for \"a8c370bbfcd1041f9b711985ddbd6ff662aeb604c08ceb065404fd4fa22c0298\" returns successfully" Dec 13 02:10:33.092301 containerd[1477]: time="2024-12-13T02:10:33.091745759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-hdvvm,Uid:03671b8d-753c-4450-a3db-86e39179fd19,Namespace:tigera-operator,Attempt:0,}" Dec 13 02:10:33.125034 containerd[1477]: time="2024-12-13T02:10:33.124205358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:10:33.125034 containerd[1477]: time="2024-12-13T02:10:33.124690965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:10:33.125034 containerd[1477]: time="2024-12-13T02:10:33.124705645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:33.125034 containerd[1477]: time="2024-12-13T02:10:33.124810446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:33.149293 systemd[1]: Started cri-containerd-9937092571709b39883fa5729e03c5b72e7ea2eca75d5a52c53d0017497f9281.scope - libcontainer container 9937092571709b39883fa5729e03c5b72e7ea2eca75d5a52c53d0017497f9281. 
Dec 13 02:10:33.185590 containerd[1477]: time="2024-12-13T02:10:33.185177983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-hdvvm,Uid:03671b8d-753c-4450-a3db-86e39179fd19,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9937092571709b39883fa5729e03c5b72e7ea2eca75d5a52c53d0017497f9281\"" Dec 13 02:10:33.188736 containerd[1477]: time="2024-12-13T02:10:33.188380826Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 02:10:33.723120 kubelet[2776]: I1213 02:10:33.722879 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pvqmr" podStartSLOduration=1.722861055 podStartE2EDuration="1.722861055s" podCreationTimestamp="2024-12-13 02:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:10:33.722775614 +0000 UTC m=+16.201817192" watchObservedRunningTime="2024-12-13 02:10:33.722861055 +0000 UTC m=+16.201902633" Dec 13 02:10:34.904231 update_engine[1455]: I20241213 02:10:34.903368 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 02:10:34.904231 update_engine[1455]: I20241213 02:10:34.903728 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 02:10:34.904231 update_engine[1455]: I20241213 02:10:34.903991 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 02:10:34.905008 update_engine[1455]: E20241213 02:10:34.904723 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 02:10:34.905008 update_engine[1455]: I20241213 02:10:34.904796 1455 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 13 02:10:35.480520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3224521314.mount: Deactivated successfully. 
Dec 13 02:10:37.168472 containerd[1477]: time="2024-12-13T02:10:37.168404213Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:37.169399 containerd[1477]: time="2024-12-13T02:10:37.169360505Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125964" Dec 13 02:10:37.170103 containerd[1477]: time="2024-12-13T02:10:37.170028593Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:37.172949 containerd[1477]: time="2024-12-13T02:10:37.172562705Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:37.173479 containerd[1477]: time="2024-12-13T02:10:37.173446636Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 3.985007569s" Dec 13 02:10:37.173537 containerd[1477]: time="2024-12-13T02:10:37.173480596Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Dec 13 02:10:37.177729 containerd[1477]: time="2024-12-13T02:10:37.177680409Z" level=info msg="CreateContainer within sandbox \"9937092571709b39883fa5729e03c5b72e7ea2eca75d5a52c53d0017497f9281\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 02:10:37.190631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2286789969.mount: Deactivated successfully. Dec 13 02:10:37.197553 containerd[1477]: time="2024-12-13T02:10:37.197497217Z" level=info msg="CreateContainer within sandbox \"9937092571709b39883fa5729e03c5b72e7ea2eca75d5a52c53d0017497f9281\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"eaf1a846f817c32c25c6274339c57ba2c1b8434ed9ec82373f4ea06278f05236\"" Dec 13 02:10:37.199282 containerd[1477]: time="2024-12-13T02:10:37.198307947Z" level=info msg="StartContainer for \"eaf1a846f817c32c25c6274339c57ba2c1b8434ed9ec82373f4ea06278f05236\"" Dec 13 02:10:37.230291 systemd[1]: Started cri-containerd-eaf1a846f817c32c25c6274339c57ba2c1b8434ed9ec82373f4ea06278f05236.scope - libcontainer container eaf1a846f817c32c25c6274339c57ba2c1b8434ed9ec82373f4ea06278f05236. 
Dec 13 02:10:37.267887 containerd[1477]: time="2024-12-13T02:10:37.267845177Z" level=info msg="StartContainer for \"eaf1a846f817c32c25c6274339c57ba2c1b8434ed9ec82373f4ea06278f05236\" returns successfully" Dec 13 02:10:41.202308 kubelet[2776]: I1213 02:10:41.201740 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-hdvvm" podStartSLOduration=5.214164419 podStartE2EDuration="9.20167034s" podCreationTimestamp="2024-12-13 02:10:32 +0000 UTC" firstStartedPulling="2024-12-13 02:10:33.187690257 +0000 UTC m=+15.666731835" lastFinishedPulling="2024-12-13 02:10:37.175196178 +0000 UTC m=+19.654237756" observedRunningTime="2024-12-13 02:10:37.73710173 +0000 UTC m=+20.216143308" watchObservedRunningTime="2024-12-13 02:10:41.20167034 +0000 UTC m=+23.680711918" Dec 13 02:10:41.203384 kubelet[2776]: I1213 02:10:41.202898 2776 topology_manager.go:215] "Topology Admit Handler" podUID="f273f1e3-57e6-4246-baaf-960d3e12e9b1" podNamespace="calico-system" podName="calico-typha-797b989b68-lsz8x" Dec 13 02:10:41.214346 systemd[1]: Created slice kubepods-besteffort-podf273f1e3_57e6_4246_baaf_960d3e12e9b1.slice - libcontainer container kubepods-besteffort-podf273f1e3_57e6_4246_baaf_960d3e12e9b1.slice. Dec 13 02:10:41.263186 kubelet[2776]: I1213 02:10:41.262873 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppkwz\" (UniqueName: \"kubernetes.io/projected/f273f1e3-57e6-4246-baaf-960d3e12e9b1-kube-api-access-ppkwz\") pod \"calico-typha-797b989b68-lsz8x\" (UID: \"f273f1e3-57e6-4246-baaf-960d3e12e9b1\") " pod="calico-system/calico-typha-797b989b68-lsz8x" Dec 13 02:10:41.263186 kubelet[2776]: I1213 02:10:41.262963 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f273f1e3-57e6-4246-baaf-960d3e12e9b1-typha-certs\") pod \"calico-typha-797b989b68-lsz8x\" (UID: \"f273f1e3-57e6-4246-baaf-960d3e12e9b1\") " pod="calico-system/calico-typha-797b989b68-lsz8x" Dec 13 02:10:41.263186 kubelet[2776]: I1213 02:10:41.263114 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f273f1e3-57e6-4246-baaf-960d3e12e9b1-tigera-ca-bundle\") pod \"calico-typha-797b989b68-lsz8x\" (UID: \"f273f1e3-57e6-4246-baaf-960d3e12e9b1\") " pod="calico-system/calico-typha-797b989b68-lsz8x" Dec 13 02:10:41.298541 kubelet[2776]: I1213 02:10:41.298438 2776 topology_manager.go:215] "Topology Admit Handler" podUID="54a2602e-3498-4e07-85b6-283e159dff27" podNamespace="calico-system" podName="calico-node-7z8r4" Dec 13 02:10:41.306836 systemd[1]: Created slice kubepods-besteffort-pod54a2602e_3498_4e07_85b6_283e159dff27.slice - libcontainer container kubepods-besteffort-pod54a2602e_3498_4e07_85b6_283e159dff27.slice. 
Dec 13 02:10:41.364354 kubelet[2776]: I1213 02:10:41.364306 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54a2602e-3498-4e07-85b6-283e159dff27-lib-modules\") pod \"calico-node-7z8r4\" (UID: \"54a2602e-3498-4e07-85b6-283e159dff27\") " pod="calico-system/calico-node-7z8r4" Dec 13 02:10:41.364354 kubelet[2776]: I1213 02:10:41.364347 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/54a2602e-3498-4e07-85b6-283e159dff27-var-run-calico\") pod \"calico-node-7z8r4\" (UID: \"54a2602e-3498-4e07-85b6-283e159dff27\") " pod="calico-system/calico-node-7z8r4" Dec 13 02:10:41.364654 kubelet[2776]: I1213 02:10:41.364376 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/54a2602e-3498-4e07-85b6-283e159dff27-cni-log-dir\") pod \"calico-node-7z8r4\" (UID: \"54a2602e-3498-4e07-85b6-283e159dff27\") " pod="calico-system/calico-node-7z8r4" Dec 13 02:10:41.364654 kubelet[2776]: I1213 02:10:41.364424 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/54a2602e-3498-4e07-85b6-283e159dff27-cni-bin-dir\") pod \"calico-node-7z8r4\" (UID: \"54a2602e-3498-4e07-85b6-283e159dff27\") " pod="calico-system/calico-node-7z8r4" Dec 13 02:10:41.364654 kubelet[2776]: I1213 02:10:41.364447 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54a2602e-3498-4e07-85b6-283e159dff27-xtables-lock\") pod \"calico-node-7z8r4\" (UID: \"54a2602e-3498-4e07-85b6-283e159dff27\") " pod="calico-system/calico-node-7z8r4" Dec 13 02:10:41.364654 kubelet[2776]: I1213 02:10:41.364470 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/54a2602e-3498-4e07-85b6-283e159dff27-policysync\") pod \"calico-node-7z8r4\" (UID: \"54a2602e-3498-4e07-85b6-283e159dff27\") " pod="calico-system/calico-node-7z8r4" Dec 13 02:10:41.364654 kubelet[2776]: I1213 02:10:41.364488 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54a2602e-3498-4e07-85b6-283e159dff27-tigera-ca-bundle\") pod \"calico-node-7z8r4\" (UID: \"54a2602e-3498-4e07-85b6-283e159dff27\") " pod="calico-system/calico-node-7z8r4" Dec 13 02:10:41.364944 kubelet[2776]: I1213 02:10:41.364506 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/54a2602e-3498-4e07-85b6-283e159dff27-node-certs\") pod \"calico-node-7z8r4\" (UID: \"54a2602e-3498-4e07-85b6-283e159dff27\") " pod="calico-system/calico-node-7z8r4" Dec 13 02:10:41.364944 kubelet[2776]: I1213 02:10:41.364524 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/54a2602e-3498-4e07-85b6-283e159dff27-flexvol-driver-host\") pod \"calico-node-7z8r4\" (UID: \"54a2602e-3498-4e07-85b6-283e159dff27\") " pod="calico-system/calico-node-7z8r4" Dec 13 02:10:41.364944 kubelet[2776]: I1213 02:10:41.364544 2776 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/54a2602e-3498-4e07-85b6-283e159dff27-cni-net-dir\") pod \"calico-node-7z8r4\" (UID: \"54a2602e-3498-4e07-85b6-283e159dff27\") " pod="calico-system/calico-node-7z8r4" Dec 13 02:10:41.364944 kubelet[2776]: I1213 02:10:41.364563 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfzkm\" (UniqueName: \"kubernetes.io/projected/54a2602e-3498-4e07-85b6-283e159dff27-kube-api-access-hfzkm\") pod \"calico-node-7z8r4\" (UID: \"54a2602e-3498-4e07-85b6-283e159dff27\") " pod="calico-system/calico-node-7z8r4" Dec 13 02:10:41.364944 kubelet[2776]: I1213 02:10:41.364600 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/54a2602e-3498-4e07-85b6-283e159dff27-var-lib-calico\") pod \"calico-node-7z8r4\" (UID: \"54a2602e-3498-4e07-85b6-283e159dff27\") " pod="calico-system/calico-node-7z8r4" Dec 13 02:10:41.448160 kubelet[2776]: I1213 02:10:41.446967 2776 topology_manager.go:215] "Topology Admit Handler" podUID="883deafc-f1bd-4933-895f-1acfab27941b" podNamespace="calico-system" podName="csi-node-driver-tz9nk" Dec 13 02:10:41.448160 kubelet[2776]: E1213 02:10:41.447984 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz9nk" podUID="883deafc-f1bd-4933-895f-1acfab27941b" Dec 13 02:10:41.466291 kubelet[2776]: I1213 02:10:41.465211 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/883deafc-f1bd-4933-895f-1acfab27941b-varrun\") pod \"csi-node-driver-tz9nk\" (UID: \"883deafc-f1bd-4933-895f-1acfab27941b\") " pod="calico-system/csi-node-driver-tz9nk" Dec 13 02:10:41.466291 kubelet[2776]: I1213 02:10:41.465267 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/883deafc-f1bd-4933-895f-1acfab27941b-socket-dir\") pod \"csi-node-driver-tz9nk\" (UID: \"883deafc-f1bd-4933-895f-1acfab27941b\") " pod="calico-system/csi-node-driver-tz9nk" Dec 13 02:10:41.466291 kubelet[2776]: I1213 02:10:41.465288 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/883deafc-f1bd-4933-895f-1acfab27941b-registration-dir\") pod \"csi-node-driver-tz9nk\" (UID: \"883deafc-f1bd-4933-895f-1acfab27941b\") " pod="calico-system/csi-node-driver-tz9nk" Dec 13 02:10:41.466291 kubelet[2776]: I1213 02:10:41.465352 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx95c\" (UniqueName: \"kubernetes.io/projected/883deafc-f1bd-4933-895f-1acfab27941b-kube-api-access-kx95c\") pod \"csi-node-driver-tz9nk\" (UID: \"883deafc-f1bd-4933-895f-1acfab27941b\") " pod="calico-system/csi-node-driver-tz9nk" Dec 13 02:10:41.466291 kubelet[2776]: I1213 02:10:41.465415 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/883deafc-f1bd-4933-895f-1acfab27941b-kubelet-dir\") pod \"csi-node-driver-tz9nk\" 
(UID: \"883deafc-f1bd-4933-895f-1acfab27941b\") " pod="calico-system/csi-node-driver-tz9nk" Dec 13 02:10:41.466291 kubelet[2776]: E1213 02:10:41.466271 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.466505 kubelet[2776]: W1213 02:10:41.466296 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.468092 kubelet[2776]: E1213 02:10:41.467074 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.468092 kubelet[2776]: E1213 02:10:41.467543 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.468092 kubelet[2776]: W1213 02:10:41.467558 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.468092 kubelet[2776]: E1213 02:10:41.467574 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.468092 kubelet[2776]: E1213 02:10:41.467808 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.468092 kubelet[2776]: W1213 02:10:41.467818 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.468092 kubelet[2776]: E1213 02:10:41.467829 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.468338 kubelet[2776]: E1213 02:10:41.468123 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.468338 kubelet[2776]: W1213 02:10:41.468134 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.468338 kubelet[2776]: E1213 02:10:41.468144 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.468338 kubelet[2776]: E1213 02:10:41.468323 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.468338 kubelet[2776]: W1213 02:10:41.468331 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.468338 kubelet[2776]: E1213 02:10:41.468340 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:41.468498 kubelet[2776]: E1213 02:10:41.468474 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.468498 kubelet[2776]: W1213 02:10:41.468488 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.468498 kubelet[2776]: E1213 02:10:41.468497 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.469080 kubelet[2776]: E1213 02:10:41.468631 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.469080 kubelet[2776]: W1213 02:10:41.468642 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.469080 kubelet[2776]: E1213 02:10:41.468650 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.469080 kubelet[2776]: E1213 02:10:41.468824 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.469080 kubelet[2776]: W1213 02:10:41.468832 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.469080 kubelet[2776]: E1213 02:10:41.468841 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.469732 kubelet[2776]: E1213 02:10:41.469708 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.469732 kubelet[2776]: W1213 02:10:41.469724 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.469732 kubelet[2776]: E1213 02:10:41.469735 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.474007 kubelet[2776]: E1213 02:10:41.473987 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.474198 kubelet[2776]: W1213 02:10:41.474146 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.474198 kubelet[2776]: E1213 02:10:41.474167 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:41.502909 kubelet[2776]: E1213 02:10:41.502757 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.503338 kubelet[2776]: W1213 02:10:41.503047 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.504285 kubelet[2776]: E1213 02:10:41.504136 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.522575 containerd[1477]: time="2024-12-13T02:10:41.522162186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-797b989b68-lsz8x,Uid:f273f1e3-57e6-4246-baaf-960d3e12e9b1,Namespace:calico-system,Attempt:0,}" Dec 13 02:10:41.566358 containerd[1477]: time="2024-12-13T02:10:41.565584731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:10:41.566358 containerd[1477]: time="2024-12-13T02:10:41.565659052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:10:41.566358 containerd[1477]: time="2024-12-13T02:10:41.565677052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:41.566358 containerd[1477]: time="2024-12-13T02:10:41.565793973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:41.567249 kubelet[2776]: E1213 02:10:41.567111 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.567249 kubelet[2776]: W1213 02:10:41.567147 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.567249 kubelet[2776]: E1213 02:10:41.567169 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.567895 kubelet[2776]: E1213 02:10:41.567878 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.568213 kubelet[2776]: W1213 02:10:41.568070 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.568213 kubelet[2776]: E1213 02:10:41.568098 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:41.569659 kubelet[2776]: E1213 02:10:41.569545 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.569659 kubelet[2776]: W1213 02:10:41.569561 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.569992 kubelet[2776]: E1213 02:10:41.569903 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.569992 kubelet[2776]: W1213 02:10:41.569920 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.570602 kubelet[2776]: E1213 02:10:41.569951 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.570669 kubelet[2776]: E1213 02:10:41.570466 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.570669 kubelet[2776]: W1213 02:10:41.570654 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.570669 kubelet[2776]: E1213 02:10:41.570667 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.570899 kubelet[2776]: E1213 02:10:41.570476 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.572778 kubelet[2776]: E1213 02:10:41.571516 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.572778 kubelet[2776]: W1213 02:10:41.571533 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.572778 kubelet[2776]: E1213 02:10:41.571549 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.572778 kubelet[2776]: E1213 02:10:41.571743 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.572778 kubelet[2776]: W1213 02:10:41.571752 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.572778 kubelet[2776]: E1213 02:10:41.571760 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:41.573484 kubelet[2776]: E1213 02:10:41.573459 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.573484 kubelet[2776]: W1213 02:10:41.573477 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.573484 kubelet[2776]: E1213 02:10:41.573516 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.574489 kubelet[2776]: E1213 02:10:41.574041 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.574489 kubelet[2776]: W1213 02:10:41.574202 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.574489 kubelet[2776]: E1213 02:10:41.574246 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.574983 kubelet[2776]: E1213 02:10:41.574950 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.574983 kubelet[2776]: W1213 02:10:41.574975 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.574983 kubelet[2776]: E1213 02:10:41.575015 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.575889 kubelet[2776]: E1213 02:10:41.575511 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.575889 kubelet[2776]: W1213 02:10:41.575526 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.575889 kubelet[2776]: E1213 02:10:41.575560 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.575889 kubelet[2776]: E1213 02:10:41.575749 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.575889 kubelet[2776]: W1213 02:10:41.575761 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.575889 kubelet[2776]: E1213 02:10:41.575919 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:41.576847 kubelet[2776]: E1213 02:10:41.576805 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.576847 kubelet[2776]: W1213 02:10:41.576833 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.576847 kubelet[2776]: E1213 02:10:41.576854 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.577382 kubelet[2776]: E1213 02:10:41.577230 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.577382 kubelet[2776]: W1213 02:10:41.577245 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.577382 kubelet[2776]: E1213 02:10:41.577265 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.579597 kubelet[2776]: E1213 02:10:41.579025 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.579597 kubelet[2776]: W1213 02:10:41.579044 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.579597 kubelet[2776]: E1213 02:10:41.579110 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.579597 kubelet[2776]: E1213 02:10:41.579426 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.579597 kubelet[2776]: W1213 02:10:41.579439 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.579597 kubelet[2776]: E1213 02:10:41.579571 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.580291 kubelet[2776]: E1213 02:10:41.579875 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.580291 kubelet[2776]: W1213 02:10:41.579887 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.580291 kubelet[2776]: E1213 02:10:41.579929 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:41.580291 kubelet[2776]: E1213 02:10:41.580138 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.580291 kubelet[2776]: W1213 02:10:41.580149 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.580291 kubelet[2776]: E1213 02:10:41.580247 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.580529 kubelet[2776]: E1213 02:10:41.580518 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.580586 kubelet[2776]: W1213 02:10:41.580575 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.580659 kubelet[2776]: E1213 02:10:41.580637 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.580849 kubelet[2776]: E1213 02:10:41.580826 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.580849 kubelet[2776]: W1213 02:10:41.580842 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.581321 kubelet[2776]: E1213 02:10:41.581195 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.581625 kubelet[2776]: E1213 02:10:41.581608 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.581746 kubelet[2776]: W1213 02:10:41.581729 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.582077 kubelet[2776]: E1213 02:10:41.581807 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.582623 kubelet[2776]: E1213 02:10:41.582602 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.582623 kubelet[2776]: W1213 02:10:41.582620 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.582725 kubelet[2776]: E1213 02:10:41.582639 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:41.584648 kubelet[2776]: E1213 02:10:41.583466 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.584648 kubelet[2776]: W1213 02:10:41.583482 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.584648 kubelet[2776]: E1213 02:10:41.583544 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.584648 kubelet[2776]: E1213 02:10:41.584127 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.584648 kubelet[2776]: W1213 02:10:41.584143 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.584648 kubelet[2776]: E1213 02:10:41.584156 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.585406 kubelet[2776]: E1213 02:10:41.585373 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.585406 kubelet[2776]: W1213 02:10:41.585395 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.585406 kubelet[2776]: E1213 02:10:41.585408 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:41.598277 systemd[1]: Started cri-containerd-0baedded5e3a29a90d7d2e77f48e0bac1f4eab3d827f906269423c12baed9543.scope - libcontainer container 0baedded5e3a29a90d7d2e77f48e0bac1f4eab3d827f906269423c12baed9543. Dec 13 02:10:41.600957 kubelet[2776]: E1213 02:10:41.600920 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:41.601118 kubelet[2776]: W1213 02:10:41.601103 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:41.601210 kubelet[2776]: E1213 02:10:41.601191 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:41.614533 containerd[1477]: time="2024-12-13T02:10:41.614478419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7z8r4,Uid:54a2602e-3498-4e07-85b6-283e159dff27,Namespace:calico-system,Attempt:0,}" Dec 13 02:10:41.655567 containerd[1477]: time="2024-12-13T02:10:41.655519816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-797b989b68-lsz8x,Uid:f273f1e3-57e6-4246-baaf-960d3e12e9b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"0baedded5e3a29a90d7d2e77f48e0bac1f4eab3d827f906269423c12baed9543\"" Dec 13 02:10:41.658570 containerd[1477]: time="2024-12-13T02:10:41.658527651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 02:10:41.660921 containerd[1477]: time="2024-12-13T02:10:41.660313632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:10:41.661016 containerd[1477]: time="2024-12-13T02:10:41.660951400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:10:41.661016 containerd[1477]: time="2024-12-13T02:10:41.660984440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:41.661431 containerd[1477]: time="2024-12-13T02:10:41.661344244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:10:41.690380 systemd[1]: Started cri-containerd-f40829b21193c3f4b7e711e7864e3eaf769d1abb5d235277b404c8559c06223e.scope - libcontainer container f40829b21193c3f4b7e711e7864e3eaf769d1abb5d235277b404c8559c06223e. Dec 13 02:10:41.739945 containerd[1477]: time="2024-12-13T02:10:41.739828277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7z8r4,Uid:54a2602e-3498-4e07-85b6-283e159dff27,Namespace:calico-system,Attempt:0,} returns sandbox id \"f40829b21193c3f4b7e711e7864e3eaf769d1abb5d235277b404c8559c06223e\"" Dec 13 02:10:43.167352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990048708.mount: Deactivated successfully. 
Dec 13 02:10:43.627931 kubelet[2776]: E1213 02:10:43.626117 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz9nk" podUID="883deafc-f1bd-4933-895f-1acfab27941b" Dec 13 02:10:44.338379 containerd[1477]: time="2024-12-13T02:10:44.338321514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:44.339807 containerd[1477]: time="2024-12-13T02:10:44.339756689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Dec 13 02:10:44.340861 containerd[1477]: time="2024-12-13T02:10:44.340821901Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:44.343050 containerd[1477]: time="2024-12-13T02:10:44.342840803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:44.343545 containerd[1477]: time="2024-12-13T02:10:44.343511531Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.684941759s" Dec 13 02:10:44.343545 containerd[1477]: time="2024-12-13T02:10:44.343542891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Dec 13 02:10:44.344808 containerd[1477]: time="2024-12-13T02:10:44.344773705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 02:10:44.358306 containerd[1477]: time="2024-12-13T02:10:44.358209933Z" level=info msg="CreateContainer within sandbox \"0baedded5e3a29a90d7d2e77f48e0bac1f4eab3d827f906269423c12baed9543\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 02:10:44.385080 containerd[1477]: time="2024-12-13T02:10:44.385008548Z" level=info msg="CreateContainer within sandbox \"0baedded5e3a29a90d7d2e77f48e0bac1f4eab3d827f906269423c12baed9543\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"61bbbf915f323401b5becbb9566d6b52b11b1dc099544e0a89d2b41a9b80c985\"" Dec 13 02:10:44.386018 containerd[1477]: time="2024-12-13T02:10:44.385569475Z" level=info msg="StartContainer for \"61bbbf915f323401b5becbb9566d6b52b11b1dc099544e0a89d2b41a9b80c985\"" Dec 13 02:10:44.429252 systemd[1]: Started cri-containerd-61bbbf915f323401b5becbb9566d6b52b11b1dc099544e0a89d2b41a9b80c985.scope - libcontainer container 61bbbf915f323401b5becbb9566d6b52b11b1dc099544e0a89d2b41a9b80c985. 
Dec 13 02:10:44.519894 containerd[1477]: time="2024-12-13T02:10:44.519784915Z" level=info msg="StartContainer for \"61bbbf915f323401b5becbb9566d6b52b11b1dc099544e0a89d2b41a9b80c985\" returns successfully" Dec 13 02:10:44.772480 kubelet[2776]: I1213 02:10:44.772382 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-797b989b68-lsz8x" podStartSLOduration=1.08468067 podStartE2EDuration="3.77236642s" podCreationTimestamp="2024-12-13 02:10:41 +0000 UTC" firstStartedPulling="2024-12-13 02:10:41.656993794 +0000 UTC m=+24.136035372" lastFinishedPulling="2024-12-13 02:10:44.344679544 +0000 UTC m=+26.823721122" observedRunningTime="2024-12-13 02:10:44.771578972 +0000 UTC m=+27.250620550" watchObservedRunningTime="2024-12-13 02:10:44.77236642 +0000 UTC m=+27.251407998" Dec 13 02:10:44.775792 kubelet[2776]: E1213 02:10:44.775744 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.775792 kubelet[2776]: W1213 02:10:44.775779 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.776198 kubelet[2776]: E1213 02:10:44.775801 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.776198 kubelet[2776]: E1213 02:10:44.776115 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.776198 kubelet[2776]: W1213 02:10:44.776127 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.776198 kubelet[2776]: E1213 02:10:44.776139 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.776485 kubelet[2776]: E1213 02:10:44.776472 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.776485 kubelet[2776]: W1213 02:10:44.776484 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.776531 kubelet[2776]: E1213 02:10:44.776501 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.776727 kubelet[2776]: E1213 02:10:44.776711 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.776727 kubelet[2776]: W1213 02:10:44.776724 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.778187 kubelet[2776]: E1213 02:10:44.776734 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:44.778559 kubelet[2776]: E1213 02:10:44.778486 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.778559 kubelet[2776]: W1213 02:10:44.778506 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.778559 kubelet[2776]: E1213 02:10:44.778518 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.779078 kubelet[2776]: E1213 02:10:44.778976 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.779078 kubelet[2776]: W1213 02:10:44.778990 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.779078 kubelet[2776]: E1213 02:10:44.779011 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.779855 kubelet[2776]: E1213 02:10:44.779650 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.779855 kubelet[2776]: W1213 02:10:44.779708 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.779855 kubelet[2776]: E1213 02:10:44.779743 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.780463 kubelet[2776]: E1213 02:10:44.780327 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.780463 kubelet[2776]: W1213 02:10:44.780342 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.780463 kubelet[2776]: E1213 02:10:44.780353 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.780805 kubelet[2776]: E1213 02:10:44.780739 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.780805 kubelet[2776]: W1213 02:10:44.780751 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.780805 kubelet[2776]: E1213 02:10:44.780763 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:44.781273 kubelet[2776]: E1213 02:10:44.781158 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.781273 kubelet[2776]: W1213 02:10:44.781173 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.781273 kubelet[2776]: E1213 02:10:44.781216 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.781681 kubelet[2776]: E1213 02:10:44.781608 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.781681 kubelet[2776]: W1213 02:10:44.781621 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.781681 kubelet[2776]: E1213 02:10:44.781632 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.782359 kubelet[2776]: E1213 02:10:44.782265 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.782359 kubelet[2776]: W1213 02:10:44.782280 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.782359 kubelet[2776]: E1213 02:10:44.782292 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.782790 kubelet[2776]: E1213 02:10:44.782718 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.782790 kubelet[2776]: W1213 02:10:44.782732 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.782790 kubelet[2776]: E1213 02:10:44.782745 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.783288 kubelet[2776]: E1213 02:10:44.783192 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.783288 kubelet[2776]: W1213 02:10:44.783209 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.783288 kubelet[2776]: E1213 02:10:44.783220 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:44.783817 kubelet[2776]: E1213 02:10:44.783690 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.783817 kubelet[2776]: W1213 02:10:44.783705 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.783817 kubelet[2776]: E1213 02:10:44.783747 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.799680 kubelet[2776]: E1213 02:10:44.799648 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.799988 kubelet[2776]: W1213 02:10:44.799857 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.799988 kubelet[2776]: E1213 02:10:44.799889 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.800494 kubelet[2776]: E1213 02:10:44.800373 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.800494 kubelet[2776]: W1213 02:10:44.800391 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.800494 kubelet[2776]: E1213 02:10:44.800415 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.801601 kubelet[2776]: E1213 02:10:44.801356 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.801601 kubelet[2776]: W1213 02:10:44.801379 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.801601 kubelet[2776]: E1213 02:10:44.801401 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.802050 kubelet[2776]: E1213 02:10:44.801928 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.802050 kubelet[2776]: W1213 02:10:44.801978 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.802050 kubelet[2776]: E1213 02:10:44.802032 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:44.802594 kubelet[2776]: E1213 02:10:44.802476 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.802594 kubelet[2776]: W1213 02:10:44.802515 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.802594 kubelet[2776]: E1213 02:10:44.802563 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.803282 kubelet[2776]: E1213 02:10:44.803108 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.803282 kubelet[2776]: W1213 02:10:44.803125 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.803282 kubelet[2776]: E1213 02:10:44.803249 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.803927 kubelet[2776]: E1213 02:10:44.803824 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.803927 kubelet[2776]: W1213 02:10:44.803857 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.803927 kubelet[2776]: E1213 02:10:44.803907 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.804806 kubelet[2776]: E1213 02:10:44.804786 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.804955 kubelet[2776]: W1213 02:10:44.804860 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.805160 kubelet[2776]: E1213 02:10:44.805097 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.805627 kubelet[2776]: E1213 02:10:44.805608 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.805835 kubelet[2776]: W1213 02:10:44.805675 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.805835 kubelet[2776]: E1213 02:10:44.805732 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:44.806460 kubelet[2776]: E1213 02:10:44.806371 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.806460 kubelet[2776]: W1213 02:10:44.806393 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.806837 kubelet[2776]: E1213 02:10:44.806695 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.808168 kubelet[2776]: E1213 02:10:44.807274 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.808168 kubelet[2776]: W1213 02:10:44.807289 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.808168 kubelet[2776]: E1213 02:10:44.807374 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.809192 kubelet[2776]: E1213 02:10:44.808325 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.809192 kubelet[2776]: W1213 02:10:44.808348 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.809437 kubelet[2776]: E1213 02:10:44.809409 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.809437 kubelet[2776]: W1213 02:10:44.809432 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.809794 kubelet[2776]: E1213 02:10:44.809696 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.809794 kubelet[2776]: W1213 02:10:44.809712 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.809794 kubelet[2776]: E1213 02:10:44.809725 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:44.810405 kubelet[2776]: E1213 02:10:44.810371 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.810405 kubelet[2776]: W1213 02:10:44.810405 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.810505 kubelet[2776]: E1213 02:10:44.810419 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.810505 kubelet[2776]: E1213 02:10:44.810445 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.811217 kubelet[2776]: E1213 02:10:44.810672 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.811217 kubelet[2776]: W1213 02:10:44.810686 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.811217 kubelet[2776]: E1213 02:10:44.810698 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.811217 kubelet[2776]: E1213 02:10:44.810888 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.811896 kubelet[2776]: E1213 02:10:44.811826 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.811896 kubelet[2776]: W1213 02:10:44.811850 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.811896 kubelet[2776]: E1213 02:10:44.811865 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:44.812210 kubelet[2776]: E1213 02:10:44.812187 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:44.812210 kubelet[2776]: W1213 02:10:44.812203 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:44.812282 kubelet[2776]: E1213 02:10:44.812217 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:44.902301 update_engine[1455]: I20241213 02:10:44.902124 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 02:10:44.902800 update_engine[1455]: I20241213 02:10:44.902467 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 02:10:44.902800 update_engine[1455]: I20241213 02:10:44.902729 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 02:10:44.903616 update_engine[1455]: E20241213 02:10:44.903559 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 02:10:44.903697 update_engine[1455]: I20241213 02:10:44.903654 1455 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 13 02:10:45.629693 kubelet[2776]: E1213 02:10:45.627978 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz9nk" podUID="883deafc-f1bd-4933-895f-1acfab27941b" Dec 13 02:10:45.750715 kubelet[2776]: I1213 02:10:45.750045 2776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:10:45.793132 kubelet[2776]: E1213 02:10:45.792978 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.793132 kubelet[2776]: W1213 02:10:45.793012 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.793132 kubelet[2776]: E1213 02:10:45.793086 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.793840 kubelet[2776]: E1213 02:10:45.793440 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.793840 kubelet[2776]: W1213 02:10:45.793455 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.793840 kubelet[2776]: E1213 02:10:45.793511 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.793840 kubelet[2776]: E1213 02:10:45.793792 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.793840 kubelet[2776]: W1213 02:10:45.793806 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.793840 kubelet[2776]: E1213 02:10:45.793836 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:45.794197 kubelet[2776]: E1213 02:10:45.794160 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.794197 kubelet[2776]: W1213 02:10:45.794173 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.794300 kubelet[2776]: E1213 02:10:45.794203 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.794711 kubelet[2776]: E1213 02:10:45.794664 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.794711 kubelet[2776]: W1213 02:10:45.794689 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.794711 kubelet[2776]: E1213 02:10:45.794705 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.795078 kubelet[2776]: E1213 02:10:45.795033 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.795192 kubelet[2776]: W1213 02:10:45.795114 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.795192 kubelet[2776]: E1213 02:10:45.795164 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.795642 kubelet[2776]: E1213 02:10:45.795614 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.795642 kubelet[2776]: W1213 02:10:45.795638 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.795790 kubelet[2776]: E1213 02:10:45.795666 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.796137 kubelet[2776]: E1213 02:10:45.796111 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.796137 kubelet[2776]: W1213 02:10:45.796134 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.796269 kubelet[2776]: E1213 02:10:45.796151 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:45.796533 kubelet[2776]: E1213 02:10:45.796498 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.796533 kubelet[2776]: W1213 02:10:45.796532 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.796685 kubelet[2776]: E1213 02:10:45.796557 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.796957 kubelet[2776]: E1213 02:10:45.796931 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.796957 kubelet[2776]: W1213 02:10:45.796955 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.797123 kubelet[2776]: E1213 02:10:45.796972 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.797349 kubelet[2776]: E1213 02:10:45.797326 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.797349 kubelet[2776]: W1213 02:10:45.797346 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.797472 kubelet[2776]: E1213 02:10:45.797363 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.797638 kubelet[2776]: E1213 02:10:45.797618 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.797638 kubelet[2776]: W1213 02:10:45.797635 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.797749 kubelet[2776]: E1213 02:10:45.797650 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.797939 kubelet[2776]: E1213 02:10:45.797919 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.797939 kubelet[2776]: W1213 02:10:45.797937 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.798099 kubelet[2776]: E1213 02:10:45.797951 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:45.798348 kubelet[2776]: E1213 02:10:45.798326 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.798348 kubelet[2776]: W1213 02:10:45.798346 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.798458 kubelet[2776]: E1213 02:10:45.798362 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.798616 kubelet[2776]: E1213 02:10:45.798597 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.798616 kubelet[2776]: W1213 02:10:45.798614 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.798737 kubelet[2776]: E1213 02:10:45.798628 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.811884 kubelet[2776]: E1213 02:10:45.811846 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.811884 kubelet[2776]: W1213 02:10:45.811877 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.812079 kubelet[2776]: E1213 02:10:45.811901 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.812508 kubelet[2776]: E1213 02:10:45.812488 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.812508 kubelet[2776]: W1213 02:10:45.812509 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.812913 kubelet[2776]: E1213 02:10:45.812529 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.813263 kubelet[2776]: E1213 02:10:45.813242 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.813336 kubelet[2776]: W1213 02:10:45.813261 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.813442 kubelet[2776]: E1213 02:10:45.813425 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:45.813780 kubelet[2776]: E1213 02:10:45.813752 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.813780 kubelet[2776]: W1213 02:10:45.813769 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.813889 kubelet[2776]: E1213 02:10:45.813797 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.814242 kubelet[2776]: E1213 02:10:45.814220 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.814242 kubelet[2776]: W1213 02:10:45.814237 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.814445 kubelet[2776]: E1213 02:10:45.814258 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.814724 kubelet[2776]: E1213 02:10:45.814625 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.814724 kubelet[2776]: W1213 02:10:45.814642 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.814724 kubelet[2776]: E1213 02:10:45.814668 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.815692 kubelet[2776]: E1213 02:10:45.814857 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.815692 kubelet[2776]: W1213 02:10:45.814871 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.815692 kubelet[2776]: E1213 02:10:45.814882 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.815692 kubelet[2776]: E1213 02:10:45.815222 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.815692 kubelet[2776]: W1213 02:10:45.815234 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.815692 kubelet[2776]: E1213 02:10:45.815254 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:45.815692 kubelet[2776]: E1213 02:10:45.815453 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.815692 kubelet[2776]: W1213 02:10:45.815462 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.815692 kubelet[2776]: E1213 02:10:45.815479 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.815692 kubelet[2776]: E1213 02:10:45.815674 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.816009 kubelet[2776]: W1213 02:10:45.815684 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.816009 kubelet[2776]: E1213 02:10:45.815694 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.816009 kubelet[2776]: E1213 02:10:45.815875 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.816009 kubelet[2776]: W1213 02:10:45.815884 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.816009 kubelet[2776]: E1213 02:10:45.815899 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.816655 kubelet[2776]: E1213 02:10:45.816636 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.816655 kubelet[2776]: W1213 02:10:45.816653 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.816740 kubelet[2776]: E1213 02:10:45.816685 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.817069 kubelet[2776]: E1213 02:10:45.817035 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.817122 kubelet[2776]: W1213 02:10:45.817101 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.817151 kubelet[2776]: E1213 02:10:45.817124 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:45.817992 kubelet[2776]: E1213 02:10:45.817754 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.817992 kubelet[2776]: W1213 02:10:45.817800 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.817992 kubelet[2776]: E1213 02:10:45.817817 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.818176 kubelet[2776]: E1213 02:10:45.818078 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.818176 kubelet[2776]: W1213 02:10:45.818089 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.818176 kubelet[2776]: E1213 02:10:45.818099 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.819030 kubelet[2776]: E1213 02:10:45.818463 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.819030 kubelet[2776]: W1213 02:10:45.818485 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.819030 kubelet[2776]: E1213 02:10:45.818563 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.819030 kubelet[2776]: E1213 02:10:45.818701 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.819030 kubelet[2776]: W1213 02:10:45.818710 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.819030 kubelet[2776]: E1213 02:10:45.818755 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 02:10:45.819030 kubelet[2776]: E1213 02:10:45.818933 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 02:10:45.819030 kubelet[2776]: W1213 02:10:45.818942 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 02:10:45.819030 kubelet[2776]: E1213 02:10:45.818952 2776 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 02:10:46.040167 containerd[1477]: time="2024-12-13T02:10:46.040101880Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:46.041481 containerd[1477]: time="2024-12-13T02:10:46.041346093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Dec 13 02:10:46.042416 containerd[1477]: time="2024-12-13T02:10:46.042355064Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:46.044982 containerd[1477]: time="2024-12-13T02:10:46.044694169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:46.045525 containerd[1477]: time="2024-12-13T02:10:46.045490538Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.700685112s" Dec 13 02:10:46.045583 containerd[1477]: time="2024-12-13T02:10:46.045525578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 02:10:46.050361 containerd[1477]: time="2024-12-13T02:10:46.050314109Z" level=info msg="CreateContainer within sandbox \"f40829b21193c3f4b7e711e7864e3eaf769d1abb5d235277b404c8559c06223e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 02:10:46.073553 containerd[1477]: time="2024-12-13T02:10:46.073477876Z" level=info msg="CreateContainer within sandbox \"f40829b21193c3f4b7e711e7864e3eaf769d1abb5d235277b404c8559c06223e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3cc7e00b42bd7092f544ce5042444b4fc37c31b3d1496391ab3eafab6f605408\"" Dec 13 02:10:46.074522 containerd[1477]: time="2024-12-13T02:10:46.074346485Z" level=info msg="StartContainer for \"3cc7e00b42bd7092f544ce5042444b4fc37c31b3d1496391ab3eafab6f605408\"" Dec 13 02:10:46.113564 systemd[1]: Started cri-containerd-3cc7e00b42bd7092f544ce5042444b4fc37c31b3d1496391ab3eafab6f605408.scope - libcontainer container 3cc7e00b42bd7092f544ce5042444b4fc37c31b3d1496391ab3eafab6f605408. Dec 13 02:10:46.144426 containerd[1477]: time="2024-12-13T02:10:46.144296471Z" level=info msg="StartContainer for \"3cc7e00b42bd7092f544ce5042444b4fc37c31b3d1496391ab3eafab6f605408\" returns successfully" Dec 13 02:10:46.178356 systemd[1]: cri-containerd-3cc7e00b42bd7092f544ce5042444b4fc37c31b3d1496391ab3eafab6f605408.scope: Deactivated successfully. 
Dec 13 02:10:46.325506 containerd[1477]: time="2024-12-13T02:10:46.325000597Z" level=info msg="shim disconnected" id=3cc7e00b42bd7092f544ce5042444b4fc37c31b3d1496391ab3eafab6f605408 namespace=k8s.io Dec 13 02:10:46.325506 containerd[1477]: time="2024-12-13T02:10:46.325078478Z" level=warning msg="cleaning up after shim disconnected" id=3cc7e00b42bd7092f544ce5042444b4fc37c31b3d1496391ab3eafab6f605408 namespace=k8s.io Dec 13 02:10:46.325506 containerd[1477]: time="2024-12-13T02:10:46.325088798Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:10:46.351002 systemd[1]: run-containerd-runc-k8s.io-3cc7e00b42bd7092f544ce5042444b4fc37c31b3d1496391ab3eafab6f605408-runc.wvwKFB.mount: Deactivated successfully. Dec 13 02:10:46.351277 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cc7e00b42bd7092f544ce5042444b4fc37c31b3d1496391ab3eafab6f605408-rootfs.mount: Deactivated successfully. Dec 13 02:10:46.760600 containerd[1477]: time="2024-12-13T02:10:46.760545360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 02:10:47.626577 kubelet[2776]: E1213 02:10:47.626088 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz9nk" podUID="883deafc-f1bd-4933-895f-1acfab27941b" Dec 13 02:10:49.626511 kubelet[2776]: E1213 02:10:49.626232 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz9nk" podUID="883deafc-f1bd-4933-895f-1acfab27941b" Dec 13 02:10:51.101588 containerd[1477]: time="2024-12-13T02:10:51.100281965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:51.101588 containerd[1477]: time="2024-12-13T02:10:51.101512857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 02:10:51.102136 containerd[1477]: time="2024-12-13T02:10:51.102019902Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:51.104558 containerd[1477]: time="2024-12-13T02:10:51.104511487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:51.105906 containerd[1477]: time="2024-12-13T02:10:51.105817340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.344970536s" Dec 13 02:10:51.106067 containerd[1477]: time="2024-12-13T02:10:51.106026022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 02:10:51.110615 containerd[1477]: time="2024-12-13T02:10:51.110559106Z" level=info 
msg="CreateContainer within sandbox \"f40829b21193c3f4b7e711e7864e3eaf769d1abb5d235277b404c8559c06223e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 02:10:51.127882 containerd[1477]: time="2024-12-13T02:10:51.127834796Z" level=info msg="CreateContainer within sandbox \"f40829b21193c3f4b7e711e7864e3eaf769d1abb5d235277b404c8559c06223e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5504f273299a81d939fcf673d2a5636de9b43b3789ba9ee7a75b0008f4141655\"" Dec 13 02:10:51.129647 containerd[1477]: time="2024-12-13T02:10:51.128515763Z" level=info msg="StartContainer for \"5504f273299a81d939fcf673d2a5636de9b43b3789ba9ee7a75b0008f4141655\"" Dec 13 02:10:51.165270 systemd[1]: Started cri-containerd-5504f273299a81d939fcf673d2a5636de9b43b3789ba9ee7a75b0008f4141655.scope - libcontainer container 5504f273299a81d939fcf673d2a5636de9b43b3789ba9ee7a75b0008f4141655. Dec 13 02:10:51.196860 containerd[1477]: time="2024-12-13T02:10:51.196798475Z" level=info msg="StartContainer for \"5504f273299a81d939fcf673d2a5636de9b43b3789ba9ee7a75b0008f4141655\" returns successfully" Dec 13 02:10:51.627081 kubelet[2776]: E1213 02:10:51.626633 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tz9nk" podUID="883deafc-f1bd-4933-895f-1acfab27941b" Dec 13 02:10:51.723067 systemd[1]: cri-containerd-5504f273299a81d939fcf673d2a5636de9b43b3789ba9ee7a75b0008f4141655.scope: Deactivated successfully. Dec 13 02:10:51.750340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5504f273299a81d939fcf673d2a5636de9b43b3789ba9ee7a75b0008f4141655-rootfs.mount: Deactivated successfully. Dec 13 02:10:51.813377 kubelet[2776]: I1213 02:10:51.813308 2776 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 02:10:51.848088 containerd[1477]: time="2024-12-13T02:10:51.846736429Z" level=info msg="shim disconnected" id=5504f273299a81d939fcf673d2a5636de9b43b3789ba9ee7a75b0008f4141655 namespace=k8s.io Dec 13 02:10:51.848482 containerd[1477]: time="2024-12-13T02:10:51.848295004Z" level=warning msg="cleaning up after shim disconnected" id=5504f273299a81d939fcf673d2a5636de9b43b3789ba9ee7a75b0008f4141655 namespace=k8s.io Dec 13 02:10:51.848482 containerd[1477]: time="2024-12-13T02:10:51.848325804Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:10:51.854088 kubelet[2776]: I1213 02:10:51.851279 2776 topology_manager.go:215] "Topology Admit Handler" podUID="be50f835-bcf1-4511-9cef-2190b068c860" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7qwdw" Dec 13 02:10:51.859347 kubelet[2776]: I1213 02:10:51.859297 2776 topology_manager.go:215] "Topology Admit Handler" podUID="8840a18a-f878-4077-b87c-3b4317d6d897" podNamespace="calico-system" podName="calico-kube-controllers-7b5bbd5bc9-stdt4" Dec 13 02:10:51.868293 systemd[1]: Created slice kubepods-burstable-podbe50f835_bcf1_4511_9cef_2190b068c860.slice - libcontainer container kubepods-burstable-podbe50f835_bcf1_4511_9cef_2190b068c860.slice. 
Dec 13 02:10:51.871087 kubelet[2776]: I1213 02:10:51.870562 2776 topology_manager.go:215] "Topology Admit Handler" podUID="fe0d1aa3-9d9e-446d-9089-c44a1868215f" podNamespace="calico-apiserver" podName="calico-apiserver-5b74fb4656-ldnns" Dec 13 02:10:51.875219 kubelet[2776]: I1213 02:10:51.875178 2776 topology_manager.go:215] "Topology Admit Handler" podUID="b54c01c8-e7f0-44bb-a06d-6790acc7d0cd" podNamespace="kube-system" podName="coredns-7db6d8ff4d-bq88m" Dec 13 02:10:51.881071 systemd[1]: Created slice kubepods-besteffort-pod8840a18a_f878_4077_b87c_3b4317d6d897.slice - libcontainer container kubepods-besteffort-pod8840a18a_f878_4077_b87c_3b4317d6d897.slice. Dec 13 02:10:51.883881 kubelet[2776]: I1213 02:10:51.881295 2776 topology_manager.go:215] "Topology Admit Handler" podUID="8b4f4cc6-054d-4a2d-ad48-c71dc0a6d279" podNamespace="calico-apiserver" podName="calico-apiserver-5b74fb4656-ggqmp" Dec 13 02:10:51.897703 systemd[1]: Created slice kubepods-besteffort-podfe0d1aa3_9d9e_446d_9089_c44a1868215f.slice - libcontainer container kubepods-besteffort-podfe0d1aa3_9d9e_446d_9089_c44a1868215f.slice. Dec 13 02:10:51.908099 systemd[1]: Created slice kubepods-burstable-podb54c01c8_e7f0_44bb_a06d_6790acc7d0cd.slice - libcontainer container kubepods-burstable-podb54c01c8_e7f0_44bb_a06d_6790acc7d0cd.slice. Dec 13 02:10:51.916173 systemd[1]: Created slice kubepods-besteffort-pod8b4f4cc6_054d_4a2d_ad48_c71dc0a6d279.slice - libcontainer container kubepods-besteffort-pod8b4f4cc6_054d_4a2d_ad48_c71dc0a6d279.slice. Dec 13 02:10:51.960833 kubelet[2776]: I1213 02:10:51.960777 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrwmv\" (UniqueName: \"kubernetes.io/projected/8b4f4cc6-054d-4a2d-ad48-c71dc0a6d279-kube-api-access-wrwmv\") pod \"calico-apiserver-5b74fb4656-ggqmp\" (UID: \"8b4f4cc6-054d-4a2d-ad48-c71dc0a6d279\") " pod="calico-apiserver/calico-apiserver-5b74fb4656-ggqmp" Dec 13 02:10:51.960833 kubelet[2776]: I1213 02:10:51.960820 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b24kh\" (UniqueName: \"kubernetes.io/projected/be50f835-bcf1-4511-9cef-2190b068c860-kube-api-access-b24kh\") pod \"coredns-7db6d8ff4d-7qwdw\" (UID: \"be50f835-bcf1-4511-9cef-2190b068c860\") " pod="kube-system/coredns-7db6d8ff4d-7qwdw" Dec 13 02:10:51.960833 kubelet[2776]: I1213 02:10:51.960841 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5flk9\" (UniqueName: \"kubernetes.io/projected/b54c01c8-e7f0-44bb-a06d-6790acc7d0cd-kube-api-access-5flk9\") pod \"coredns-7db6d8ff4d-bq88m\" (UID: \"b54c01c8-e7f0-44bb-a06d-6790acc7d0cd\") " pod="kube-system/coredns-7db6d8ff4d-bq88m" Dec 13 02:10:51.961178 kubelet[2776]: I1213 02:10:51.960862 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8840a18a-f878-4077-b87c-3b4317d6d897-tigera-ca-bundle\") pod \"calico-kube-controllers-7b5bbd5bc9-stdt4\" (UID: \"8840a18a-f878-4077-b87c-3b4317d6d897\") " pod="calico-system/calico-kube-controllers-7b5bbd5bc9-stdt4" Dec 13 02:10:51.961178 kubelet[2776]: I1213 02:10:51.960882 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be50f835-bcf1-4511-9cef-2190b068c860-config-volume\") pod \"coredns-7db6d8ff4d-7qwdw\" (UID: 
\"be50f835-bcf1-4511-9cef-2190b068c860\") " pod="kube-system/coredns-7db6d8ff4d-7qwdw" Dec 13 02:10:51.961178 kubelet[2776]: I1213 02:10:51.960901 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fe0d1aa3-9d9e-446d-9089-c44a1868215f-calico-apiserver-certs\") pod \"calico-apiserver-5b74fb4656-ldnns\" (UID: \"fe0d1aa3-9d9e-446d-9089-c44a1868215f\") " pod="calico-apiserver/calico-apiserver-5b74fb4656-ldnns" Dec 13 02:10:51.961178 kubelet[2776]: I1213 02:10:51.960916 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwsvr\" (UniqueName: \"kubernetes.io/projected/fe0d1aa3-9d9e-446d-9089-c44a1868215f-kube-api-access-cwsvr\") pod \"calico-apiserver-5b74fb4656-ldnns\" (UID: \"fe0d1aa3-9d9e-446d-9089-c44a1868215f\") " pod="calico-apiserver/calico-apiserver-5b74fb4656-ldnns" Dec 13 02:10:51.961178 kubelet[2776]: I1213 02:10:51.960933 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8b4f4cc6-054d-4a2d-ad48-c71dc0a6d279-calico-apiserver-certs\") pod \"calico-apiserver-5b74fb4656-ggqmp\" (UID: \"8b4f4cc6-054d-4a2d-ad48-c71dc0a6d279\") " pod="calico-apiserver/calico-apiserver-5b74fb4656-ggqmp" Dec 13 02:10:51.961430 kubelet[2776]: I1213 02:10:51.960948 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq7j9\" (UniqueName: \"kubernetes.io/projected/8840a18a-f878-4077-b87c-3b4317d6d897-kube-api-access-kq7j9\") pod \"calico-kube-controllers-7b5bbd5bc9-stdt4\" (UID: \"8840a18a-f878-4077-b87c-3b4317d6d897\") " pod="calico-system/calico-kube-controllers-7b5bbd5bc9-stdt4" Dec 13 02:10:51.961430 kubelet[2776]: I1213 02:10:51.960967 2776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b54c01c8-e7f0-44bb-a06d-6790acc7d0cd-config-volume\") pod \"coredns-7db6d8ff4d-bq88m\" (UID: \"b54c01c8-e7f0-44bb-a06d-6790acc7d0cd\") " pod="kube-system/coredns-7db6d8ff4d-bq88m" Dec 13 02:10:52.179083 containerd[1477]: time="2024-12-13T02:10:52.178993831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7qwdw,Uid:be50f835-bcf1-4511-9cef-2190b068c860,Namespace:kube-system,Attempt:0,}" Dec 13 02:10:52.196924 containerd[1477]: time="2024-12-13T02:10:52.196579241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b5bbd5bc9-stdt4,Uid:8840a18a-f878-4077-b87c-3b4317d6d897,Namespace:calico-system,Attempt:0,}" Dec 13 02:10:52.208947 containerd[1477]: time="2024-12-13T02:10:52.206845461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b74fb4656-ldnns,Uid:fe0d1aa3-9d9e-446d-9089-c44a1868215f,Namespace:calico-apiserver,Attempt:0,}" Dec 13 02:10:52.218473 containerd[1477]: time="2024-12-13T02:10:52.217997249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bq88m,Uid:b54c01c8-e7f0-44bb-a06d-6790acc7d0cd,Namespace:kube-system,Attempt:0,}" Dec 13 02:10:52.221201 containerd[1477]: time="2024-12-13T02:10:52.221167359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b74fb4656-ggqmp,Uid:8b4f4cc6-054d-4a2d-ad48-c71dc0a6d279,Namespace:calico-apiserver,Attempt:0,}" Dec 13 02:10:52.425226 containerd[1477]: 
time="2024-12-13T02:10:52.425033094Z" level=error msg="Failed to destroy network for sandbox \"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.425841 containerd[1477]: time="2024-12-13T02:10:52.425660300Z" level=error msg="Failed to destroy network for sandbox \"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.426180 containerd[1477]: time="2024-12-13T02:10:52.426154585Z" level=error msg="encountered an error cleaning up failed sandbox \"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.426418 containerd[1477]: time="2024-12-13T02:10:52.426381027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b74fb4656-ldnns,Uid:fe0d1aa3-9d9e-446d-9089-c44a1868215f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.427101 containerd[1477]: time="2024-12-13T02:10:52.426529189Z" level=error msg="encountered an error cleaning up failed sandbox \"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.427101 containerd[1477]: time="2024-12-13T02:10:52.426569029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7qwdw,Uid:be50f835-bcf1-4511-9cef-2190b068c860,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.427211 kubelet[2776]: E1213 02:10:52.426761 2776 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.427211 kubelet[2776]: E1213 02:10:52.426834 2776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7qwdw" Dec 13 02:10:52.427211 kubelet[2776]: E1213 02:10:52.426853 2776 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-7qwdw" Dec 13 02:10:52.427309 kubelet[2776]: E1213 02:10:52.426889 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-7qwdw_kube-system(be50f835-bcf1-4511-9cef-2190b068c860)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-7qwdw_kube-system(be50f835-bcf1-4511-9cef-2190b068c860)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7qwdw" podUID="be50f835-bcf1-4511-9cef-2190b068c860" Dec 13 02:10:52.429958 kubelet[2776]: E1213 02:10:52.428368 2776 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.429958 kubelet[2776]: E1213 02:10:52.428447 2776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b74fb4656-ldnns" Dec 13 02:10:52.429958 kubelet[2776]: E1213 02:10:52.428466 2776 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b74fb4656-ldnns" Dec 13 02:10:52.431247 kubelet[2776]: E1213 02:10:52.428511 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b74fb4656-ldnns_calico-apiserver(fe0d1aa3-9d9e-446d-9089-c44a1868215f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b74fb4656-ldnns_calico-apiserver(fe0d1aa3-9d9e-446d-9089-c44a1868215f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b74fb4656-ldnns" podUID="fe0d1aa3-9d9e-446d-9089-c44a1868215f" Dec 13 02:10:52.447380 containerd[1477]: time="2024-12-13T02:10:52.447335270Z" level=error msg="Failed to destroy network for sandbox \"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.448300 containerd[1477]: time="2024-12-13T02:10:52.448233199Z" level=error msg="encountered an error cleaning up failed sandbox \"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.448582 containerd[1477]: time="2024-12-13T02:10:52.448556162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b5bbd5bc9-stdt4,Uid:8840a18a-f878-4077-b87c-3b4317d6d897,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.450100 kubelet[2776]: E1213 02:10:52.449094 2776 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.450100 kubelet[2776]: E1213 02:10:52.449158 2776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b5bbd5bc9-stdt4" Dec 13 02:10:52.450100 kubelet[2776]: E1213 02:10:52.449178 2776 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b5bbd5bc9-stdt4" Dec 13 02:10:52.450249 kubelet[2776]: E1213 02:10:52.449217 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b5bbd5bc9-stdt4_calico-system(8840a18a-f878-4077-b87c-3b4317d6d897)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b5bbd5bc9-stdt4_calico-system(8840a18a-f878-4077-b87c-3b4317d6d897)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b5bbd5bc9-stdt4" podUID="8840a18a-f878-4077-b87c-3b4317d6d897" Dec 13 02:10:52.452554 containerd[1477]: time="2024-12-13T02:10:52.452521281Z" level=error msg="Failed to destroy network for sandbox \"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.453740 containerd[1477]: time="2024-12-13T02:10:52.453589211Z" level=error msg="encountered an error cleaning up failed sandbox \"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.453740 containerd[1477]: time="2024-12-13T02:10:52.453645892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bq88m,Uid:b54c01c8-e7f0-44bb-a06d-6790acc7d0cd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.454097 kubelet[2776]: E1213 02:10:52.453900 2776 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.454097 kubelet[2776]: E1213 02:10:52.453973 2776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bq88m" Dec 13 02:10:52.454097 kubelet[2776]: E1213 02:10:52.453999 2776 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-bq88m" Dec 13 02:10:52.454558 kubelet[2776]: E1213 02:10:52.454037 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-bq88m_kube-system(b54c01c8-e7f0-44bb-a06d-6790acc7d0cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-bq88m_kube-system(b54c01c8-e7f0-44bb-a06d-6790acc7d0cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bq88m" podUID="b54c01c8-e7f0-44bb-a06d-6790acc7d0cd" Dec 13 02:10:52.458778 containerd[1477]: time="2024-12-13T02:10:52.458660540Z" level=error msg="Failed to destroy network for sandbox \"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.459105 containerd[1477]: time="2024-12-13T02:10:52.459079464Z" level=error msg="encountered an error cleaning up failed sandbox \"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.459166 containerd[1477]: time="2024-12-13T02:10:52.459131225Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b74fb4656-ggqmp,Uid:8b4f4cc6-054d-4a2d-ad48-c71dc0a6d279,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.459415 kubelet[2776]: E1213 02:10:52.459367 2776 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.459597 kubelet[2776]: E1213 02:10:52.459506 2776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b74fb4656-ggqmp" Dec 13 02:10:52.459597 kubelet[2776]: E1213 02:10:52.459531 2776 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b74fb4656-ggqmp" Dec 13 02:10:52.459996 kubelet[2776]: E1213 02:10:52.459963 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-5b74fb4656-ggqmp_calico-apiserver(8b4f4cc6-054d-4a2d-ad48-c71dc0a6d279)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b74fb4656-ggqmp_calico-apiserver(8b4f4cc6-054d-4a2d-ad48-c71dc0a6d279)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b74fb4656-ggqmp" podUID="8b4f4cc6-054d-4a2d-ad48-c71dc0a6d279" Dec 13 02:10:52.777191 containerd[1477]: time="2024-12-13T02:10:52.777023145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 02:10:52.778471 kubelet[2776]: I1213 02:10:52.777937 2776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Dec 13 02:10:52.780967 containerd[1477]: time="2024-12-13T02:10:52.780277016Z" level=info msg="StopPodSandbox for \"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\"" Dec 13 02:10:52.781553 containerd[1477]: time="2024-12-13T02:10:52.780639580Z" level=info msg="Ensure that sandbox dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209 in task-service has been cleanup successfully" Dec 13 02:10:52.785975 kubelet[2776]: I1213 02:10:52.784449 2776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Dec 13 02:10:52.786495 containerd[1477]: time="2024-12-13T02:10:52.786452556Z" level=info msg="StopPodSandbox for \"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\"" Dec 13 02:10:52.786652 containerd[1477]: time="2024-12-13T02:10:52.786632238Z" level=info msg="Ensure that sandbox d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f in task-service has been cleanup successfully" Dec 13 02:10:52.792128 kubelet[2776]: I1213 02:10:52.791206 2776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Dec 13 02:10:52.794147 containerd[1477]: time="2024-12-13T02:10:52.794105910Z" level=info msg="StopPodSandbox for \"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\"" Dec 13 02:10:52.795556 containerd[1477]: time="2024-12-13T02:10:52.795521684Z" level=info msg="Ensure that sandbox 46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9 in task-service has been cleanup successfully" Dec 13 02:10:52.798949 kubelet[2776]: I1213 02:10:52.798668 2776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Dec 13 02:10:52.800622 containerd[1477]: time="2024-12-13T02:10:52.799923246Z" level=info msg="StopPodSandbox for \"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\"" Dec 13 02:10:52.802110 containerd[1477]: time="2024-12-13T02:10:52.800907456Z" level=info msg="Ensure that sandbox 075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731 in task-service has been cleanup successfully" Dec 13 02:10:52.817292 kubelet[2776]: I1213 02:10:52.817007 2776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Dec 13 
02:10:52.821413 containerd[1477]: time="2024-12-13T02:10:52.821150972Z" level=info msg="StopPodSandbox for \"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\"" Dec 13 02:10:52.821537 containerd[1477]: time="2024-12-13T02:10:52.821463975Z" level=info msg="Ensure that sandbox 05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8 in task-service has been cleanup successfully" Dec 13 02:10:52.862625 containerd[1477]: time="2024-12-13T02:10:52.862490173Z" level=error msg="StopPodSandbox for \"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\" failed" error="failed to destroy network for sandbox \"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.863109 kubelet[2776]: E1213 02:10:52.862918 2776 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Dec 13 02:10:52.863109 kubelet[2776]: E1213 02:10:52.862974 2776 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f"} Dec 13 02:10:52.863821 kubelet[2776]: E1213 02:10:52.863045 2776 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fe0d1aa3-9d9e-446d-9089-c44a1868215f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:10:52.863821 kubelet[2776]: E1213 02:10:52.863773 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fe0d1aa3-9d9e-446d-9089-c44a1868215f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b74fb4656-ldnns" podUID="fe0d1aa3-9d9e-446d-9089-c44a1868215f" Dec 13 02:10:52.863963 containerd[1477]: time="2024-12-13T02:10:52.863840426Z" level=error msg="StopPodSandbox for \"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\" failed" error="failed to destroy network for sandbox \"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.864428 kubelet[2776]: E1213 02:10:52.864324 2776 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy 
network for sandbox \"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Dec 13 02:10:52.864428 kubelet[2776]: E1213 02:10:52.864359 2776 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731"} Dec 13 02:10:52.864428 kubelet[2776]: E1213 02:10:52.864383 2776 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8b4f4cc6-054d-4a2d-ad48-c71dc0a6d279\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:10:52.864428 kubelet[2776]: E1213 02:10:52.864401 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8b4f4cc6-054d-4a2d-ad48-c71dc0a6d279\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b74fb4656-ggqmp" podUID="8b4f4cc6-054d-4a2d-ad48-c71dc0a6d279" Dec 13 02:10:52.873865 containerd[1477]: time="2024-12-13T02:10:52.873686441Z" level=error msg="StopPodSandbox for \"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\" failed" error="failed to destroy network for sandbox \"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.874692 kubelet[2776]: E1213 02:10:52.874526 2776 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Dec 13 02:10:52.874692 kubelet[2776]: E1213 02:10:52.874578 2776 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209"} Dec 13 02:10:52.874692 kubelet[2776]: E1213 02:10:52.874618 2776 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b54c01c8-e7f0-44bb-a06d-6790acc7d0cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" Dec 13 02:10:52.874692 kubelet[2776]: E1213 02:10:52.874641 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b54c01c8-e7f0-44bb-a06d-6790acc7d0cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-bq88m" podUID="b54c01c8-e7f0-44bb-a06d-6790acc7d0cd" Dec 13 02:10:52.880406 containerd[1477]: time="2024-12-13T02:10:52.880136344Z" level=error msg="StopPodSandbox for \"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\" failed" error="failed to destroy network for sandbox \"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.880838 kubelet[2776]: E1213 02:10:52.880761 2776 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Dec 13 02:10:52.880838 kubelet[2776]: E1213 02:10:52.880814 2776 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9"} Dec 13 02:10:52.881084 kubelet[2776]: E1213 02:10:52.880847 2776 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8840a18a-f878-4077-b87c-3b4317d6d897\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:10:52.881084 kubelet[2776]: E1213 02:10:52.880868 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8840a18a-f878-4077-b87c-3b4317d6d897\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b5bbd5bc9-stdt4" podUID="8840a18a-f878-4077-b87c-3b4317d6d897" Dec 13 02:10:52.887270 containerd[1477]: time="2024-12-13T02:10:52.887119131Z" level=error msg="StopPodSandbox for \"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\" failed" error="failed to destroy network for sandbox \"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:52.887775 kubelet[2776]: E1213 02:10:52.887481 2776 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Dec 13 02:10:52.887775 kubelet[2776]: E1213 02:10:52.887535 2776 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8"} Dec 13 02:10:52.887775 kubelet[2776]: E1213 02:10:52.887569 2776 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be50f835-bcf1-4511-9cef-2190b068c860\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:10:52.887775 kubelet[2776]: E1213 02:10:52.887590 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be50f835-bcf1-4511-9cef-2190b068c860\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-7qwdw" podUID="be50f835-bcf1-4511-9cef-2190b068c860" Dec 13 02:10:53.125011 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f-shm.mount: Deactivated successfully. Dec 13 02:10:53.125195 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8-shm.mount: Deactivated successfully. Dec 13 02:10:53.634734 systemd[1]: Created slice kubepods-besteffort-pod883deafc_f1bd_4933_895f_1acfab27941b.slice - libcontainer container kubepods-besteffort-pod883deafc_f1bd_4933_895f_1acfab27941b.slice. 
Dec 13 02:10:53.637812 containerd[1477]: time="2024-12-13T02:10:53.637756511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tz9nk,Uid:883deafc-f1bd-4933-895f-1acfab27941b,Namespace:calico-system,Attempt:0,}" Dec 13 02:10:53.708205 containerd[1477]: time="2024-12-13T02:10:53.708151903Z" level=error msg="Failed to destroy network for sandbox \"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:53.708746 containerd[1477]: time="2024-12-13T02:10:53.708493587Z" level=error msg="encountered an error cleaning up failed sandbox \"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:53.708746 containerd[1477]: time="2024-12-13T02:10:53.708546987Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tz9nk,Uid:883deafc-f1bd-4933-895f-1acfab27941b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:53.710410 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585-shm.mount: Deactivated successfully. 
Dec 13 02:10:53.711618 kubelet[2776]: E1213 02:10:53.711219 2776 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:53.711618 kubelet[2776]: E1213 02:10:53.711275 2776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tz9nk" Dec 13 02:10:53.711618 kubelet[2776]: E1213 02:10:53.711302 2776 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tz9nk" Dec 13 02:10:53.711779 kubelet[2776]: E1213 02:10:53.711368 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tz9nk_calico-system(883deafc-f1bd-4933-895f-1acfab27941b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tz9nk_calico-system(883deafc-f1bd-4933-895f-1acfab27941b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tz9nk" podUID="883deafc-f1bd-4933-895f-1acfab27941b" Dec 13 02:10:53.821607 kubelet[2776]: I1213 02:10:53.821557 2776 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Dec 13 02:10:53.823609 containerd[1477]: time="2024-12-13T02:10:53.822466474Z" level=info msg="StopPodSandbox for \"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\"" Dec 13 02:10:53.823609 containerd[1477]: time="2024-12-13T02:10:53.822770557Z" level=info msg="Ensure that sandbox 5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585 in task-service has been cleanup successfully" Dec 13 02:10:53.854574 containerd[1477]: time="2024-12-13T02:10:53.854130856Z" level=error msg="StopPodSandbox for \"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\" failed" error="failed to destroy network for sandbox \"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 02:10:53.854710 kubelet[2776]: E1213 02:10:53.854419 2776 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Dec 13 02:10:53.854710 kubelet[2776]: E1213 02:10:53.854479 2776 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585"} Dec 13 02:10:53.854710 kubelet[2776]: E1213 02:10:53.854510 2776 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"883deafc-f1bd-4933-895f-1acfab27941b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 02:10:53.854710 kubelet[2776]: E1213 02:10:53.854536 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"883deafc-f1bd-4933-895f-1acfab27941b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tz9nk" podUID="883deafc-f1bd-4933-895f-1acfab27941b" Dec 13 02:10:54.901781 update_engine[1455]: I20241213 02:10:54.901654 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 02:10:54.902347 update_engine[1455]: I20241213 02:10:54.901991 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 02:10:54.902347 update_engine[1455]: I20241213 02:10:54.902284 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 02:10:54.903475 update_engine[1455]: E20241213 02:10:54.903375 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 02:10:54.903686 update_engine[1455]: I20241213 02:10:54.903496 1455 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 02:10:54.903686 update_engine[1455]: I20241213 02:10:54.903518 1455 omaha_request_action.cc:617] Omaha request response: Dec 13 02:10:54.903686 update_engine[1455]: E20241213 02:10:54.903636 1455 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 13 02:10:54.903686 update_engine[1455]: I20241213 02:10:54.903662 1455 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 13 02:10:54.903686 update_engine[1455]: I20241213 02:10:54.903676 1455 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 02:10:54.903686 update_engine[1455]: I20241213 02:10:54.903687 1455 update_attempter.cc:306] Processing Done. Dec 13 02:10:54.904068 update_engine[1455]: E20241213 02:10:54.903709 1455 update_attempter.cc:619] Update failed. 
Dec 13 02:10:54.904068 update_engine[1455]: I20241213 02:10:54.903721 1455 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 13 02:10:54.904068 update_engine[1455]: I20241213 02:10:54.903731 1455 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 13 02:10:54.904068 update_engine[1455]: I20241213 02:10:54.903743 1455 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Dec 13 02:10:54.904068 update_engine[1455]: I20241213 02:10:54.903851 1455 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 02:10:54.904068 update_engine[1455]: I20241213 02:10:54.903887 1455 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 02:10:54.904068 update_engine[1455]: I20241213 02:10:54.903898 1455 omaha_request_action.cc:272] Request: Dec 13 02:10:54.904068 update_engine[1455]: Dec 13 02:10:54.904068 update_engine[1455]: Dec 13 02:10:54.904068 update_engine[1455]: Dec 13 02:10:54.904068 update_engine[1455]: Dec 13 02:10:54.904068 update_engine[1455]: Dec 13 02:10:54.904068 update_engine[1455]: Dec 13 02:10:54.904068 update_engine[1455]: I20241213 02:10:54.903910 1455 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 02:10:54.904731 update_engine[1455]: I20241213 02:10:54.904186 1455 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 02:10:54.904731 update_engine[1455]: I20241213 02:10:54.904441 1455 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 02:10:54.905039 locksmithd[1496]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 13 02:10:54.905624 update_engine[1455]: E20241213 02:10:54.905560 1455 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 02:10:54.905696 update_engine[1455]: I20241213 02:10:54.905652 1455 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 02:10:54.905755 update_engine[1455]: I20241213 02:10:54.905719 1455 omaha_request_action.cc:617] Omaha request response: Dec 13 02:10:54.905755 update_engine[1455]: I20241213 02:10:54.905742 1455 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 02:10:54.905867 update_engine[1455]: I20241213 02:10:54.905754 1455 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 02:10:54.905867 update_engine[1455]: I20241213 02:10:54.905766 1455 update_attempter.cc:306] Processing Done. Dec 13 02:10:54.905867 update_engine[1455]: I20241213 02:10:54.905777 1455 update_attempter.cc:310] Error event sent. Dec 13 02:10:54.905867 update_engine[1455]: I20241213 02:10:54.905793 1455 update_check_scheduler.cc:74] Next update check in 45m17s Dec 13 02:10:54.906258 locksmithd[1496]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 13 02:10:59.371404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2048803066.mount: Deactivated successfully. 
Dec 13 02:10:59.409139 containerd[1477]: time="2024-12-13T02:10:59.408234326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:59.409769 containerd[1477]: time="2024-12-13T02:10:59.409630098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 02:10:59.410658 containerd[1477]: time="2024-12-13T02:10:59.410612187Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:59.418969 containerd[1477]: time="2024-12-13T02:10:59.418896980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:10:59.421536 containerd[1477]: time="2024-12-13T02:10:59.421478962Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.644389657s" Dec 13 02:10:59.421791 containerd[1477]: time="2024-12-13T02:10:59.421744645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 02:10:59.441045 containerd[1477]: time="2024-12-13T02:10:59.440973933Z" level=info msg="CreateContainer within sandbox \"f40829b21193c3f4b7e711e7864e3eaf769d1abb5d235277b404c8559c06223e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 02:10:59.470606 containerd[1477]: time="2024-12-13T02:10:59.470505352Z" level=info msg="CreateContainer within sandbox \"f40829b21193c3f4b7e711e7864e3eaf769d1abb5d235277b404c8559c06223e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7189fd26dbb57656f7859b2b1d8ded98f20ec0e6477304b1401dbcd5490f1532\"" Dec 13 02:10:59.471461 containerd[1477]: time="2024-12-13T02:10:59.471425680Z" level=info msg="StartContainer for \"7189fd26dbb57656f7859b2b1d8ded98f20ec0e6477304b1401dbcd5490f1532\"" Dec 13 02:10:59.512253 systemd[1]: Started cri-containerd-7189fd26dbb57656f7859b2b1d8ded98f20ec0e6477304b1401dbcd5490f1532.scope - libcontainer container 7189fd26dbb57656f7859b2b1d8ded98f20ec0e6477304b1401dbcd5490f1532. Dec 13 02:10:59.548998 containerd[1477]: time="2024-12-13T02:10:59.548879319Z" level=info msg="StartContainer for \"7189fd26dbb57656f7859b2b1d8ded98f20ec0e6477304b1401dbcd5490f1532\" returns successfully" Dec 13 02:10:59.713386 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 02:10:59.713500 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 13 02:10:59.872172 kubelet[2776]: I1213 02:10:59.872092 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7z8r4" podStartSLOduration=1.189486907 podStartE2EDuration="18.870132535s" podCreationTimestamp="2024-12-13 02:10:41 +0000 UTC" firstStartedPulling="2024-12-13 02:10:41.742938113 +0000 UTC m=+24.221979691" lastFinishedPulling="2024-12-13 02:10:59.423583741 +0000 UTC m=+41.902625319" observedRunningTime="2024-12-13 02:10:59.869423928 +0000 UTC m=+42.348465586" watchObservedRunningTime="2024-12-13 02:10:59.870132535 +0000 UTC m=+42.349174153" Dec 13 02:11:00.849160 kubelet[2776]: I1213 02:11:00.849114 2776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:11:03.076529 kubelet[2776]: I1213 02:11:03.076423 2776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:11:03.106019 systemd[1]: run-containerd-runc-k8s.io-7189fd26dbb57656f7859b2b1d8ded98f20ec0e6477304b1401dbcd5490f1532-runc.wXe5tj.mount: Deactivated successfully. Dec 13 02:11:04.627134 containerd[1477]: time="2024-12-13T02:11:04.626775401Z" level=info msg="StopPodSandbox for \"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\"" Dec 13 02:11:04.798195 containerd[1477]: 2024-12-13 02:11:04.708 [INFO][4108] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Dec 13 02:11:04.798195 containerd[1477]: 2024-12-13 02:11:04.710 [INFO][4108] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" iface="eth0" netns="/var/run/netns/cni-214f8b4e-c96f-eb55-6e9f-43cf0a09db35" Dec 13 02:11:04.798195 containerd[1477]: 2024-12-13 02:11:04.710 [INFO][4108] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" iface="eth0" netns="/var/run/netns/cni-214f8b4e-c96f-eb55-6e9f-43cf0a09db35" Dec 13 02:11:04.798195 containerd[1477]: 2024-12-13 02:11:04.710 [INFO][4108] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" iface="eth0" netns="/var/run/netns/cni-214f8b4e-c96f-eb55-6e9f-43cf0a09db35" Dec 13 02:11:04.798195 containerd[1477]: 2024-12-13 02:11:04.710 [INFO][4108] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Dec 13 02:11:04.798195 containerd[1477]: 2024-12-13 02:11:04.711 [INFO][4108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Dec 13 02:11:04.798195 containerd[1477]: 2024-12-13 02:11:04.774 [INFO][4114] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" HandleID="k8s-pod-network.d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0" Dec 13 02:11:04.798195 containerd[1477]: 2024-12-13 02:11:04.774 [INFO][4114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:04.798195 containerd[1477]: 2024-12-13 02:11:04.774 [INFO][4114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:11:04.798195 containerd[1477]: 2024-12-13 02:11:04.790 [WARNING][4114] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" HandleID="k8s-pod-network.d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0" Dec 13 02:11:04.798195 containerd[1477]: 2024-12-13 02:11:04.790 [INFO][4114] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" HandleID="k8s-pod-network.d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0" Dec 13 02:11:04.798195 containerd[1477]: 2024-12-13 02:11:04.792 [INFO][4114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:11:04.798195 containerd[1477]: 2024-12-13 02:11:04.796 [INFO][4108] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Dec 13 02:11:04.800919 systemd[1]: run-netns-cni\x2d214f8b4e\x2dc96f\x2deb55\x2d6e9f\x2d43cf0a09db35.mount: Deactivated successfully. Dec 13 02:11:04.801749 containerd[1477]: time="2024-12-13T02:11:04.800995473Z" level=info msg="TearDown network for sandbox \"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\" successfully" Dec 13 02:11:04.801749 containerd[1477]: time="2024-12-13T02:11:04.801046474Z" level=info msg="StopPodSandbox for \"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\" returns successfully" Dec 13 02:11:04.806252 containerd[1477]: time="2024-12-13T02:11:04.805822713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b74fb4656-ldnns,Uid:fe0d1aa3-9d9e-446d-9089-c44a1868215f,Namespace:calico-apiserver,Attempt:1,}" Dec 13 02:11:04.956186 systemd-networkd[1361]: califf8144c80f3: Link UP Dec 13 02:11:04.957751 systemd-networkd[1361]: califf8144c80f3: Gained carrier Dec 13 02:11:04.984605 containerd[1477]: 2024-12-13 02:11:04.842 [INFO][4123] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 02:11:04.984605 containerd[1477]: 2024-12-13 02:11:04.860 [INFO][4123] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0 calico-apiserver-5b74fb4656- calico-apiserver fe0d1aa3-9d9e-446d-9089-c44a1868215f 740 0 2024-12-13 02:10:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b74fb4656 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-6-b597ddf835 calico-apiserver-5b74fb4656-ldnns eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califf8144c80f3 [] []}} ContainerID="021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758" Namespace="calico-apiserver" Pod="calico-apiserver-5b74fb4656-ldnns" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-" Dec 13 02:11:04.984605 containerd[1477]: 2024-12-13 02:11:04.860 [INFO][4123] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758" Namespace="calico-apiserver" 
Pod="calico-apiserver-5b74fb4656-ldnns" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0" Dec 13 02:11:04.984605 containerd[1477]: 2024-12-13 02:11:04.898 [INFO][4134] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758" HandleID="k8s-pod-network.021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0" Dec 13 02:11:04.984605 containerd[1477]: 2024-12-13 02:11:04.913 [INFO][4134] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758" HandleID="k8s-pod-network.021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d7da0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-6-b597ddf835", "pod":"calico-apiserver-5b74fb4656-ldnns", "timestamp":"2024-12-13 02:11:04.898382354 +0000 UTC"}, Hostname:"ci-4081-2-1-6-b597ddf835", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:11:04.984605 containerd[1477]: 2024-12-13 02:11:04.913 [INFO][4134] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:04.984605 containerd[1477]: 2024-12-13 02:11:04.913 [INFO][4134] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:11:04.984605 containerd[1477]: 2024-12-13 02:11:04.913 [INFO][4134] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-6-b597ddf835' Dec 13 02:11:04.984605 containerd[1477]: 2024-12-13 02:11:04.916 [INFO][4134] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:04.984605 containerd[1477]: 2024-12-13 02:11:04.921 [INFO][4134] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:04.984605 containerd[1477]: 2024-12-13 02:11:04.927 [INFO][4134] ipam/ipam.go 489: Trying affinity for 192.168.63.64/26 host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:04.984605 containerd[1477]: 2024-12-13 02:11:04.929 [INFO][4134] ipam/ipam.go 155: Attempting to load block cidr=192.168.63.64/26 host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:04.984605 containerd[1477]: 2024-12-13 02:11:04.931 [INFO][4134] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:04.984605 containerd[1477]: 2024-12-13 02:11:04.931 [INFO][4134] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:04.984605 containerd[1477]: 2024-12-13 02:11:04.933 [INFO][4134] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758 Dec 13 02:11:04.984605 containerd[1477]: 2024-12-13 02:11:04.938 [INFO][4134] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.63.64/26 
handle="k8s-pod-network.021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:04.984605 containerd[1477]: 2024-12-13 02:11:04.944 [INFO][4134] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.63.65/26] block=192.168.63.64/26 handle="k8s-pod-network.021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:04.984605 containerd[1477]: 2024-12-13 02:11:04.944 [INFO][4134] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.63.65/26] handle="k8s-pod-network.021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:04.984605 containerd[1477]: 2024-12-13 02:11:04.945 [INFO][4134] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:11:04.984605 containerd[1477]: 2024-12-13 02:11:04.945 [INFO][4134] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.63.65/26] IPv6=[] ContainerID="021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758" HandleID="k8s-pod-network.021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0" Dec 13 02:11:04.985503 containerd[1477]: 2024-12-13 02:11:04.946 [INFO][4123] cni-plugin/k8s.go 386: Populated endpoint ContainerID="021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758" Namespace="calico-apiserver" Pod="calico-apiserver-5b74fb4656-ldnns" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0", GenerateName:"calico-apiserver-5b74fb4656-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe0d1aa3-9d9e-446d-9089-c44a1868215f", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b74fb4656", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"", Pod:"calico-apiserver-5b74fb4656-ldnns", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf8144c80f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:04.985503 containerd[1477]: 2024-12-13 02:11:04.947 [INFO][4123] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.63.65/32] ContainerID="021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758" Namespace="calico-apiserver" Pod="calico-apiserver-5b74fb4656-ldnns" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0" Dec 13 02:11:04.985503 containerd[1477]: 2024-12-13 02:11:04.947 
[INFO][4123] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf8144c80f3 ContainerID="021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758" Namespace="calico-apiserver" Pod="calico-apiserver-5b74fb4656-ldnns" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0" Dec 13 02:11:04.985503 containerd[1477]: 2024-12-13 02:11:04.957 [INFO][4123] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758" Namespace="calico-apiserver" Pod="calico-apiserver-5b74fb4656-ldnns" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0" Dec 13 02:11:04.985503 containerd[1477]: 2024-12-13 02:11:04.958 [INFO][4123] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758" Namespace="calico-apiserver" Pod="calico-apiserver-5b74fb4656-ldnns" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0", GenerateName:"calico-apiserver-5b74fb4656-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe0d1aa3-9d9e-446d-9089-c44a1868215f", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b74fb4656", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758", Pod:"calico-apiserver-5b74fb4656-ldnns", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf8144c80f3", MAC:"5a:07:83:bb:83:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:04.985503 containerd[1477]: 2024-12-13 02:11:04.976 [INFO][4123] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758" Namespace="calico-apiserver" Pod="calico-apiserver-5b74fb4656-ldnns" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0" Dec 13 02:11:05.006866 containerd[1477]: time="2024-12-13T02:11:05.006239600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:11:05.006866 containerd[1477]: time="2024-12-13T02:11:05.006302240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:11:05.006866 containerd[1477]: time="2024-12-13T02:11:05.006317641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:11:05.006866 containerd[1477]: time="2024-12-13T02:11:05.006401641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:11:05.026169 systemd[1]: run-containerd-runc-k8s.io-021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758-runc.Uy4j2K.mount: Deactivated successfully. Dec 13 02:11:05.033236 systemd[1]: Started cri-containerd-021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758.scope - libcontainer container 021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758. Dec 13 02:11:05.075860 containerd[1477]: time="2024-12-13T02:11:05.075792285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b74fb4656-ldnns,Uid:fe0d1aa3-9d9e-446d-9089-c44a1868215f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758\"" Dec 13 02:11:05.079163 containerd[1477]: time="2024-12-13T02:11:05.079112552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 02:11:05.637278 containerd[1477]: time="2024-12-13T02:11:05.637229284Z" level=info msg="StopPodSandbox for \"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\"" Dec 13 02:11:05.638879 containerd[1477]: time="2024-12-13T02:11:05.637659768Z" level=info msg="StopPodSandbox for \"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\"" Dec 13 02:11:05.651590 containerd[1477]: time="2024-12-13T02:11:05.651530640Z" level=info msg="StopPodSandbox for \"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\"" Dec 13 02:11:05.785268 containerd[1477]: 2024-12-13 02:11:05.728 [INFO][4252] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Dec 13 02:11:05.785268 containerd[1477]: 2024-12-13 02:11:05.730 [INFO][4252] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" iface="eth0" netns="/var/run/netns/cni-56bd0355-1bb7-2d29-bcc9-6505c6a1a322" Dec 13 02:11:05.785268 containerd[1477]: 2024-12-13 02:11:05.732 [INFO][4252] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" iface="eth0" netns="/var/run/netns/cni-56bd0355-1bb7-2d29-bcc9-6505c6a1a322" Dec 13 02:11:05.785268 containerd[1477]: 2024-12-13 02:11:05.732 [INFO][4252] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" iface="eth0" netns="/var/run/netns/cni-56bd0355-1bb7-2d29-bcc9-6505c6a1a322" Dec 13 02:11:05.785268 containerd[1477]: 2024-12-13 02:11:05.732 [INFO][4252] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Dec 13 02:11:05.785268 containerd[1477]: 2024-12-13 02:11:05.732 [INFO][4252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Dec 13 02:11:05.785268 containerd[1477]: 2024-12-13 02:11:05.762 [INFO][4275] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" HandleID="k8s-pod-network.46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0" Dec 13 02:11:05.785268 containerd[1477]: 2024-12-13 02:11:05.762 [INFO][4275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:05.785268 containerd[1477]: 2024-12-13 02:11:05.762 [INFO][4275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:11:05.785268 containerd[1477]: 2024-12-13 02:11:05.777 [WARNING][4275] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" HandleID="k8s-pod-network.46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0" Dec 13 02:11:05.785268 containerd[1477]: 2024-12-13 02:11:05.777 [INFO][4275] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" HandleID="k8s-pod-network.46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0" Dec 13 02:11:05.785268 containerd[1477]: 2024-12-13 02:11:05.780 [INFO][4275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:11:05.785268 containerd[1477]: 2024-12-13 02:11:05.784 [INFO][4252] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Dec 13 02:11:05.786251 containerd[1477]: time="2024-12-13T02:11:05.786049373Z" level=info msg="TearDown network for sandbox \"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\" successfully" Dec 13 02:11:05.786251 containerd[1477]: time="2024-12-13T02:11:05.786208374Z" level=info msg="StopPodSandbox for \"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\" returns successfully" Dec 13 02:11:05.787211 containerd[1477]: time="2024-12-13T02:11:05.787040941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b5bbd5bc9-stdt4,Uid:8840a18a-f878-4077-b87c-3b4317d6d897,Namespace:calico-system,Attempt:1,}" Dec 13 02:11:05.801936 systemd[1]: run-netns-cni\x2d56bd0355\x2d1bb7\x2d2d29\x2dbcc9\x2d6505c6a1a322.mount: Deactivated successfully. 
Dec 13 02:11:05.814590 containerd[1477]: 2024-12-13 02:11:05.735 [INFO][4260] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Dec 13 02:11:05.814590 containerd[1477]: 2024-12-13 02:11:05.735 [INFO][4260] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" iface="eth0" netns="/var/run/netns/cni-e5f63c9e-8e45-1a99-244f-9f70f173a1d5" Dec 13 02:11:05.814590 containerd[1477]: 2024-12-13 02:11:05.735 [INFO][4260] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" iface="eth0" netns="/var/run/netns/cni-e5f63c9e-8e45-1a99-244f-9f70f173a1d5" Dec 13 02:11:05.814590 containerd[1477]: 2024-12-13 02:11:05.736 [INFO][4260] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" iface="eth0" netns="/var/run/netns/cni-e5f63c9e-8e45-1a99-244f-9f70f173a1d5" Dec 13 02:11:05.814590 containerd[1477]: 2024-12-13 02:11:05.736 [INFO][4260] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Dec 13 02:11:05.814590 containerd[1477]: 2024-12-13 02:11:05.736 [INFO][4260] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Dec 13 02:11:05.814590 containerd[1477]: 2024-12-13 02:11:05.779 [INFO][4278] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" HandleID="k8s-pod-network.075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0" Dec 13 02:11:05.814590 containerd[1477]: 2024-12-13 02:11:05.779 [INFO][4278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:05.814590 containerd[1477]: 2024-12-13 02:11:05.781 [INFO][4278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:11:05.814590 containerd[1477]: 2024-12-13 02:11:05.793 [WARNING][4278] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" HandleID="k8s-pod-network.075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0" Dec 13 02:11:05.814590 containerd[1477]: 2024-12-13 02:11:05.793 [INFO][4278] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" HandleID="k8s-pod-network.075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0" Dec 13 02:11:05.814590 containerd[1477]: 2024-12-13 02:11:05.799 [INFO][4278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:11:05.814590 containerd[1477]: 2024-12-13 02:11:05.808 [INFO][4260] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Dec 13 02:11:05.815518 containerd[1477]: time="2024-12-13T02:11:05.815420571Z" level=info msg="TearDown network for sandbox \"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\" successfully" Dec 13 02:11:05.815518 containerd[1477]: time="2024-12-13T02:11:05.815453412Z" level=info msg="StopPodSandbox for \"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\" returns successfully" Dec 13 02:11:05.818150 containerd[1477]: time="2024-12-13T02:11:05.816825823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b74fb4656-ggqmp,Uid:8b4f4cc6-054d-4a2d-ad48-c71dc0a6d279,Namespace:calico-apiserver,Attempt:1,}" Dec 13 02:11:05.819206 systemd[1]: run-netns-cni\x2de5f63c9e\x2d8e45\x2d1a99\x2d244f\x2d9f70f173a1d5.mount: Deactivated successfully. Dec 13 02:11:05.855601 containerd[1477]: 2024-12-13 02:11:05.760 [INFO][4264] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Dec 13 02:11:05.855601 containerd[1477]: 2024-12-13 02:11:05.761 [INFO][4264] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" iface="eth0" netns="/var/run/netns/cni-efddcc63-c2cd-248f-e313-2eb465040798" Dec 13 02:11:05.855601 containerd[1477]: 2024-12-13 02:11:05.761 [INFO][4264] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" iface="eth0" netns="/var/run/netns/cni-efddcc63-c2cd-248f-e313-2eb465040798" Dec 13 02:11:05.855601 containerd[1477]: 2024-12-13 02:11:05.762 [INFO][4264] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" iface="eth0" netns="/var/run/netns/cni-efddcc63-c2cd-248f-e313-2eb465040798" Dec 13 02:11:05.855601 containerd[1477]: 2024-12-13 02:11:05.762 [INFO][4264] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Dec 13 02:11:05.855601 containerd[1477]: 2024-12-13 02:11:05.762 [INFO][4264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Dec 13 02:11:05.855601 containerd[1477]: 2024-12-13 02:11:05.826 [INFO][4287] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" HandleID="k8s-pod-network.5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Workload="ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0" Dec 13 02:11:05.855601 containerd[1477]: 2024-12-13 02:11:05.826 [INFO][4287] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:05.855601 containerd[1477]: 2024-12-13 02:11:05.826 [INFO][4287] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:11:05.855601 containerd[1477]: 2024-12-13 02:11:05.845 [WARNING][4287] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" HandleID="k8s-pod-network.5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Workload="ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0" Dec 13 02:11:05.855601 containerd[1477]: 2024-12-13 02:11:05.846 [INFO][4287] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" HandleID="k8s-pod-network.5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Workload="ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0" Dec 13 02:11:05.855601 containerd[1477]: 2024-12-13 02:11:05.850 [INFO][4287] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:11:05.855601 containerd[1477]: 2024-12-13 02:11:05.852 [INFO][4264] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Dec 13 02:11:05.857439 containerd[1477]: time="2024-12-13T02:11:05.857257071Z" level=info msg="TearDown network for sandbox \"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\" successfully" Dec 13 02:11:05.857439 containerd[1477]: time="2024-12-13T02:11:05.857299552Z" level=info msg="StopPodSandbox for \"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\" returns successfully" Dec 13 02:11:05.859108 containerd[1477]: time="2024-12-13T02:11:05.859002645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tz9nk,Uid:883deafc-f1bd-4933-895f-1acfab27941b,Namespace:calico-system,Attempt:1,}" Dec 13 02:11:06.023894 systemd-networkd[1361]: calid89d505db14: Link UP Dec 13 02:11:06.025549 systemd-networkd[1361]: calid89d505db14: Gained carrier Dec 13 02:11:06.056966 containerd[1477]: 2024-12-13 02:11:05.864 [INFO][4294] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 02:11:06.056966 containerd[1477]: 2024-12-13 02:11:05.891 [INFO][4294] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0 calico-kube-controllers-7b5bbd5bc9- calico-system 8840a18a-f878-4077-b87c-3b4317d6d897 749 0 2024-12-13 02:10:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b5bbd5bc9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-2-1-6-b597ddf835 calico-kube-controllers-7b5bbd5bc9-stdt4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid89d505db14 [] []}} ContainerID="a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41" Namespace="calico-system" Pod="calico-kube-controllers-7b5bbd5bc9-stdt4" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-" Dec 13 02:11:06.056966 containerd[1477]: 2024-12-13 02:11:05.891 [INFO][4294] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41" Namespace="calico-system" Pod="calico-kube-controllers-7b5bbd5bc9-stdt4" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0" Dec 13 02:11:06.056966 containerd[1477]: 2024-12-13 02:11:05.942 [INFO][4329] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41" HandleID="k8s-pod-network.a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0" Dec 13 02:11:06.056966 containerd[1477]: 2024-12-13 02:11:05.961 [INFO][4329] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41" HandleID="k8s-pod-network.a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ebac0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-6-b597ddf835", "pod":"calico-kube-controllers-7b5bbd5bc9-stdt4", "timestamp":"2024-12-13 02:11:05.942616924 +0000 UTC"}, Hostname:"ci-4081-2-1-6-b597ddf835", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:11:06.056966 containerd[1477]: 2024-12-13 02:11:05.961 [INFO][4329] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:06.056966 containerd[1477]: 2024-12-13 02:11:05.961 [INFO][4329] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:11:06.056966 containerd[1477]: 2024-12-13 02:11:05.962 [INFO][4329] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-6-b597ddf835' Dec 13 02:11:06.056966 containerd[1477]: 2024-12-13 02:11:05.964 [INFO][4329] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.056966 containerd[1477]: 2024-12-13 02:11:05.971 [INFO][4329] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.056966 containerd[1477]: 2024-12-13 02:11:05.980 [INFO][4329] ipam/ipam.go 489: Trying affinity for 192.168.63.64/26 host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.056966 containerd[1477]: 2024-12-13 02:11:05.983 [INFO][4329] ipam/ipam.go 155: Attempting to load block cidr=192.168.63.64/26 host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.056966 containerd[1477]: 2024-12-13 02:11:05.989 [INFO][4329] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.056966 containerd[1477]: 2024-12-13 02:11:05.989 [INFO][4329] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.056966 containerd[1477]: 2024-12-13 02:11:05.991 [INFO][4329] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41 Dec 13 02:11:06.056966 containerd[1477]: 2024-12-13 02:11:06.005 [INFO][4329] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.056966 containerd[1477]: 2024-12-13 02:11:06.014 [INFO][4329] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.63.66/26] block=192.168.63.64/26 
handle="k8s-pod-network.a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.056966 containerd[1477]: 2024-12-13 02:11:06.014 [INFO][4329] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.63.66/26] handle="k8s-pod-network.a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.056966 containerd[1477]: 2024-12-13 02:11:06.014 [INFO][4329] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:11:06.056966 containerd[1477]: 2024-12-13 02:11:06.014 [INFO][4329] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.63.66/26] IPv6=[] ContainerID="a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41" HandleID="k8s-pod-network.a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0" Dec 13 02:11:06.058428 containerd[1477]: 2024-12-13 02:11:06.018 [INFO][4294] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41" Namespace="calico-system" Pod="calico-kube-controllers-7b5bbd5bc9-stdt4" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0", GenerateName:"calico-kube-controllers-7b5bbd5bc9-", Namespace:"calico-system", SelfLink:"", UID:"8840a18a-f878-4077-b87c-3b4317d6d897", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b5bbd5bc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"", Pod:"calico-kube-controllers-7b5bbd5bc9-stdt4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid89d505db14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:06.058428 containerd[1477]: 2024-12-13 02:11:06.018 [INFO][4294] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.63.66/32] ContainerID="a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41" Namespace="calico-system" Pod="calico-kube-controllers-7b5bbd5bc9-stdt4" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0" Dec 13 02:11:06.058428 containerd[1477]: 2024-12-13 02:11:06.018 [INFO][4294] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid89d505db14 ContainerID="a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41" Namespace="calico-system" 
Pod="calico-kube-controllers-7b5bbd5bc9-stdt4" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0" Dec 13 02:11:06.058428 containerd[1477]: 2024-12-13 02:11:06.026 [INFO][4294] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41" Namespace="calico-system" Pod="calico-kube-controllers-7b5bbd5bc9-stdt4" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0" Dec 13 02:11:06.058428 containerd[1477]: 2024-12-13 02:11:06.031 [INFO][4294] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41" Namespace="calico-system" Pod="calico-kube-controllers-7b5bbd5bc9-stdt4" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0", GenerateName:"calico-kube-controllers-7b5bbd5bc9-", Namespace:"calico-system", SelfLink:"", UID:"8840a18a-f878-4077-b87c-3b4317d6d897", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b5bbd5bc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41", Pod:"calico-kube-controllers-7b5bbd5bc9-stdt4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid89d505db14", MAC:"fe:e3:cc:e6:e1:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:06.058428 containerd[1477]: 2024-12-13 02:11:06.053 [INFO][4294] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41" Namespace="calico-system" Pod="calico-kube-controllers-7b5bbd5bc9-stdt4" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0" Dec 13 02:11:06.097779 containerd[1477]: time="2024-12-13T02:11:06.097113130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:11:06.098785 containerd[1477]: time="2024-12-13T02:11:06.097720135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:11:06.098785 containerd[1477]: time="2024-12-13T02:11:06.097804815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:11:06.100152 systemd-networkd[1361]: calidddb02f40d6: Link UP Dec 13 02:11:06.101437 systemd-networkd[1361]: calidddb02f40d6: Gained carrier Dec 13 02:11:06.120839 containerd[1477]: time="2024-12-13T02:11:06.098047017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:11:06.127085 containerd[1477]: 2024-12-13 02:11:05.890 [INFO][4304] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 02:11:06.127085 containerd[1477]: 2024-12-13 02:11:05.912 [INFO][4304] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0 calico-apiserver-5b74fb4656- calico-apiserver 8b4f4cc6-054d-4a2d-ad48-c71dc0a6d279 750 0 2024-12-13 02:10:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b74fb4656 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-6-b597ddf835 calico-apiserver-5b74fb4656-ggqmp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidddb02f40d6 [] []}} ContainerID="300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607" Namespace="calico-apiserver" Pod="calico-apiserver-5b74fb4656-ggqmp" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-" Dec 13 02:11:06.127085 containerd[1477]: 2024-12-13 02:11:05.912 [INFO][4304] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607" Namespace="calico-apiserver" Pod="calico-apiserver-5b74fb4656-ggqmp" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0" Dec 13 02:11:06.127085 containerd[1477]: 2024-12-13 02:11:05.970 [INFO][4335] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607" HandleID="k8s-pod-network.300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0" Dec 13 02:11:06.127085 containerd[1477]: 2024-12-13 02:11:05.994 [INFO][4335] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607" HandleID="k8s-pod-network.300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b8bb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-6-b597ddf835", "pod":"calico-apiserver-5b74fb4656-ggqmp", "timestamp":"2024-12-13 02:11:05.970110028 +0000 UTC"}, Hostname:"ci-4081-2-1-6-b597ddf835", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:11:06.127085 containerd[1477]: 2024-12-13 02:11:05.994 [INFO][4335] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 02:11:06.127085 containerd[1477]: 2024-12-13 02:11:06.014 [INFO][4335] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:11:06.127085 containerd[1477]: 2024-12-13 02:11:06.014 [INFO][4335] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-6-b597ddf835' Dec 13 02:11:06.127085 containerd[1477]: 2024-12-13 02:11:06.017 [INFO][4335] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.127085 containerd[1477]: 2024-12-13 02:11:06.028 [INFO][4335] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.127085 containerd[1477]: 2024-12-13 02:11:06.049 [INFO][4335] ipam/ipam.go 489: Trying affinity for 192.168.63.64/26 host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.127085 containerd[1477]: 2024-12-13 02:11:06.056 [INFO][4335] ipam/ipam.go 155: Attempting to load block cidr=192.168.63.64/26 host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.127085 containerd[1477]: 2024-12-13 02:11:06.061 [INFO][4335] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.127085 containerd[1477]: 2024-12-13 02:11:06.062 [INFO][4335] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.127085 containerd[1477]: 2024-12-13 02:11:06.067 [INFO][4335] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607 Dec 13 02:11:06.127085 containerd[1477]: 2024-12-13 02:11:06.077 [INFO][4335] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.127085 containerd[1477]: 2024-12-13 02:11:06.086 [INFO][4335] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.63.67/26] block=192.168.63.64/26 handle="k8s-pod-network.300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.127085 containerd[1477]: 2024-12-13 02:11:06.087 [INFO][4335] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.63.67/26] handle="k8s-pod-network.300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.127085 containerd[1477]: 2024-12-13 02:11:06.087 [INFO][4335] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
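
The [4335] lines above walk through Calico's block-affinity IPAM for the calico-apiserver pod: take the host-wide IPAM lock, look up the node's block affinities, confirm the affine block 192.168.63.64/26, claim the lowest free address (192.168.63.67) under a per-container handle, write the block back, and release the lock. Below is a minimal, illustrative Go sketch of just the "next free address in an affine /26 block" step; the block type, handle strings, and pre-existing allocations are invented for the example, and the real datastore writes and conflict retries are omitted.

package main

import (
	"errors"
	"fmt"
	"net/netip"
)

// block models one /26 IPAM block with per-address allocations.
// This is an illustrative in-memory stand-in, not Calico's datastore type.
type block struct {
	cidr      netip.Prefix          // e.g. 192.168.63.64/26
	allocated map[netip.Addr]string // address -> handle ID
}

// assignNext claims the lowest free address in the block for the given handle.
func (b *block) assignNext(handle string) (netip.Addr, error) {
	for addr := b.cidr.Addr(); b.cidr.Contains(addr); addr = addr.Next() {
		if _, used := b.allocated[addr]; !used {
			b.allocated[addr] = handle
			return addr, nil
		}
	}
	return netip.Addr{}, errors.New("block is full")
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.63.64/26"),
		allocated: map[netip.Addr]string{
			// earlier allocations in this sketch; the real owners are not shown in the log
			netip.MustParseAddr("192.168.63.64"): "earlier-handle-1",
			netip.MustParseAddr("192.168.63.65"): "earlier-handle-2",
			netip.MustParseAddr("192.168.63.66"): "k8s-pod-network.a15e6d91…",
		},
	}
	ip, err := b.assignNext("k8s-pod-network.300f6ae1…")
	if err != nil {
		panic(err)
	}
	fmt.Println("assigned", ip) // assigned 192.168.63.67
}
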
Dec 13 02:11:06.127085 containerd[1477]: 2024-12-13 02:11:06.088 [INFO][4335] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.63.67/26] IPv6=[] ContainerID="300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607" HandleID="k8s-pod-network.300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0" Dec 13 02:11:06.127697 containerd[1477]: 2024-12-13 02:11:06.091 [INFO][4304] cni-plugin/k8s.go 386: Populated endpoint ContainerID="300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607" Namespace="calico-apiserver" Pod="calico-apiserver-5b74fb4656-ggqmp" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0", GenerateName:"calico-apiserver-5b74fb4656-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b4f4cc6-054d-4a2d-ad48-c71dc0a6d279", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b74fb4656", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"", Pod:"calico-apiserver-5b74fb4656-ggqmp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidddb02f40d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:06.127697 containerd[1477]: 2024-12-13 02:11:06.092 [INFO][4304] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.63.67/32] ContainerID="300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607" Namespace="calico-apiserver" Pod="calico-apiserver-5b74fb4656-ggqmp" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0" Dec 13 02:11:06.127697 containerd[1477]: 2024-12-13 02:11:06.092 [INFO][4304] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidddb02f40d6 ContainerID="300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607" Namespace="calico-apiserver" Pod="calico-apiserver-5b74fb4656-ggqmp" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0" Dec 13 02:11:06.127697 containerd[1477]: 2024-12-13 02:11:06.102 [INFO][4304] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607" Namespace="calico-apiserver" Pod="calico-apiserver-5b74fb4656-ggqmp" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0" Dec 13 02:11:06.127697 containerd[1477]: 2024-12-13 02:11:06.102 [INFO][4304] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607" Namespace="calico-apiserver" Pod="calico-apiserver-5b74fb4656-ggqmp" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0", GenerateName:"calico-apiserver-5b74fb4656-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b4f4cc6-054d-4a2d-ad48-c71dc0a6d279", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b74fb4656", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607", Pod:"calico-apiserver-5b74fb4656-ggqmp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidddb02f40d6", MAC:"a2:a8:ba:f2:dc:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:06.127697 containerd[1477]: 2024-12-13 02:11:06.122 [INFO][4304] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607" Namespace="calico-apiserver" Pod="calico-apiserver-5b74fb4656-ggqmp" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0" Dec 13 02:11:06.162291 systemd[1]: Started cri-containerd-a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41.scope - libcontainer container a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41. Dec 13 02:11:06.171426 systemd-networkd[1361]: caliad7bfaf66d6: Link UP Dec 13 02:11:06.171678 systemd-networkd[1361]: caliad7bfaf66d6: Gained carrier Dec 13 02:11:06.183255 containerd[1477]: time="2024-12-13T02:11:06.182968539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:11:06.183943 containerd[1477]: time="2024-12-13T02:11:06.183882946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:11:06.184659 containerd[1477]: time="2024-12-13T02:11:06.184438111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:11:06.186754 containerd[1477]: time="2024-12-13T02:11:06.186532288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:11:06.196788 containerd[1477]: 2024-12-13 02:11:05.912 [INFO][4316] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 02:11:06.196788 containerd[1477]: 2024-12-13 02:11:05.938 [INFO][4316] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0 csi-node-driver- calico-system 883deafc-f1bd-4933-895f-1acfab27941b 752 0 2024-12-13 02:10:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-2-1-6-b597ddf835 csi-node-driver-tz9nk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliad7bfaf66d6 [] []}} ContainerID="62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165" Namespace="calico-system" Pod="csi-node-driver-tz9nk" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-" Dec 13 02:11:06.196788 containerd[1477]: 2024-12-13 02:11:05.938 [INFO][4316] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165" Namespace="calico-system" Pod="csi-node-driver-tz9nk" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0" Dec 13 02:11:06.196788 containerd[1477]: 2024-12-13 02:11:06.021 [INFO][4340] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165" HandleID="k8s-pod-network.62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165" Workload="ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0" Dec 13 02:11:06.196788 containerd[1477]: 2024-12-13 02:11:06.048 [INFO][4340] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165" HandleID="k8s-pod-network.62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165" Workload="ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316af0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-6-b597ddf835", "pod":"csi-node-driver-tz9nk", "timestamp":"2024-12-13 02:11:06.021761925 +0000 UTC"}, Hostname:"ci-4081-2-1-6-b597ddf835", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:11:06.196788 containerd[1477]: 2024-12-13 02:11:06.048 [INFO][4340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:06.196788 containerd[1477]: 2024-12-13 02:11:06.087 [INFO][4340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
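
Several ADD operations are in flight at once here ([4335] for calico-apiserver, [4340] for csi-node-driver, and more below for the coredns pods), and each one logs "About to acquire host-wide IPAM lock" before touching the block, so per-node assignments are strictly serialized. A rough sketch of that pattern follows, with an in-process mutex standing in for Calico's cross-process host lock; the type and field names are illustrative only.

package main

import (
	"fmt"
	"sync"
)

// hostIPAM serializes all assignments on one node, mirroring the
// "Acquired host-wide IPAM lock." / "Released host-wide IPAM lock." lines above.
type hostIPAM struct {
	mu   sync.Mutex
	next int // next free ordinal in the /26 block (0..63)
}

func (h *hostIPAM) assign(handle string) string {
	h.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer h.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.63.%d", 64+h.next)
	h.next++
	return ip
}

func main() {
	ipam := &hostIPAM{next: 3} // .64 to .66 are already taken at this point in the log
	var wg sync.WaitGroup
	for _, pod := range []string{"calico-apiserver-5b74fb4656-ggqmp", "csi-node-driver-tz9nk"} {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			fmt.Println(pod, "->", ipam.assign("k8s-pod-network."+pod))
		}(pod)
	}
	wg.Wait()
}
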
Dec 13 02:11:06.196788 containerd[1477]: 2024-12-13 02:11:06.087 [INFO][4340] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-6-b597ddf835' Dec 13 02:11:06.196788 containerd[1477]: 2024-12-13 02:11:06.092 [INFO][4340] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.196788 containerd[1477]: 2024-12-13 02:11:06.106 [INFO][4340] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.196788 containerd[1477]: 2024-12-13 02:11:06.120 [INFO][4340] ipam/ipam.go 489: Trying affinity for 192.168.63.64/26 host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.196788 containerd[1477]: 2024-12-13 02:11:06.125 [INFO][4340] ipam/ipam.go 155: Attempting to load block cidr=192.168.63.64/26 host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.196788 containerd[1477]: 2024-12-13 02:11:06.130 [INFO][4340] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.196788 containerd[1477]: 2024-12-13 02:11:06.131 [INFO][4340] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.196788 containerd[1477]: 2024-12-13 02:11:06.136 [INFO][4340] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165 Dec 13 02:11:06.196788 containerd[1477]: 2024-12-13 02:11:06.143 [INFO][4340] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.196788 containerd[1477]: 2024-12-13 02:11:06.154 [INFO][4340] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.63.68/26] block=192.168.63.64/26 handle="k8s-pod-network.62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.196788 containerd[1477]: 2024-12-13 02:11:06.154 [INFO][4340] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.63.68/26] handle="k8s-pod-network.62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.196788 containerd[1477]: 2024-12-13 02:11:06.154 [INFO][4340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 02:11:06.196788 containerd[1477]: 2024-12-13 02:11:06.154 [INFO][4340] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.63.68/26] IPv6=[] ContainerID="62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165" HandleID="k8s-pod-network.62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165" Workload="ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0" Dec 13 02:11:06.197838 containerd[1477]: 2024-12-13 02:11:06.159 [INFO][4316] cni-plugin/k8s.go 386: Populated endpoint ContainerID="62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165" Namespace="calico-system" Pod="csi-node-driver-tz9nk" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"883deafc-f1bd-4933-895f-1acfab27941b", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"", Pod:"csi-node-driver-tz9nk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.63.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliad7bfaf66d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:06.197838 containerd[1477]: 2024-12-13 02:11:06.159 [INFO][4316] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.63.68/32] ContainerID="62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165" Namespace="calico-system" Pod="csi-node-driver-tz9nk" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0" Dec 13 02:11:06.197838 containerd[1477]: 2024-12-13 02:11:06.160 [INFO][4316] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad7bfaf66d6 ContainerID="62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165" Namespace="calico-system" Pod="csi-node-driver-tz9nk" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0" Dec 13 02:11:06.197838 containerd[1477]: 2024-12-13 02:11:06.171 [INFO][4316] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165" Namespace="calico-system" Pod="csi-node-driver-tz9nk" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0" Dec 13 02:11:06.197838 containerd[1477]: 2024-12-13 02:11:06.173 [INFO][4316] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165" Namespace="calico-system" Pod="csi-node-driver-tz9nk" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"883deafc-f1bd-4933-895f-1acfab27941b", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165", Pod:"csi-node-driver-tz9nk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.63.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliad7bfaf66d6", MAC:"92:b3:ff:02:b5:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:06.197838 containerd[1477]: 2024-12-13 02:11:06.193 [INFO][4316] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165" Namespace="calico-system" Pod="csi-node-driver-tz9nk" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0" Dec 13 02:11:06.220281 systemd[1]: Started cri-containerd-300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607.scope - libcontainer container 300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607. Dec 13 02:11:06.238934 containerd[1477]: time="2024-12-13T02:11:06.238832227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:11:06.239347 containerd[1477]: time="2024-12-13T02:11:06.239122590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:11:06.239347 containerd[1477]: time="2024-12-13T02:11:06.239163310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:11:06.239585 containerd[1477]: time="2024-12-13T02:11:06.239473872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:11:06.255315 containerd[1477]: time="2024-12-13T02:11:06.255232879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b5bbd5bc9-stdt4,Uid:8840a18a-f878-4077-b87c-3b4317d6d897,Namespace:calico-system,Attempt:1,} returns sandbox id \"a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41\"" Dec 13 02:11:06.280111 systemd[1]: Started cri-containerd-62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165.scope - libcontainer container 62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165. Dec 13 02:11:06.287167 containerd[1477]: time="2024-12-13T02:11:06.286230168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b74fb4656-ggqmp,Uid:8b4f4cc6-054d-4a2d-ad48-c71dc0a6d279,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607\"" Dec 13 02:11:06.307703 containerd[1477]: time="2024-12-13T02:11:06.307595259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tz9nk,Uid:883deafc-f1bd-4933-895f-1acfab27941b,Namespace:calico-system,Attempt:1,} returns sandbox id \"62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165\"" Dec 13 02:11:06.621392 systemd-networkd[1361]: califf8144c80f3: Gained IPv6LL Dec 13 02:11:06.627643 containerd[1477]: time="2024-12-13T02:11:06.627131304Z" level=info msg="StopPodSandbox for \"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\"" Dec 13 02:11:06.745764 containerd[1477]: 2024-12-13 02:11:06.695 [INFO][4525] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Dec 13 02:11:06.745764 containerd[1477]: 2024-12-13 02:11:06.695 [INFO][4525] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" iface="eth0" netns="/var/run/netns/cni-f87fdee1-a3a8-163a-ff9b-e96d94cc0cf9" Dec 13 02:11:06.745764 containerd[1477]: 2024-12-13 02:11:06.696 [INFO][4525] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" iface="eth0" netns="/var/run/netns/cni-f87fdee1-a3a8-163a-ff9b-e96d94cc0cf9" Dec 13 02:11:06.745764 containerd[1477]: 2024-12-13 02:11:06.697 [INFO][4525] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" iface="eth0" netns="/var/run/netns/cni-f87fdee1-a3a8-163a-ff9b-e96d94cc0cf9" Dec 13 02:11:06.745764 containerd[1477]: 2024-12-13 02:11:06.697 [INFO][4525] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Dec 13 02:11:06.745764 containerd[1477]: 2024-12-13 02:11:06.697 [INFO][4525] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Dec 13 02:11:06.745764 containerd[1477]: 2024-12-13 02:11:06.728 [INFO][4542] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" HandleID="k8s-pod-network.dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0" Dec 13 02:11:06.745764 containerd[1477]: 2024-12-13 02:11:06.728 [INFO][4542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:06.745764 containerd[1477]: 2024-12-13 02:11:06.728 [INFO][4542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:11:06.745764 containerd[1477]: 2024-12-13 02:11:06.740 [WARNING][4542] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" HandleID="k8s-pod-network.dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0" Dec 13 02:11:06.745764 containerd[1477]: 2024-12-13 02:11:06.740 [INFO][4542] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" HandleID="k8s-pod-network.dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0" Dec 13 02:11:06.745764 containerd[1477]: 2024-12-13 02:11:06.742 [INFO][4542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:11:06.745764 containerd[1477]: 2024-12-13 02:11:06.744 [INFO][4525] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Dec 13 02:11:06.747352 containerd[1477]: time="2024-12-13T02:11:06.747182867Z" level=info msg="TearDown network for sandbox \"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\" successfully" Dec 13 02:11:06.747352 containerd[1477]: time="2024-12-13T02:11:06.747219867Z" level=info msg="StopPodSandbox for \"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\" returns successfully" Dec 13 02:11:06.748325 containerd[1477]: time="2024-12-13T02:11:06.748259156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bq88m,Uid:b54c01c8-e7f0-44bb-a06d-6790acc7d0cd,Namespace:kube-system,Attempt:1,}" Dec 13 02:11:06.813759 systemd[1]: run-netns-cni\x2defddcc63\x2dc2cd\x2d248f\x2de313\x2d2eb465040798.mount: Deactivated successfully. Dec 13 02:11:06.813970 systemd[1]: run-netns-cni\x2df87fdee1\x2da3a8\x2d163a\x2dff9b\x2de96d94cc0cf9.mount: Deactivated successfully. 
Dec 13 02:11:06.910875 systemd-networkd[1361]: cali841130e7ce4: Link UP Dec 13 02:11:06.911016 systemd-networkd[1361]: cali841130e7ce4: Gained carrier Dec 13 02:11:06.931104 containerd[1477]: 2024-12-13 02:11:06.786 [INFO][4550] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 02:11:06.931104 containerd[1477]: 2024-12-13 02:11:06.812 [INFO][4550] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0 coredns-7db6d8ff4d- kube-system b54c01c8-e7f0-44bb-a06d-6790acc7d0cd 769 0 2024-12-13 02:10:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-6-b597ddf835 coredns-7db6d8ff4d-bq88m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali841130e7ce4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bq88m" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-" Dec 13 02:11:06.931104 containerd[1477]: 2024-12-13 02:11:06.813 [INFO][4550] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bq88m" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0" Dec 13 02:11:06.931104 containerd[1477]: 2024-12-13 02:11:06.847 [INFO][4562] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db" HandleID="k8s-pod-network.5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0" Dec 13 02:11:06.931104 containerd[1477]: 2024-12-13 02:11:06.864 [INFO][4562] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db" HandleID="k8s-pod-network.5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030d600), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-6-b597ddf835", "pod":"coredns-7db6d8ff4d-bq88m", "timestamp":"2024-12-13 02:11:06.847543432 +0000 UTC"}, Hostname:"ci-4081-2-1-6-b597ddf835", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:11:06.931104 containerd[1477]: 2024-12-13 02:11:06.865 [INFO][4562] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:06.931104 containerd[1477]: 2024-12-13 02:11:06.865 [INFO][4562] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:11:06.931104 containerd[1477]: 2024-12-13 02:11:06.865 [INFO][4562] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-6-b597ddf835' Dec 13 02:11:06.931104 containerd[1477]: 2024-12-13 02:11:06.868 [INFO][4562] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.931104 containerd[1477]: 2024-12-13 02:11:06.876 [INFO][4562] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.931104 containerd[1477]: 2024-12-13 02:11:06.883 [INFO][4562] ipam/ipam.go 489: Trying affinity for 192.168.63.64/26 host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.931104 containerd[1477]: 2024-12-13 02:11:06.886 [INFO][4562] ipam/ipam.go 155: Attempting to load block cidr=192.168.63.64/26 host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.931104 containerd[1477]: 2024-12-13 02:11:06.889 [INFO][4562] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.931104 containerd[1477]: 2024-12-13 02:11:06.889 [INFO][4562] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.931104 containerd[1477]: 2024-12-13 02:11:06.891 [INFO][4562] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db Dec 13 02:11:06.931104 containerd[1477]: 2024-12-13 02:11:06.896 [INFO][4562] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.931104 containerd[1477]: 2024-12-13 02:11:06.904 [INFO][4562] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.63.69/26] block=192.168.63.64/26 handle="k8s-pod-network.5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.931104 containerd[1477]: 2024-12-13 02:11:06.904 [INFO][4562] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.63.69/26] handle="k8s-pod-network.5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:06.931104 containerd[1477]: 2024-12-13 02:11:06.904 [INFO][4562] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 02:11:06.931104 containerd[1477]: 2024-12-13 02:11:06.904 [INFO][4562] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.63.69/26] IPv6=[] ContainerID="5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db" HandleID="k8s-pod-network.5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0" Dec 13 02:11:06.931717 containerd[1477]: 2024-12-13 02:11:06.906 [INFO][4550] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bq88m" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b54c01c8-e7f0-44bb-a06d-6790acc7d0cd", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"", Pod:"coredns-7db6d8ff4d-bq88m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali841130e7ce4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:06.931717 containerd[1477]: 2024-12-13 02:11:06.906 [INFO][4550] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.63.69/32] ContainerID="5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bq88m" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0" Dec 13 02:11:06.931717 containerd[1477]: 2024-12-13 02:11:06.906 [INFO][4550] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali841130e7ce4 ContainerID="5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bq88m" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0" Dec 13 02:11:06.931717 containerd[1477]: 2024-12-13 02:11:06.910 [INFO][4550] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bq88m" 
WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0" Dec 13 02:11:06.931717 containerd[1477]: 2024-12-13 02:11:06.911 [INFO][4550] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bq88m" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b54c01c8-e7f0-44bb-a06d-6790acc7d0cd", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db", Pod:"coredns-7db6d8ff4d-bq88m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali841130e7ce4", MAC:"ee:d6:52:63:6a:fc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:06.931717 containerd[1477]: 2024-12-13 02:11:06.924 [INFO][4550] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db" Namespace="kube-system" Pod="coredns-7db6d8ff4d-bq88m" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0" Dec 13 02:11:06.953520 containerd[1477]: time="2024-12-13T02:11:06.953328561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:11:06.953999 containerd[1477]: time="2024-12-13T02:11:06.953945486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:11:06.953999 containerd[1477]: time="2024-12-13T02:11:06.953969287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:11:06.954340 containerd[1477]: time="2024-12-13T02:11:06.954222729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:11:06.984251 systemd[1]: Started cri-containerd-5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db.scope - libcontainer container 5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db. Dec 13 02:11:07.021086 containerd[1477]: time="2024-12-13T02:11:07.020961142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-bq88m,Uid:b54c01c8-e7f0-44bb-a06d-6790acc7d0cd,Namespace:kube-system,Attempt:1,} returns sandbox id \"5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db\"" Dec 13 02:11:07.034800 containerd[1477]: time="2024-12-13T02:11:07.034728932Z" level=info msg="CreateContainer within sandbox \"5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:11:07.050023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2152224034.mount: Deactivated successfully. Dec 13 02:11:07.053264 containerd[1477]: time="2024-12-13T02:11:07.053172718Z" level=info msg="CreateContainer within sandbox \"5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bcf78e5a3ff5a461db7f550082bf780932f68efe1e85e87a537e86c939104f66\"" Dec 13 02:11:07.055183 containerd[1477]: time="2024-12-13T02:11:07.054188366Z" level=info msg="StartContainer for \"bcf78e5a3ff5a461db7f550082bf780932f68efe1e85e87a537e86c939104f66\"" Dec 13 02:11:07.084262 systemd[1]: Started cri-containerd-bcf78e5a3ff5a461db7f550082bf780932f68efe1e85e87a537e86c939104f66.scope - libcontainer container bcf78e5a3ff5a461db7f550082bf780932f68efe1e85e87a537e86c939104f66. Dec 13 02:11:07.115724 containerd[1477]: time="2024-12-13T02:11:07.115685574Z" level=info msg="StartContainer for \"bcf78e5a3ff5a461db7f550082bf780932f68efe1e85e87a537e86c939104f66\" returns successfully" Dec 13 02:11:07.261197 systemd-networkd[1361]: calidddb02f40d6: Gained IPv6LL Dec 13 02:11:07.634916 containerd[1477]: time="2024-12-13T02:11:07.634531289Z" level=info msg="StopPodSandbox for \"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\"" Dec 13 02:11:07.648484 systemd-networkd[1361]: calid89d505db14: Gained IPv6LL Dec 13 02:11:07.837862 systemd-networkd[1361]: caliad7bfaf66d6: Gained IPv6LL Dec 13 02:11:07.853808 containerd[1477]: 2024-12-13 02:11:07.719 [INFO][4677] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Dec 13 02:11:07.853808 containerd[1477]: 2024-12-13 02:11:07.720 [INFO][4677] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" iface="eth0" netns="/var/run/netns/cni-3484a33e-7676-544c-30bb-25e6fd5bc180" Dec 13 02:11:07.853808 containerd[1477]: 2024-12-13 02:11:07.722 [INFO][4677] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" iface="eth0" netns="/var/run/netns/cni-3484a33e-7676-544c-30bb-25e6fd5bc180" Dec 13 02:11:07.853808 containerd[1477]: 2024-12-13 02:11:07.722 [INFO][4677] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" iface="eth0" netns="/var/run/netns/cni-3484a33e-7676-544c-30bb-25e6fd5bc180" Dec 13 02:11:07.853808 containerd[1477]: 2024-12-13 02:11:07.722 [INFO][4677] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Dec 13 02:11:07.853808 containerd[1477]: 2024-12-13 02:11:07.722 [INFO][4677] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Dec 13 02:11:07.853808 containerd[1477]: 2024-12-13 02:11:07.825 [INFO][4685] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" HandleID="k8s-pod-network.05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0" Dec 13 02:11:07.853808 containerd[1477]: 2024-12-13 02:11:07.825 [INFO][4685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:07.853808 containerd[1477]: 2024-12-13 02:11:07.825 [INFO][4685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:11:07.853808 containerd[1477]: 2024-12-13 02:11:07.843 [WARNING][4685] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" HandleID="k8s-pod-network.05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0" Dec 13 02:11:07.853808 containerd[1477]: 2024-12-13 02:11:07.844 [INFO][4685] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" HandleID="k8s-pod-network.05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0" Dec 13 02:11:07.853808 containerd[1477]: 2024-12-13 02:11:07.846 [INFO][4685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:11:07.853808 containerd[1477]: 2024-12-13 02:11:07.849 [INFO][4677] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Dec 13 02:11:07.853808 containerd[1477]: time="2024-12-13T02:11:07.853684828Z" level=info msg="TearDown network for sandbox \"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\" successfully" Dec 13 02:11:07.853808 containerd[1477]: time="2024-12-13T02:11:07.853712828Z" level=info msg="StopPodSandbox for \"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\" returns successfully" Dec 13 02:11:07.857912 systemd[1]: run-netns-cni\x2d3484a33e\x2d7676\x2d544c\x2d30bb\x2d25e6fd5bc180.mount: Deactivated successfully. 
Dec 13 02:11:07.859953 containerd[1477]: time="2024-12-13T02:11:07.859909757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7qwdw,Uid:be50f835-bcf1-4511-9cef-2190b068c860,Namespace:kube-system,Attempt:1,}" Dec 13 02:11:07.930411 kubelet[2776]: I1213 02:11:07.930098 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-bq88m" podStartSLOduration=35.930079114 podStartE2EDuration="35.930079114s" podCreationTimestamp="2024-12-13 02:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:11:07.928695623 +0000 UTC m=+50.407737201" watchObservedRunningTime="2024-12-13 02:11:07.930079114 +0000 UTC m=+50.409120692" Dec 13 02:11:08.124240 systemd-networkd[1361]: calia4ee023438c: Link UP Dec 13 02:11:08.126840 systemd-networkd[1361]: calia4ee023438c: Gained carrier Dec 13 02:11:08.155484 containerd[1477]: 2024-12-13 02:11:07.952 [INFO][4709] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 02:11:08.155484 containerd[1477]: 2024-12-13 02:11:07.986 [INFO][4709] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0 coredns-7db6d8ff4d- kube-system be50f835-bcf1-4511-9cef-2190b068c860 778 0 2024-12-13 02:10:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-6-b597ddf835 coredns-7db6d8ff4d-7qwdw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia4ee023438c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7qwdw" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-" Dec 13 02:11:08.155484 containerd[1477]: 2024-12-13 02:11:07.986 [INFO][4709] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7qwdw" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0" Dec 13 02:11:08.155484 containerd[1477]: 2024-12-13 02:11:08.032 [INFO][4721] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188" HandleID="k8s-pod-network.5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0" Dec 13 02:11:08.155484 containerd[1477]: 2024-12-13 02:11:08.048 [INFO][4721] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188" HandleID="k8s-pod-network.5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003167d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-6-b597ddf835", "pod":"coredns-7db6d8ff4d-7qwdw", "timestamp":"2024-12-13 02:11:08.032199281 +0000 UTC"}, Hostname:"ci-4081-2-1-6-b597ddf835", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 02:11:08.155484 containerd[1477]: 2024-12-13 02:11:08.049 [INFO][4721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:08.155484 containerd[1477]: 2024-12-13 02:11:08.049 [INFO][4721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:11:08.155484 containerd[1477]: 2024-12-13 02:11:08.049 [INFO][4721] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-6-b597ddf835' Dec 13 02:11:08.155484 containerd[1477]: 2024-12-13 02:11:08.053 [INFO][4721] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:08.155484 containerd[1477]: 2024-12-13 02:11:08.059 [INFO][4721] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:08.155484 containerd[1477]: 2024-12-13 02:11:08.075 [INFO][4721] ipam/ipam.go 489: Trying affinity for 192.168.63.64/26 host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:08.155484 containerd[1477]: 2024-12-13 02:11:08.079 [INFO][4721] ipam/ipam.go 155: Attempting to load block cidr=192.168.63.64/26 host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:08.155484 containerd[1477]: 2024-12-13 02:11:08.083 [INFO][4721] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.63.64/26 host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:08.155484 containerd[1477]: 2024-12-13 02:11:08.084 [INFO][4721] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.63.64/26 handle="k8s-pod-network.5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:08.155484 containerd[1477]: 2024-12-13 02:11:08.088 [INFO][4721] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188 Dec 13 02:11:08.155484 containerd[1477]: 2024-12-13 02:11:08.098 [INFO][4721] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.63.64/26 handle="k8s-pod-network.5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:08.155484 containerd[1477]: 2024-12-13 02:11:08.114 [INFO][4721] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.63.70/26] block=192.168.63.64/26 handle="k8s-pod-network.5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:08.155484 containerd[1477]: 2024-12-13 02:11:08.114 [INFO][4721] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.63.70/26] handle="k8s-pod-network.5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188" host="ci-4081-2-1-6-b597ddf835" Dec 13 02:11:08.155484 containerd[1477]: 2024-12-13 02:11:08.114 [INFO][4721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
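
The kubelet pod_startup_latency_tracker line above reports podStartSLOduration=35.930079114 for coredns-7db6d8ff4d-bq88m; that is simply watchObservedRunningTime (2024-12-13 02:11:07.930079114 UTC) minus podCreationTimestamp (2024-12-13 02:10:32 UTC), with firstStartedPulling/lastFinishedPulling left at the zero time. A quick check of the subtraction:

package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2024-12-13T02:10:32Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2024-12-13T02:11:07.930079114Z")
	fmt.Println(observed.Sub(created).Seconds()) // 35.930079114
}
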
Dec 13 02:11:08.155484 containerd[1477]: 2024-12-13 02:11:08.114 [INFO][4721] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.63.70/26] IPv6=[] ContainerID="5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188" HandleID="k8s-pod-network.5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0" Dec 13 02:11:08.157120 containerd[1477]: 2024-12-13 02:11:08.118 [INFO][4709] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7qwdw" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"be50f835-bcf1-4511-9cef-2190b068c860", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"", Pod:"coredns-7db6d8ff4d-7qwdw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia4ee023438c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:08.157120 containerd[1477]: 2024-12-13 02:11:08.118 [INFO][4709] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.63.70/32] ContainerID="5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7qwdw" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0" Dec 13 02:11:08.157120 containerd[1477]: 2024-12-13 02:11:08.118 [INFO][4709] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4ee023438c ContainerID="5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7qwdw" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0" Dec 13 02:11:08.157120 containerd[1477]: 2024-12-13 02:11:08.128 [INFO][4709] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7qwdw" 
WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0" Dec 13 02:11:08.157120 containerd[1477]: 2024-12-13 02:11:08.129 [INFO][4709] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7qwdw" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"be50f835-bcf1-4511-9cef-2190b068c860", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188", Pod:"coredns-7db6d8ff4d-7qwdw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia4ee023438c", MAC:"92:8e:11:98:7f:cc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:08.157120 containerd[1477]: 2024-12-13 02:11:08.148 [INFO][4709] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188" Namespace="kube-system" Pod="coredns-7db6d8ff4d-7qwdw" WorkloadEndpoint="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0" Dec 13 02:11:08.191882 containerd[1477]: time="2024-12-13T02:11:08.191555611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:11:08.191882 containerd[1477]: time="2024-12-13T02:11:08.191621651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:11:08.194389 containerd[1477]: time="2024-12-13T02:11:08.191797533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:11:08.194389 containerd[1477]: time="2024-12-13T02:11:08.193614867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:11:08.222318 systemd[1]: Started cri-containerd-5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188.scope - libcontainer container 5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188. Dec 13 02:11:08.262162 containerd[1477]: time="2024-12-13T02:11:08.262110164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:11:08.262464 containerd[1477]: time="2024-12-13T02:11:08.262415327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Dec 13 02:11:08.263317 containerd[1477]: time="2024-12-13T02:11:08.263276853Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:11:08.267005 containerd[1477]: time="2024-12-13T02:11:08.266964842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:11:08.268975 containerd[1477]: time="2024-12-13T02:11:08.268849817Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 3.189686665s" Dec 13 02:11:08.268975 containerd[1477]: time="2024-12-13T02:11:08.268897057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 02:11:08.271832 containerd[1477]: time="2024-12-13T02:11:08.271802800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7qwdw,Uid:be50f835-bcf1-4511-9cef-2190b068c860,Namespace:kube-system,Attempt:1,} returns sandbox id \"5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188\"" Dec 13 02:11:08.273295 containerd[1477]: time="2024-12-13T02:11:08.273266172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 02:11:08.275524 containerd[1477]: time="2024-12-13T02:11:08.275472309Z" level=info msg="CreateContainer within sandbox \"021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 02:11:08.279967 containerd[1477]: time="2024-12-13T02:11:08.279574261Z" level=info msg="CreateContainer within sandbox \"5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:11:08.306435 containerd[1477]: time="2024-12-13T02:11:08.305428344Z" level=info msg="CreateContainer within sandbox \"021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"98b6f83f25b2d4dbb8a15f1bbf5998b0be5fbf7d5cf628f39eafcbcbdbdd032b\"" Dec 13 02:11:08.307594 containerd[1477]: time="2024-12-13T02:11:08.307535881Z" level=info msg="StartContainer for \"98b6f83f25b2d4dbb8a15f1bbf5998b0be5fbf7d5cf628f39eafcbcbdbdd032b\"" Dec 13 02:11:08.323661 containerd[1477]: time="2024-12-13T02:11:08.323519246Z" 
level=info msg="CreateContainer within sandbox \"5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"25b1e9b0a2103d02fdfb3fc89471798293f70e422e28c5d3993037a385eac438\"" Dec 13 02:11:08.327234 containerd[1477]: time="2024-12-13T02:11:08.327105754Z" level=info msg="StartContainer for \"25b1e9b0a2103d02fdfb3fc89471798293f70e422e28c5d3993037a385eac438\"" Dec 13 02:11:08.355666 systemd[1]: Started cri-containerd-98b6f83f25b2d4dbb8a15f1bbf5998b0be5fbf7d5cf628f39eafcbcbdbdd032b.scope - libcontainer container 98b6f83f25b2d4dbb8a15f1bbf5998b0be5fbf7d5cf628f39eafcbcbdbdd032b. Dec 13 02:11:08.370260 systemd[1]: Started cri-containerd-25b1e9b0a2103d02fdfb3fc89471798293f70e422e28c5d3993037a385eac438.scope - libcontainer container 25b1e9b0a2103d02fdfb3fc89471798293f70e422e28c5d3993037a385eac438. Dec 13 02:11:08.408877 containerd[1477]: time="2024-12-13T02:11:08.408781075Z" level=info msg="StartContainer for \"98b6f83f25b2d4dbb8a15f1bbf5998b0be5fbf7d5cf628f39eafcbcbdbdd032b\" returns successfully" Dec 13 02:11:08.413931 containerd[1477]: time="2024-12-13T02:11:08.413874714Z" level=info msg="StartContainer for \"25b1e9b0a2103d02fdfb3fc89471798293f70e422e28c5d3993037a385eac438\" returns successfully" Dec 13 02:11:08.414619 systemd-networkd[1361]: cali841130e7ce4: Gained IPv6LL Dec 13 02:11:08.925474 kubelet[2776]: I1213 02:11:08.925410 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7qwdw" podStartSLOduration=36.925390686 podStartE2EDuration="36.925390686s" podCreationTimestamp="2024-12-13 02:10:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:11:08.922579144 +0000 UTC m=+51.401620722" watchObservedRunningTime="2024-12-13 02:11:08.925390686 +0000 UTC m=+51.404432264" Dec 13 02:11:09.309231 systemd-networkd[1361]: calia4ee023438c: Gained IPv6LL Dec 13 02:11:09.585582 kubelet[2776]: I1213 02:11:09.584966 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b74fb4656-ldnns" podStartSLOduration=26.391390393000002 podStartE2EDuration="29.584945928s" podCreationTimestamp="2024-12-13 02:10:40 +0000 UTC" firstStartedPulling="2024-12-13 02:11:05.077891942 +0000 UTC m=+47.556933480" lastFinishedPulling="2024-12-13 02:11:08.271447477 +0000 UTC m=+50.750489015" observedRunningTime="2024-12-13 02:11:08.97177557 +0000 UTC m=+51.450817228" watchObservedRunningTime="2024-12-13 02:11:09.584945928 +0000 UTC m=+52.063987506" Dec 13 02:11:10.900104 kubelet[2776]: I1213 02:11:10.899963 2776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:11:11.542547 containerd[1477]: time="2024-12-13T02:11:11.542484135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:11:11.544811 containerd[1477]: time="2024-12-13T02:11:11.544768272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Dec 13 02:11:11.546244 containerd[1477]: time="2024-12-13T02:11:11.546189443Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:11:11.551251 containerd[1477]: time="2024-12-13T02:11:11.551200481Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:11:11.553079 containerd[1477]: time="2024-12-13T02:11:11.552614412Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 3.27930416s" Dec 13 02:11:11.553079 containerd[1477]: time="2024-12-13T02:11:11.552653892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Dec 13 02:11:11.556452 containerd[1477]: time="2024-12-13T02:11:11.554853669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 02:11:11.597580 containerd[1477]: time="2024-12-13T02:11:11.597539273Z" level=info msg="CreateContainer within sandbox \"a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 02:11:11.628859 containerd[1477]: time="2024-12-13T02:11:11.628734830Z" level=info msg="CreateContainer within sandbox \"a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0d124dee72fe87e9f22e559e3c2855dfc1e230e43f0d9e5f84717eeb96c53805\"" Dec 13 02:11:11.630111 containerd[1477]: time="2024-12-13T02:11:11.629713837Z" level=info msg="StartContainer for \"0d124dee72fe87e9f22e559e3c2855dfc1e230e43f0d9e5f84717eeb96c53805\"" Dec 13 02:11:11.665258 systemd[1]: Started cri-containerd-0d124dee72fe87e9f22e559e3c2855dfc1e230e43f0d9e5f84717eeb96c53805.scope - libcontainer container 0d124dee72fe87e9f22e559e3c2855dfc1e230e43f0d9e5f84717eeb96c53805. 
Dec 13 02:11:11.720558 containerd[1477]: time="2024-12-13T02:11:11.720490726Z" level=info msg="StartContainer for \"0d124dee72fe87e9f22e559e3c2855dfc1e230e43f0d9e5f84717eeb96c53805\" returns successfully" Dec 13 02:11:11.831116 kernel: bpftool[5010]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 02:11:11.957866 kubelet[2776]: I1213 02:11:11.957267 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b5bbd5bc9-stdt4" podStartSLOduration=25.661984808 podStartE2EDuration="30.957250843s" podCreationTimestamp="2024-12-13 02:10:41 +0000 UTC" firstStartedPulling="2024-12-13 02:11:06.259559234 +0000 UTC m=+48.738600812" lastFinishedPulling="2024-12-13 02:11:11.554824869 +0000 UTC m=+54.033866847" observedRunningTime="2024-12-13 02:11:11.955465469 +0000 UTC m=+54.434507047" watchObservedRunningTime="2024-12-13 02:11:11.957250843 +0000 UTC m=+54.436292421" Dec 13 02:11:11.960941 containerd[1477]: time="2024-12-13T02:11:11.960735709Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:11:11.963772 containerd[1477]: time="2024-12-13T02:11:11.963683772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 02:11:11.966587 containerd[1477]: time="2024-12-13T02:11:11.966538353Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 411.650404ms" Dec 13 02:11:11.966841 containerd[1477]: time="2024-12-13T02:11:11.966592634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 02:11:11.972928 containerd[1477]: time="2024-12-13T02:11:11.971666352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 02:11:11.975013 containerd[1477]: time="2024-12-13T02:11:11.974817496Z" level=info msg="CreateContainer within sandbox \"300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 02:11:12.003893 containerd[1477]: time="2024-12-13T02:11:12.003838516Z" level=info msg="CreateContainer within sandbox \"300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6163c1f83a31f8941cdd74ac4845e32ce2574149f6c65168254e8d5feb9e971c\"" Dec 13 02:11:12.005249 containerd[1477]: time="2024-12-13T02:11:12.004435001Z" level=info msg="StartContainer for \"6163c1f83a31f8941cdd74ac4845e32ce2574149f6c65168254e8d5feb9e971c\"" Dec 13 02:11:12.052134 systemd[1]: Started cri-containerd-6163c1f83a31f8941cdd74ac4845e32ce2574149f6c65168254e8d5feb9e971c.scope - libcontainer container 6163c1f83a31f8941cdd74ac4845e32ce2574149f6c65168254e8d5feb9e971c. 
Dec 13 02:11:12.124429 containerd[1477]: time="2024-12-13T02:11:12.124315301Z" level=info msg="StartContainer for \"6163c1f83a31f8941cdd74ac4845e32ce2574149f6c65168254e8d5feb9e971c\" returns successfully" Dec 13 02:11:12.294002 systemd-networkd[1361]: vxlan.calico: Link UP Dec 13 02:11:12.294010 systemd-networkd[1361]: vxlan.calico: Gained carrier Dec 13 02:11:12.970700 kubelet[2776]: I1213 02:11:12.970601 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b74fb4656-ggqmp" podStartSLOduration=26.287638291 podStartE2EDuration="31.970582296s" podCreationTimestamp="2024-12-13 02:10:41 +0000 UTC" firstStartedPulling="2024-12-13 02:11:06.288317024 +0000 UTC m=+48.767358602" lastFinishedPulling="2024-12-13 02:11:11.971261029 +0000 UTC m=+54.450302607" observedRunningTime="2024-12-13 02:11:12.970102773 +0000 UTC m=+55.449144471" watchObservedRunningTime="2024-12-13 02:11:12.970582296 +0000 UTC m=+55.449623874" Dec 13 02:11:13.469430 systemd-networkd[1361]: vxlan.calico: Gained IPv6LL Dec 13 02:11:13.854294 containerd[1477]: time="2024-12-13T02:11:13.852489814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:11:13.855031 containerd[1477]: time="2024-12-13T02:11:13.854523469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 02:11:13.855031 containerd[1477]: time="2024-12-13T02:11:13.854613150Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:11:13.858193 containerd[1477]: time="2024-12-13T02:11:13.856843247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:11:13.858974 containerd[1477]: time="2024-12-13T02:11:13.858865062Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.887147789s" Dec 13 02:11:13.859219 containerd[1477]: time="2024-12-13T02:11:13.859149904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 02:11:13.864576 containerd[1477]: time="2024-12-13T02:11:13.864520464Z" level=info msg="CreateContainer within sandbox \"62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 02:11:13.890239 containerd[1477]: time="2024-12-13T02:11:13.888795564Z" level=info msg="CreateContainer within sandbox \"62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"77c76784c6dd8e82e4170cff40c0c85ff64335126546d91ed5f3819e32022d1f\"" Dec 13 02:11:13.890624 containerd[1477]: time="2024-12-13T02:11:13.890586817Z" level=info msg="StartContainer for \"77c76784c6dd8e82e4170cff40c0c85ff64335126546d91ed5f3819e32022d1f\"" Dec 13 02:11:13.942563 systemd[1]: Started 
cri-containerd-77c76784c6dd8e82e4170cff40c0c85ff64335126546d91ed5f3819e32022d1f.scope - libcontainer container 77c76784c6dd8e82e4170cff40c0c85ff64335126546d91ed5f3819e32022d1f. Dec 13 02:11:13.994907 containerd[1477]: time="2024-12-13T02:11:13.994783192Z" level=info msg="StartContainer for \"77c76784c6dd8e82e4170cff40c0c85ff64335126546d91ed5f3819e32022d1f\" returns successfully" Dec 13 02:11:13.998068 containerd[1477]: time="2024-12-13T02:11:13.997950336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 02:11:15.682882 containerd[1477]: time="2024-12-13T02:11:15.682130920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:11:15.683341 containerd[1477]: time="2024-12-13T02:11:15.683311969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 02:11:15.683804 containerd[1477]: time="2024-12-13T02:11:15.683778732Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:11:15.686129 containerd[1477]: time="2024-12-13T02:11:15.686094149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 02:11:15.686770 containerd[1477]: time="2024-12-13T02:11:15.686726834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.688721378s" Dec 13 02:11:15.686885 containerd[1477]: time="2024-12-13T02:11:15.686865475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 02:11:15.692104 containerd[1477]: time="2024-12-13T02:11:15.692043033Z" level=info msg="CreateContainer within sandbox \"62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 02:11:15.714266 containerd[1477]: time="2024-12-13T02:11:15.714220834Z" level=info msg="CreateContainer within sandbox \"62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1e43dbc91170a7b5754296bc0a22adca6857cab89066949a5a9e08ed93de3b7d\"" Dec 13 02:11:15.716898 containerd[1477]: time="2024-12-13T02:11:15.715294442Z" level=info msg="StartContainer for \"1e43dbc91170a7b5754296bc0a22adca6857cab89066949a5a9e08ed93de3b7d\"" Dec 13 02:11:15.757531 systemd[1]: run-containerd-runc-k8s.io-1e43dbc91170a7b5754296bc0a22adca6857cab89066949a5a9e08ed93de3b7d-runc.lHc13j.mount: Deactivated successfully. Dec 13 02:11:15.768422 systemd[1]: Started cri-containerd-1e43dbc91170a7b5754296bc0a22adca6857cab89066949a5a9e08ed93de3b7d.scope - libcontainer container 1e43dbc91170a7b5754296bc0a22adca6857cab89066949a5a9e08ed93de3b7d. 
Dec 13 02:11:15.819378 containerd[1477]: time="2024-12-13T02:11:15.817834749Z" level=info msg="StartContainer for \"1e43dbc91170a7b5754296bc0a22adca6857cab89066949a5a9e08ed93de3b7d\" returns successfully" Dec 13 02:11:15.986865 kubelet[2776]: I1213 02:11:15.986316 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tz9nk" podStartSLOduration=25.607910529 podStartE2EDuration="34.986295257s" podCreationTimestamp="2024-12-13 02:10:41 +0000 UTC" firstStartedPulling="2024-12-13 02:11:06.309449714 +0000 UTC m=+48.788491292" lastFinishedPulling="2024-12-13 02:11:15.687834442 +0000 UTC m=+58.166876020" observedRunningTime="2024-12-13 02:11:15.984710325 +0000 UTC m=+58.463751903" watchObservedRunningTime="2024-12-13 02:11:15.986295257 +0000 UTC m=+58.465336835" Dec 13 02:11:16.752235 kubelet[2776]: I1213 02:11:16.752178 2776 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 02:11:16.765664 kubelet[2776]: I1213 02:11:16.765621 2776 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 02:11:17.643458 containerd[1477]: time="2024-12-13T02:11:17.643408854Z" level=info msg="StopPodSandbox for \"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\"" Dec 13 02:11:17.747416 containerd[1477]: 2024-12-13 02:11:17.699 [WARNING][5262] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b54c01c8-e7f0-44bb-a06d-6790acc7d0cd", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db", Pod:"coredns-7db6d8ff4d-bq88m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali841130e7ce4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 
02:11:17.747416 containerd[1477]: 2024-12-13 02:11:17.699 [INFO][5262] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Dec 13 02:11:17.747416 containerd[1477]: 2024-12-13 02:11:17.699 [INFO][5262] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" iface="eth0" netns="" Dec 13 02:11:17.747416 containerd[1477]: 2024-12-13 02:11:17.699 [INFO][5262] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Dec 13 02:11:17.747416 containerd[1477]: 2024-12-13 02:11:17.699 [INFO][5262] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Dec 13 02:11:17.747416 containerd[1477]: 2024-12-13 02:11:17.730 [INFO][5268] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" HandleID="k8s-pod-network.dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0" Dec 13 02:11:17.747416 containerd[1477]: 2024-12-13 02:11:17.730 [INFO][5268] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:17.747416 containerd[1477]: 2024-12-13 02:11:17.730 [INFO][5268] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:11:17.747416 containerd[1477]: 2024-12-13 02:11:17.741 [WARNING][5268] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" HandleID="k8s-pod-network.dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0" Dec 13 02:11:17.747416 containerd[1477]: 2024-12-13 02:11:17.741 [INFO][5268] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" HandleID="k8s-pod-network.dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0" Dec 13 02:11:17.747416 containerd[1477]: 2024-12-13 02:11:17.743 [INFO][5268] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:11:17.747416 containerd[1477]: 2024-12-13 02:11:17.745 [INFO][5262] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Dec 13 02:11:17.749842 containerd[1477]: time="2024-12-13T02:11:17.747445758Z" level=info msg="TearDown network for sandbox \"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\" successfully" Dec 13 02:11:17.749842 containerd[1477]: time="2024-12-13T02:11:17.747472678Z" level=info msg="StopPodSandbox for \"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\" returns successfully" Dec 13 02:11:17.749842 containerd[1477]: time="2024-12-13T02:11:17.748086242Z" level=info msg="RemovePodSandbox for \"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\"" Dec 13 02:11:17.753839 containerd[1477]: time="2024-12-13T02:11:17.753425921Z" level=info msg="Forcibly stopping sandbox \"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\"" Dec 13 02:11:17.846373 containerd[1477]: 2024-12-13 02:11:17.805 [WARNING][5286] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b54c01c8-e7f0-44bb-a06d-6790acc7d0cd", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"5270094399c5295cdf930d5e21d2f50bf7ffb60572f9f7fc1ac85081b47b38db", Pod:"coredns-7db6d8ff4d-bq88m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali841130e7ce4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:17.846373 containerd[1477]: 2024-12-13 02:11:17.805 [INFO][5286] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Dec 13 02:11:17.846373 containerd[1477]: 2024-12-13 02:11:17.805 [INFO][5286] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" iface="eth0" netns="" Dec 13 02:11:17.846373 containerd[1477]: 2024-12-13 02:11:17.805 [INFO][5286] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Dec 13 02:11:17.846373 containerd[1477]: 2024-12-13 02:11:17.805 [INFO][5286] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Dec 13 02:11:17.846373 containerd[1477]: 2024-12-13 02:11:17.830 [INFO][5292] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" HandleID="k8s-pod-network.dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0" Dec 13 02:11:17.846373 containerd[1477]: 2024-12-13 02:11:17.831 [INFO][5292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:17.846373 containerd[1477]: 2024-12-13 02:11:17.831 [INFO][5292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:11:17.846373 containerd[1477]: 2024-12-13 02:11:17.841 [WARNING][5292] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" HandleID="k8s-pod-network.dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0" Dec 13 02:11:17.846373 containerd[1477]: 2024-12-13 02:11:17.841 [INFO][5292] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" HandleID="k8s-pod-network.dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--bq88m-eth0" Dec 13 02:11:17.846373 containerd[1477]: 2024-12-13 02:11:17.843 [INFO][5292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:11:17.846373 containerd[1477]: 2024-12-13 02:11:17.844 [INFO][5286] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209" Dec 13 02:11:17.847589 containerd[1477]: time="2024-12-13T02:11:17.846961349Z" level=info msg="TearDown network for sandbox \"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\" successfully" Dec 13 02:11:17.851925 containerd[1477]: time="2024-12-13T02:11:17.851742184Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:11:17.851925 containerd[1477]: time="2024-12-13T02:11:17.851841224Z" level=info msg="RemovePodSandbox \"dae93e1a9ce8b15ea3882b78865ae09159f2bbc53b21db42715386e0e8aa3209\" returns successfully" Dec 13 02:11:17.852793 containerd[1477]: time="2024-12-13T02:11:17.852747951Z" level=info msg="StopPodSandbox for \"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\"" Dec 13 02:11:17.947896 containerd[1477]: 2024-12-13 02:11:17.906 [WARNING][5310] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0", GenerateName:"calico-kube-controllers-7b5bbd5bc9-", Namespace:"calico-system", SelfLink:"", UID:"8840a18a-f878-4077-b87c-3b4317d6d897", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b5bbd5bc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41", Pod:"calico-kube-controllers-7b5bbd5bc9-stdt4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid89d505db14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:17.947896 containerd[1477]: 2024-12-13 02:11:17.908 [INFO][5310] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Dec 13 02:11:17.947896 containerd[1477]: 2024-12-13 02:11:17.908 [INFO][5310] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" iface="eth0" netns="" Dec 13 02:11:17.947896 containerd[1477]: 2024-12-13 02:11:17.908 [INFO][5310] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Dec 13 02:11:17.947896 containerd[1477]: 2024-12-13 02:11:17.908 [INFO][5310] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Dec 13 02:11:17.947896 containerd[1477]: 2024-12-13 02:11:17.932 [INFO][5319] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" HandleID="k8s-pod-network.46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0" Dec 13 02:11:17.947896 containerd[1477]: 2024-12-13 02:11:17.933 [INFO][5319] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:17.947896 containerd[1477]: 2024-12-13 02:11:17.933 [INFO][5319] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:11:17.947896 containerd[1477]: 2024-12-13 02:11:17.942 [WARNING][5319] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" HandleID="k8s-pod-network.46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0" Dec 13 02:11:17.947896 containerd[1477]: 2024-12-13 02:11:17.942 [INFO][5319] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" HandleID="k8s-pod-network.46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0" Dec 13 02:11:17.947896 containerd[1477]: 2024-12-13 02:11:17.944 [INFO][5319] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:11:17.947896 containerd[1477]: 2024-12-13 02:11:17.946 [INFO][5310] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Dec 13 02:11:17.949248 containerd[1477]: time="2024-12-13T02:11:17.948788477Z" level=info msg="TearDown network for sandbox \"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\" successfully" Dec 13 02:11:17.949248 containerd[1477]: time="2024-12-13T02:11:17.948830278Z" level=info msg="StopPodSandbox for \"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\" returns successfully" Dec 13 02:11:17.950164 containerd[1477]: time="2024-12-13T02:11:17.949958206Z" level=info msg="RemovePodSandbox for \"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\"" Dec 13 02:11:17.950164 containerd[1477]: time="2024-12-13T02:11:17.950004526Z" level=info msg="Forcibly stopping sandbox \"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\"" Dec 13 02:11:18.033148 containerd[1477]: 2024-12-13 02:11:17.992 [WARNING][5337] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0", GenerateName:"calico-kube-controllers-7b5bbd5bc9-", Namespace:"calico-system", SelfLink:"", UID:"8840a18a-f878-4077-b87c-3b4317d6d897", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b5bbd5bc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"a15e6d9164d8cbb51e8cb6edc3bdf6ceae236be85cf8bfb2599caf4b2587ce41", Pod:"calico-kube-controllers-7b5bbd5bc9-stdt4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid89d505db14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:18.033148 containerd[1477]: 2024-12-13 02:11:17.992 [INFO][5337] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Dec 13 02:11:18.033148 containerd[1477]: 2024-12-13 02:11:17.992 [INFO][5337] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" iface="eth0" netns="" Dec 13 02:11:18.033148 containerd[1477]: 2024-12-13 02:11:17.992 [INFO][5337] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Dec 13 02:11:18.033148 containerd[1477]: 2024-12-13 02:11:17.992 [INFO][5337] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Dec 13 02:11:18.033148 containerd[1477]: 2024-12-13 02:11:18.012 [INFO][5344] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" HandleID="k8s-pod-network.46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0" Dec 13 02:11:18.033148 containerd[1477]: 2024-12-13 02:11:18.012 [INFO][5344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:18.033148 containerd[1477]: 2024-12-13 02:11:18.012 [INFO][5344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:11:18.033148 containerd[1477]: 2024-12-13 02:11:18.026 [WARNING][5344] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" HandleID="k8s-pod-network.46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0" Dec 13 02:11:18.033148 containerd[1477]: 2024-12-13 02:11:18.026 [INFO][5344] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" HandleID="k8s-pod-network.46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--kube--controllers--7b5bbd5bc9--stdt4-eth0" Dec 13 02:11:18.033148 containerd[1477]: 2024-12-13 02:11:18.028 [INFO][5344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:11:18.033148 containerd[1477]: 2024-12-13 02:11:18.030 [INFO][5337] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9" Dec 13 02:11:18.033148 containerd[1477]: time="2024-12-13T02:11:18.032551434Z" level=info msg="TearDown network for sandbox \"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\" successfully" Dec 13 02:11:18.036502 containerd[1477]: time="2024-12-13T02:11:18.036457782Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:11:18.036720 containerd[1477]: time="2024-12-13T02:11:18.036669463Z" level=info msg="RemovePodSandbox \"46d4c1ec3b8710f38b7690b237948855da2f6d4f70a57e5969037c4348c9aca9\" returns successfully" Dec 13 02:11:18.037480 containerd[1477]: time="2024-12-13T02:11:18.037215067Z" level=info msg="StopPodSandbox for \"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\"" Dec 13 02:11:18.122293 containerd[1477]: 2024-12-13 02:11:18.085 [WARNING][5362] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"883deafc-f1bd-4933-895f-1acfab27941b", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165", Pod:"csi-node-driver-tz9nk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.63.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliad7bfaf66d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:18.122293 containerd[1477]: 2024-12-13 02:11:18.086 [INFO][5362] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Dec 13 02:11:18.122293 containerd[1477]: 2024-12-13 02:11:18.086 [INFO][5362] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" iface="eth0" netns="" Dec 13 02:11:18.122293 containerd[1477]: 2024-12-13 02:11:18.086 [INFO][5362] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Dec 13 02:11:18.122293 containerd[1477]: 2024-12-13 02:11:18.086 [INFO][5362] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Dec 13 02:11:18.122293 containerd[1477]: 2024-12-13 02:11:18.107 [INFO][5368] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" HandleID="k8s-pod-network.5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Workload="ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0" Dec 13 02:11:18.122293 containerd[1477]: 2024-12-13 02:11:18.107 [INFO][5368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:18.122293 containerd[1477]: 2024-12-13 02:11:18.107 [INFO][5368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:11:18.122293 containerd[1477]: 2024-12-13 02:11:18.116 [WARNING][5368] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" HandleID="k8s-pod-network.5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Workload="ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0" Dec 13 02:11:18.122293 containerd[1477]: 2024-12-13 02:11:18.116 [INFO][5368] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" HandleID="k8s-pod-network.5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Workload="ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0" Dec 13 02:11:18.122293 containerd[1477]: 2024-12-13 02:11:18.118 [INFO][5368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:11:18.122293 containerd[1477]: 2024-12-13 02:11:18.120 [INFO][5362] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Dec 13 02:11:18.123378 containerd[1477]: time="2024-12-13T02:11:18.122348270Z" level=info msg="TearDown network for sandbox \"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\" successfully" Dec 13 02:11:18.123378 containerd[1477]: time="2024-12-13T02:11:18.122379510Z" level=info msg="StopPodSandbox for \"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\" returns successfully" Dec 13 02:11:18.123979 containerd[1477]: time="2024-12-13T02:11:18.123636359Z" level=info msg="RemovePodSandbox for \"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\"" Dec 13 02:11:18.123979 containerd[1477]: time="2024-12-13T02:11:18.123676520Z" level=info msg="Forcibly stopping sandbox \"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\"" Dec 13 02:11:18.211871 containerd[1477]: 2024-12-13 02:11:18.171 [WARNING][5386] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"883deafc-f1bd-4933-895f-1acfab27941b", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"62e5a3f28807fc4a23429b5962614f91589387e79a0b0b7ee2b413b60df3c165", Pod:"csi-node-driver-tz9nk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.63.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliad7bfaf66d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:18.211871 containerd[1477]: 2024-12-13 02:11:18.171 [INFO][5386] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Dec 13 02:11:18.211871 containerd[1477]: 2024-12-13 02:11:18.171 [INFO][5386] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" iface="eth0" netns="" Dec 13 02:11:18.211871 containerd[1477]: 2024-12-13 02:11:18.172 [INFO][5386] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Dec 13 02:11:18.211871 containerd[1477]: 2024-12-13 02:11:18.172 [INFO][5386] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Dec 13 02:11:18.211871 containerd[1477]: 2024-12-13 02:11:18.194 [INFO][5392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" HandleID="k8s-pod-network.5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Workload="ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0" Dec 13 02:11:18.211871 containerd[1477]: 2024-12-13 02:11:18.194 [INFO][5392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:18.211871 containerd[1477]: 2024-12-13 02:11:18.194 [INFO][5392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:11:18.211871 containerd[1477]: 2024-12-13 02:11:18.204 [WARNING][5392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" HandleID="k8s-pod-network.5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Workload="ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0" Dec 13 02:11:18.211871 containerd[1477]: 2024-12-13 02:11:18.204 [INFO][5392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" HandleID="k8s-pod-network.5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Workload="ci--4081--2--1--6--b597ddf835-k8s-csi--node--driver--tz9nk-eth0" Dec 13 02:11:18.211871 containerd[1477]: 2024-12-13 02:11:18.207 [INFO][5392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:11:18.211871 containerd[1477]: 2024-12-13 02:11:18.209 [INFO][5386] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585" Dec 13 02:11:18.211871 containerd[1477]: time="2024-12-13T02:11:18.211693503Z" level=info msg="TearDown network for sandbox \"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\" successfully" Dec 13 02:11:18.217315 containerd[1477]: time="2024-12-13T02:11:18.217129182Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:11:18.217315 containerd[1477]: time="2024-12-13T02:11:18.217208942Z" level=info msg="RemovePodSandbox \"5afcb22a7467031d663dc08ecb5d908a49a33c6ea5ebaa0cf94cd8c68acdf585\" returns successfully" Dec 13 02:11:18.217886 containerd[1477]: time="2024-12-13T02:11:18.217783986Z" level=info msg="StopPodSandbox for \"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\"" Dec 13 02:11:18.297667 containerd[1477]: 2024-12-13 02:11:18.257 [WARNING][5410] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0", GenerateName:"calico-apiserver-5b74fb4656-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe0d1aa3-9d9e-446d-9089-c44a1868215f", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b74fb4656", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758", Pod:"calico-apiserver-5b74fb4656-ldnns", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf8144c80f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:18.297667 containerd[1477]: 2024-12-13 02:11:18.258 [INFO][5410] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Dec 13 02:11:18.297667 containerd[1477]: 2024-12-13 02:11:18.258 [INFO][5410] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" iface="eth0" netns="" Dec 13 02:11:18.297667 containerd[1477]: 2024-12-13 02:11:18.258 [INFO][5410] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Dec 13 02:11:18.297667 containerd[1477]: 2024-12-13 02:11:18.258 [INFO][5410] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Dec 13 02:11:18.297667 containerd[1477]: 2024-12-13 02:11:18.282 [INFO][5416] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" HandleID="k8s-pod-network.d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0" Dec 13 02:11:18.297667 containerd[1477]: 2024-12-13 02:11:18.282 [INFO][5416] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:18.297667 containerd[1477]: 2024-12-13 02:11:18.282 [INFO][5416] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:11:18.297667 containerd[1477]: 2024-12-13 02:11:18.292 [WARNING][5416] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" HandleID="k8s-pod-network.d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0" Dec 13 02:11:18.297667 containerd[1477]: 2024-12-13 02:11:18.292 [INFO][5416] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" HandleID="k8s-pod-network.d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0" Dec 13 02:11:18.297667 containerd[1477]: 2024-12-13 02:11:18.293 [INFO][5416] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:11:18.297667 containerd[1477]: 2024-12-13 02:11:18.296 [INFO][5410] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Dec 13 02:11:18.298319 containerd[1477]: time="2024-12-13T02:11:18.297645392Z" level=info msg="TearDown network for sandbox \"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\" successfully" Dec 13 02:11:18.298319 containerd[1477]: time="2024-12-13T02:11:18.298121555Z" level=info msg="StopPodSandbox for \"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\" returns successfully" Dec 13 02:11:18.298839 containerd[1477]: time="2024-12-13T02:11:18.298797840Z" level=info msg="RemovePodSandbox for \"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\"" Dec 13 02:11:18.299000 containerd[1477]: time="2024-12-13T02:11:18.298944761Z" level=info msg="Forcibly stopping sandbox \"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\"" Dec 13 02:11:18.383108 containerd[1477]: 2024-12-13 02:11:18.344 [WARNING][5434] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0", GenerateName:"calico-apiserver-5b74fb4656-", Namespace:"calico-apiserver", SelfLink:"", UID:"fe0d1aa3-9d9e-446d-9089-c44a1868215f", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b74fb4656", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"021b8bd1ff7fa7a22730277647fe88059bc584232256ed8ef3a5b5ef1f74f758", Pod:"calico-apiserver-5b74fb4656-ldnns", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf8144c80f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:18.383108 containerd[1477]: 2024-12-13 02:11:18.346 [INFO][5434] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Dec 13 02:11:18.383108 containerd[1477]: 2024-12-13 02:11:18.346 [INFO][5434] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" iface="eth0" netns="" Dec 13 02:11:18.383108 containerd[1477]: 2024-12-13 02:11:18.346 [INFO][5434] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Dec 13 02:11:18.383108 containerd[1477]: 2024-12-13 02:11:18.346 [INFO][5434] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Dec 13 02:11:18.383108 containerd[1477]: 2024-12-13 02:11:18.366 [INFO][5440] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" HandleID="k8s-pod-network.d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0" Dec 13 02:11:18.383108 containerd[1477]: 2024-12-13 02:11:18.366 [INFO][5440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:18.383108 containerd[1477]: 2024-12-13 02:11:18.366 [INFO][5440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:11:18.383108 containerd[1477]: 2024-12-13 02:11:18.378 [WARNING][5440] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" HandleID="k8s-pod-network.d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0" Dec 13 02:11:18.383108 containerd[1477]: 2024-12-13 02:11:18.378 [INFO][5440] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" HandleID="k8s-pod-network.d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ldnns-eth0" Dec 13 02:11:18.383108 containerd[1477]: 2024-12-13 02:11:18.379 [INFO][5440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:11:18.383108 containerd[1477]: 2024-12-13 02:11:18.381 [INFO][5434] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f" Dec 13 02:11:18.383108 containerd[1477]: time="2024-12-13T02:11:18.382715395Z" level=info msg="TearDown network for sandbox \"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\" successfully" Dec 13 02:11:18.392398 containerd[1477]: time="2024-12-13T02:11:18.392353943Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:11:18.392690 containerd[1477]: time="2024-12-13T02:11:18.392582105Z" level=info msg="RemovePodSandbox \"d92c4b2e6c7bcb3419e1fef313597a389ed1573b1dbcbd932a072a655ef7994f\" returns successfully" Dec 13 02:11:18.393143 containerd[1477]: time="2024-12-13T02:11:18.393118388Z" level=info msg="StopPodSandbox for \"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\"" Dec 13 02:11:18.472773 containerd[1477]: 2024-12-13 02:11:18.434 [WARNING][5459] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"be50f835-bcf1-4511-9cef-2190b068c860", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188", Pod:"coredns-7db6d8ff4d-7qwdw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia4ee023438c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:18.472773 containerd[1477]: 2024-12-13 02:11:18.434 [INFO][5459] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Dec 13 02:11:18.472773 containerd[1477]: 2024-12-13 02:11:18.435 [INFO][5459] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" iface="eth0" netns="" Dec 13 02:11:18.472773 containerd[1477]: 2024-12-13 02:11:18.435 [INFO][5459] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Dec 13 02:11:18.472773 containerd[1477]: 2024-12-13 02:11:18.435 [INFO][5459] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Dec 13 02:11:18.472773 containerd[1477]: 2024-12-13 02:11:18.455 [INFO][5465] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" HandleID="k8s-pod-network.05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0" Dec 13 02:11:18.472773 containerd[1477]: 2024-12-13 02:11:18.455 [INFO][5465] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:18.472773 containerd[1477]: 2024-12-13 02:11:18.455 [INFO][5465] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:11:18.472773 containerd[1477]: 2024-12-13 02:11:18.467 [WARNING][5465] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" HandleID="k8s-pod-network.05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0" Dec 13 02:11:18.472773 containerd[1477]: 2024-12-13 02:11:18.467 [INFO][5465] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" HandleID="k8s-pod-network.05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0" Dec 13 02:11:18.472773 containerd[1477]: 2024-12-13 02:11:18.469 [INFO][5465] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:11:18.472773 containerd[1477]: 2024-12-13 02:11:18.471 [INFO][5459] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Dec 13 02:11:18.473970 containerd[1477]: time="2024-12-13T02:11:18.472739392Z" level=info msg="TearDown network for sandbox \"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\" successfully" Dec 13 02:11:18.473970 containerd[1477]: time="2024-12-13T02:11:18.473959961Z" level=info msg="StopPodSandbox for \"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\" returns successfully" Dec 13 02:11:18.474490 containerd[1477]: time="2024-12-13T02:11:18.474468005Z" level=info msg="RemovePodSandbox for \"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\"" Dec 13 02:11:18.474553 containerd[1477]: time="2024-12-13T02:11:18.474501325Z" level=info msg="Forcibly stopping sandbox \"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\"" Dec 13 02:11:18.554563 containerd[1477]: 2024-12-13 02:11:18.511 [WARNING][5483] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"be50f835-bcf1-4511-9cef-2190b068c860", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"5fb362a637c01e4b1f0567ba80ed1d70119c68896f102b39c551815884b68188", Pod:"coredns-7db6d8ff4d-7qwdw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia4ee023438c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:18.554563 containerd[1477]: 2024-12-13 02:11:18.512 [INFO][5483] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Dec 13 02:11:18.554563 containerd[1477]: 2024-12-13 02:11:18.512 [INFO][5483] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" iface="eth0" netns="" Dec 13 02:11:18.554563 containerd[1477]: 2024-12-13 02:11:18.512 [INFO][5483] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Dec 13 02:11:18.554563 containerd[1477]: 2024-12-13 02:11:18.512 [INFO][5483] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Dec 13 02:11:18.554563 containerd[1477]: 2024-12-13 02:11:18.534 [INFO][5489] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" HandleID="k8s-pod-network.05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0" Dec 13 02:11:18.554563 containerd[1477]: 2024-12-13 02:11:18.534 [INFO][5489] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:18.554563 containerd[1477]: 2024-12-13 02:11:18.534 [INFO][5489] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 02:11:18.554563 containerd[1477]: 2024-12-13 02:11:18.549 [WARNING][5489] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" HandleID="k8s-pod-network.05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0" Dec 13 02:11:18.554563 containerd[1477]: 2024-12-13 02:11:18.550 [INFO][5489] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" HandleID="k8s-pod-network.05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Workload="ci--4081--2--1--6--b597ddf835-k8s-coredns--7db6d8ff4d--7qwdw-eth0" Dec 13 02:11:18.554563 containerd[1477]: 2024-12-13 02:11:18.552 [INFO][5489] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:11:18.554563 containerd[1477]: 2024-12-13 02:11:18.553 [INFO][5483] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8" Dec 13 02:11:18.555134 containerd[1477]: time="2024-12-13T02:11:18.554622733Z" level=info msg="TearDown network for sandbox \"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\" successfully" Dec 13 02:11:18.558555 containerd[1477]: time="2024-12-13T02:11:18.558512400Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:11:18.558638 containerd[1477]: time="2024-12-13T02:11:18.558587761Z" level=info msg="RemovePodSandbox \"05aad66a0c750efb19596f857c20a68e86cdfa1486a2b6cf10e0b7f8df45c8b8\" returns successfully" Dec 13 02:11:18.559116 containerd[1477]: time="2024-12-13T02:11:18.559090284Z" level=info msg="StopPodSandbox for \"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\"" Dec 13 02:11:18.637271 containerd[1477]: 2024-12-13 02:11:18.601 [WARNING][5509] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0", GenerateName:"calico-apiserver-5b74fb4656-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b4f4cc6-054d-4a2d-ad48-c71dc0a6d279", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b74fb4656", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607", Pod:"calico-apiserver-5b74fb4656-ggqmp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidddb02f40d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:18.637271 containerd[1477]: 2024-12-13 02:11:18.601 [INFO][5509] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Dec 13 02:11:18.637271 containerd[1477]: 2024-12-13 02:11:18.601 [INFO][5509] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" iface="eth0" netns="" Dec 13 02:11:18.637271 containerd[1477]: 2024-12-13 02:11:18.601 [INFO][5509] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Dec 13 02:11:18.637271 containerd[1477]: 2024-12-13 02:11:18.601 [INFO][5509] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Dec 13 02:11:18.637271 containerd[1477]: 2024-12-13 02:11:18.621 [INFO][5515] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" HandleID="k8s-pod-network.075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0" Dec 13 02:11:18.637271 containerd[1477]: 2024-12-13 02:11:18.621 [INFO][5515] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:18.637271 containerd[1477]: 2024-12-13 02:11:18.621 [INFO][5515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:11:18.637271 containerd[1477]: 2024-12-13 02:11:18.631 [WARNING][5515] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" HandleID="k8s-pod-network.075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0" Dec 13 02:11:18.637271 containerd[1477]: 2024-12-13 02:11:18.632 [INFO][5515] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" HandleID="k8s-pod-network.075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0" Dec 13 02:11:18.637271 containerd[1477]: 2024-12-13 02:11:18.634 [INFO][5515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:11:18.637271 containerd[1477]: 2024-12-13 02:11:18.635 [INFO][5509] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Dec 13 02:11:18.637793 containerd[1477]: time="2024-12-13T02:11:18.637330798Z" level=info msg="TearDown network for sandbox \"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\" successfully" Dec 13 02:11:18.637793 containerd[1477]: time="2024-12-13T02:11:18.637358479Z" level=info msg="StopPodSandbox for \"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\" returns successfully" Dec 13 02:11:18.637889 containerd[1477]: time="2024-12-13T02:11:18.637847962Z" level=info msg="RemovePodSandbox for \"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\"" Dec 13 02:11:18.637889 containerd[1477]: time="2024-12-13T02:11:18.637885442Z" level=info msg="Forcibly stopping sandbox \"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\"" Dec 13 02:11:18.754256 containerd[1477]: 2024-12-13 02:11:18.688 [WARNING][5534] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0", GenerateName:"calico-apiserver-5b74fb4656-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b4f4cc6-054d-4a2d-ad48-c71dc0a6d279", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 2, 10, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b74fb4656", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-6-b597ddf835", ContainerID:"300f6ae1cfa325df37b4bf8349001b1e2a99563f81d181481eb39f52cea1b607", Pod:"calico-apiserver-5b74fb4656-ggqmp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidddb02f40d6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 02:11:18.754256 containerd[1477]: 2024-12-13 02:11:18.688 [INFO][5534] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Dec 13 02:11:18.754256 containerd[1477]: 2024-12-13 02:11:18.688 [INFO][5534] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" iface="eth0" netns="" Dec 13 02:11:18.754256 containerd[1477]: 2024-12-13 02:11:18.688 [INFO][5534] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Dec 13 02:11:18.754256 containerd[1477]: 2024-12-13 02:11:18.688 [INFO][5534] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Dec 13 02:11:18.754256 containerd[1477]: 2024-12-13 02:11:18.725 [INFO][5541] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" HandleID="k8s-pod-network.075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0" Dec 13 02:11:18.754256 containerd[1477]: 2024-12-13 02:11:18.726 [INFO][5541] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 02:11:18.754256 containerd[1477]: 2024-12-13 02:11:18.726 [INFO][5541] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 02:11:18.754256 containerd[1477]: 2024-12-13 02:11:18.741 [WARNING][5541] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" HandleID="k8s-pod-network.075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0" Dec 13 02:11:18.754256 containerd[1477]: 2024-12-13 02:11:18.741 [INFO][5541] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" HandleID="k8s-pod-network.075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Workload="ci--4081--2--1--6--b597ddf835-k8s-calico--apiserver--5b74fb4656--ggqmp-eth0" Dec 13 02:11:18.754256 containerd[1477]: 2024-12-13 02:11:18.748 [INFO][5541] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 02:11:18.754256 containerd[1477]: 2024-12-13 02:11:18.751 [INFO][5534] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731" Dec 13 02:11:18.754256 containerd[1477]: time="2024-12-13T02:11:18.754222347Z" level=info msg="TearDown network for sandbox \"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\" successfully" Dec 13 02:11:18.761082 containerd[1477]: time="2024-12-13T02:11:18.759105541Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 02:11:18.761346 containerd[1477]: time="2024-12-13T02:11:18.761314837Z" level=info msg="RemovePodSandbox \"075eb03474d37134023a44077c11de391cdc03f86d48d9aa5be8eb07bd249731\" returns successfully" Dec 13 02:11:33.713841 systemd[1]: run-containerd-runc-k8s.io-0d124dee72fe87e9f22e559e3c2855dfc1e230e43f0d9e5f84717eeb96c53805-runc.sQZQbQ.mount: Deactivated successfully. Dec 13 02:12:52.222521 systemd[1]: run-containerd-runc-k8s.io-0d124dee72fe87e9f22e559e3c2855dfc1e230e43f0d9e5f84717eeb96c53805-runc.jurENz.mount: Deactivated successfully. Dec 13 02:15:03.102272 systemd[1]: run-containerd-runc-k8s.io-7189fd26dbb57656f7859b2b1d8ded98f20ec0e6477304b1401dbcd5490f1532-runc.FTIJhY.mount: Deactivated successfully. Dec 13 02:15:09.493380 systemd[1]: Started sshd@8-78.47.95.53:22-147.75.109.163:37426.service - OpenSSH per-connection server daemon (147.75.109.163:37426). Dec 13 02:15:10.479866 sshd[6059]: Accepted publickey for core from 147.75.109.163 port 37426 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:10.482497 sshd[6059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:10.488708 systemd-logind[1454]: New session 8 of user core. Dec 13 02:15:10.496448 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 02:15:11.261928 sshd[6059]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:11.267464 systemd[1]: sshd@8-78.47.95.53:22-147.75.109.163:37426.service: Deactivated successfully. Dec 13 02:15:11.271731 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 02:15:11.273033 systemd-logind[1454]: Session 8 logged out. Waiting for processes to exit. Dec 13 02:15:11.274414 systemd-logind[1454]: Removed session 8. Dec 13 02:15:16.444877 systemd[1]: Started sshd@9-78.47.95.53:22-147.75.109.163:50492.service - OpenSSH per-connection server daemon (147.75.109.163:50492). 
Dec 13 02:15:17.424866 sshd[6078]: Accepted publickey for core from 147.75.109.163 port 50492 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:17.426968 sshd[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:17.432584 systemd-logind[1454]: New session 9 of user core. Dec 13 02:15:17.439236 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 02:15:18.181737 sshd[6078]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:18.187820 systemd[1]: sshd@9-78.47.95.53:22-147.75.109.163:50492.service: Deactivated successfully. Dec 13 02:15:18.192465 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 02:15:18.195198 systemd-logind[1454]: Session 9 logged out. Waiting for processes to exit. Dec 13 02:15:18.197708 systemd-logind[1454]: Removed session 9. Dec 13 02:15:23.357556 systemd[1]: Started sshd@10-78.47.95.53:22-147.75.109.163:50508.service - OpenSSH per-connection server daemon (147.75.109.163:50508). Dec 13 02:15:24.330798 sshd[6112]: Accepted publickey for core from 147.75.109.163 port 50508 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:24.333248 sshd[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:24.339904 systemd-logind[1454]: New session 10 of user core. Dec 13 02:15:24.345615 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 02:15:25.083416 sshd[6112]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:25.090017 systemd[1]: sshd@10-78.47.95.53:22-147.75.109.163:50508.service: Deactivated successfully. Dec 13 02:15:25.094552 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 02:15:25.096009 systemd-logind[1454]: Session 10 logged out. Waiting for processes to exit. Dec 13 02:15:25.098172 systemd-logind[1454]: Removed session 10. Dec 13 02:15:25.260421 systemd[1]: Started sshd@11-78.47.95.53:22-147.75.109.163:50516.service - OpenSSH per-connection server daemon (147.75.109.163:50516). Dec 13 02:15:26.253323 sshd[6126]: Accepted publickey for core from 147.75.109.163 port 50516 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:26.254967 sshd[6126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:26.261821 systemd-logind[1454]: New session 11 of user core. Dec 13 02:15:26.270406 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 02:15:27.049338 sshd[6126]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:27.055600 systemd[1]: sshd@11-78.47.95.53:22-147.75.109.163:50516.service: Deactivated successfully. Dec 13 02:15:27.060889 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 02:15:27.063788 systemd-logind[1454]: Session 11 logged out. Waiting for processes to exit. Dec 13 02:15:27.065242 systemd-logind[1454]: Removed session 11. Dec 13 02:15:27.223216 systemd[1]: Started sshd@12-78.47.95.53:22-147.75.109.163:35868.service - OpenSSH per-connection server daemon (147.75.109.163:35868). Dec 13 02:15:28.221501 sshd[6136]: Accepted publickey for core from 147.75.109.163 port 35868 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:28.224444 sshd[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:28.233574 systemd-logind[1454]: New session 12 of user core. Dec 13 02:15:28.241441 systemd[1]: Started session-12.scope - Session 12 of User core. 
Dec 13 02:15:28.983243 sshd[6136]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:28.987430 systemd[1]: sshd@12-78.47.95.53:22-147.75.109.163:35868.service: Deactivated successfully. Dec 13 02:15:28.991543 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 02:15:28.993445 systemd-logind[1454]: Session 12 logged out. Waiting for processes to exit. Dec 13 02:15:28.995758 systemd-logind[1454]: Removed session 12. Dec 13 02:15:33.712007 systemd[1]: run-containerd-runc-k8s.io-0d124dee72fe87e9f22e559e3c2855dfc1e230e43f0d9e5f84717eeb96c53805-runc.hy75Yr.mount: Deactivated successfully. Dec 13 02:15:34.161617 systemd[1]: Started sshd@13-78.47.95.53:22-147.75.109.163:35882.service - OpenSSH per-connection server daemon (147.75.109.163:35882). Dec 13 02:15:35.136756 sshd[6193]: Accepted publickey for core from 147.75.109.163 port 35882 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:35.139013 sshd[6193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:35.146610 systemd-logind[1454]: New session 13 of user core. Dec 13 02:15:35.150293 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 02:15:35.899499 sshd[6193]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:35.906319 systemd[1]: sshd@13-78.47.95.53:22-147.75.109.163:35882.service: Deactivated successfully. Dec 13 02:15:35.912249 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 02:15:35.913359 systemd-logind[1454]: Session 13 logged out. Waiting for processes to exit. Dec 13 02:15:35.914729 systemd-logind[1454]: Removed session 13. Dec 13 02:15:36.082389 systemd[1]: Started sshd@14-78.47.95.53:22-147.75.109.163:35890.service - OpenSSH per-connection server daemon (147.75.109.163:35890). Dec 13 02:15:37.062412 sshd[6206]: Accepted publickey for core from 147.75.109.163 port 35890 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:37.065644 sshd[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:37.072562 systemd-logind[1454]: New session 14 of user core. Dec 13 02:15:37.079372 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 02:15:38.033576 sshd[6206]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:38.040215 systemd[1]: sshd@14-78.47.95.53:22-147.75.109.163:35890.service: Deactivated successfully. Dec 13 02:15:38.043447 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 02:15:38.045301 systemd-logind[1454]: Session 14 logged out. Waiting for processes to exit. Dec 13 02:15:38.047665 systemd-logind[1454]: Removed session 14. Dec 13 02:15:38.213402 systemd[1]: Started sshd@15-78.47.95.53:22-147.75.109.163:50472.service - OpenSSH per-connection server daemon (147.75.109.163:50472). Dec 13 02:15:39.207923 sshd[6217]: Accepted publickey for core from 147.75.109.163 port 50472 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:39.210014 sshd[6217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:39.214771 systemd-logind[1454]: New session 15 of user core. Dec 13 02:15:39.226286 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 02:15:41.907546 sshd[6217]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:41.913973 systemd[1]: sshd@15-78.47.95.53:22-147.75.109.163:50472.service: Deactivated successfully. Dec 13 02:15:41.917329 systemd[1]: session-15.scope: Deactivated successfully. 
Dec 13 02:15:41.918341 systemd-logind[1454]: Session 15 logged out. Waiting for processes to exit. Dec 13 02:15:41.920103 systemd-logind[1454]: Removed session 15. Dec 13 02:15:42.079419 systemd[1]: Started sshd@16-78.47.95.53:22-147.75.109.163:50478.service - OpenSSH per-connection server daemon (147.75.109.163:50478). Dec 13 02:15:43.066541 sshd[6235]: Accepted publickey for core from 147.75.109.163 port 50478 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:43.068539 sshd[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:43.077990 systemd-logind[1454]: New session 16 of user core. Dec 13 02:15:43.085253 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 02:15:43.954435 sshd[6235]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:43.960182 systemd[1]: sshd@16-78.47.95.53:22-147.75.109.163:50478.service: Deactivated successfully. Dec 13 02:15:43.963803 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 02:15:43.964867 systemd-logind[1454]: Session 16 logged out. Waiting for processes to exit. Dec 13 02:15:43.966266 systemd-logind[1454]: Removed session 16. Dec 13 02:15:44.133563 systemd[1]: Started sshd@17-78.47.95.53:22-147.75.109.163:50484.service - OpenSSH per-connection server daemon (147.75.109.163:50484). Dec 13 02:15:45.112304 sshd[6247]: Accepted publickey for core from 147.75.109.163 port 50484 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:45.114564 sshd[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:45.121680 systemd-logind[1454]: New session 17 of user core. Dec 13 02:15:45.127319 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 02:15:45.867600 sshd[6247]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:45.873453 systemd[1]: sshd@17-78.47.95.53:22-147.75.109.163:50484.service: Deactivated successfully. Dec 13 02:15:45.876350 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 02:15:45.877350 systemd-logind[1454]: Session 17 logged out. Waiting for processes to exit. Dec 13 02:15:45.879012 systemd-logind[1454]: Removed session 17. Dec 13 02:15:51.046407 systemd[1]: Started sshd@18-78.47.95.53:22-147.75.109.163:58708.service - OpenSSH per-connection server daemon (147.75.109.163:58708). Dec 13 02:15:52.027025 sshd[6263]: Accepted publickey for core from 147.75.109.163 port 58708 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:52.029200 sshd[6263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:52.035556 systemd-logind[1454]: New session 18 of user core. Dec 13 02:15:52.043467 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 02:15:52.776683 sshd[6263]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:52.781453 systemd[1]: sshd@18-78.47.95.53:22-147.75.109.163:58708.service: Deactivated successfully. Dec 13 02:15:52.784821 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 02:15:52.787514 systemd-logind[1454]: Session 18 logged out. Waiting for processes to exit. Dec 13 02:15:52.789154 systemd-logind[1454]: Removed session 18. Dec 13 02:15:57.950473 systemd[1]: Started sshd@19-78.47.95.53:22-147.75.109.163:33970.service - OpenSSH per-connection server daemon (147.75.109.163:33970). 
Dec 13 02:15:58.943991 sshd[6307]: Accepted publickey for core from 147.75.109.163 port 33970 ssh2: RSA SHA256:hso9grF+8nrdZMT2QLkyhGQJvfnPNh+aDCqCZE8JRV8 Dec 13 02:15:58.946279 sshd[6307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 02:15:58.952142 systemd-logind[1454]: New session 19 of user core. Dec 13 02:15:58.961310 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 02:15:59.717305 sshd[6307]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:59.721120 systemd[1]: sshd@19-78.47.95.53:22-147.75.109.163:33970.service: Deactivated successfully. Dec 13 02:15:59.724103 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 02:15:59.724985 systemd-logind[1454]: Session 19 logged out. Waiting for processes to exit. Dec 13 02:15:59.727047 systemd-logind[1454]: Removed session 19. Dec 13 02:16:15.180583 systemd[1]: cri-containerd-eaf1a846f817c32c25c6274339c57ba2c1b8434ed9ec82373f4ea06278f05236.scope: Deactivated successfully. Dec 13 02:16:15.181645 systemd[1]: cri-containerd-eaf1a846f817c32c25c6274339c57ba2c1b8434ed9ec82373f4ea06278f05236.scope: Consumed 5.556s CPU time. Dec 13 02:16:15.207066 containerd[1477]: time="2024-12-13T02:16:15.206844254Z" level=info msg="shim disconnected" id=eaf1a846f817c32c25c6274339c57ba2c1b8434ed9ec82373f4ea06278f05236 namespace=k8s.io Dec 13 02:16:15.207066 containerd[1477]: time="2024-12-13T02:16:15.206932136Z" level=warning msg="cleaning up after shim disconnected" id=eaf1a846f817c32c25c6274339c57ba2c1b8434ed9ec82373f4ea06278f05236 namespace=k8s.io Dec 13 02:16:15.207066 containerd[1477]: time="2024-12-13T02:16:15.206941136Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:16:15.212226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eaf1a846f817c32c25c6274339c57ba2c1b8434ed9ec82373f4ea06278f05236-rootfs.mount: Deactivated successfully. Dec 13 02:16:15.467904 systemd[1]: cri-containerd-d9ea98611d191f70f7a7e9086ce02a1334c83fb4a9ce965dd0139619c3e35d11.scope: Deactivated successfully. Dec 13 02:16:15.468464 systemd[1]: cri-containerd-d9ea98611d191f70f7a7e9086ce02a1334c83fb4a9ce965dd0139619c3e35d11.scope: Consumed 6.527s CPU time, 22.3M memory peak, 0B memory swap peak. Dec 13 02:16:15.500572 containerd[1477]: time="2024-12-13T02:16:15.500316508Z" level=info msg="shim disconnected" id=d9ea98611d191f70f7a7e9086ce02a1334c83fb4a9ce965dd0139619c3e35d11 namespace=k8s.io Dec 13 02:16:15.500572 containerd[1477]: time="2024-12-13T02:16:15.500379430Z" level=warning msg="cleaning up after shim disconnected" id=d9ea98611d191f70f7a7e9086ce02a1334c83fb4a9ce965dd0139619c3e35d11 namespace=k8s.io Dec 13 02:16:15.500572 containerd[1477]: time="2024-12-13T02:16:15.500388230Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:16:15.502075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9ea98611d191f70f7a7e9086ce02a1334c83fb4a9ce965dd0139619c3e35d11-rootfs.mount: Deactivated successfully. 
Dec 13 02:16:15.614887 kubelet[2776]: E1213 02:16:15.614824 2776 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:41718->10.0.0.2:2379: read: connection timed out" Dec 13 02:16:15.791693 kubelet[2776]: I1213 02:16:15.789504 2776 scope.go:117] "RemoveContainer" containerID="d9ea98611d191f70f7a7e9086ce02a1334c83fb4a9ce965dd0139619c3e35d11" Dec 13 02:16:15.794433 kubelet[2776]: I1213 02:16:15.794395 2776 scope.go:117] "RemoveContainer" containerID="eaf1a846f817c32c25c6274339c57ba2c1b8434ed9ec82373f4ea06278f05236" Dec 13 02:16:15.800625 containerd[1477]: time="2024-12-13T02:16:15.800565054Z" level=info msg="CreateContainer within sandbox \"95dd8bb7471e66ec51f43a91edd8b5ea05256fad5a0e5b503796ec49e9b870b8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 13 02:16:15.804031 containerd[1477]: time="2024-12-13T02:16:15.803967360Z" level=info msg="CreateContainer within sandbox \"9937092571709b39883fa5729e03c5b72e7ea2eca75d5a52c53d0017497f9281\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 13 02:16:15.821021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount49986357.mount: Deactivated successfully. Dec 13 02:16:15.822007 containerd[1477]: time="2024-12-13T02:16:15.821876308Z" level=info msg="CreateContainer within sandbox \"9937092571709b39883fa5729e03c5b72e7ea2eca75d5a52c53d0017497f9281\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"988fdd96fde62808635a0a06c6a23f7ea5992c344dc5ba2b3fb02df70e67acc4\"" Dec 13 02:16:15.823666 containerd[1477]: time="2024-12-13T02:16:15.822848167Z" level=info msg="StartContainer for \"988fdd96fde62808635a0a06c6a23f7ea5992c344dc5ba2b3fb02df70e67acc4\"" Dec 13 02:16:15.826072 containerd[1477]: time="2024-12-13T02:16:15.825674221Z" level=info msg="CreateContainer within sandbox \"95dd8bb7471e66ec51f43a91edd8b5ea05256fad5a0e5b503796ec49e9b870b8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"472594b471573e4763327838fa732821520677aefc98da6ee30d1f05d89eb24c\"" Dec 13 02:16:15.826559 containerd[1477]: time="2024-12-13T02:16:15.826181351Z" level=info msg="StartContainer for \"472594b471573e4763327838fa732821520677aefc98da6ee30d1f05d89eb24c\"" Dec 13 02:16:15.861300 systemd[1]: Started cri-containerd-472594b471573e4763327838fa732821520677aefc98da6ee30d1f05d89eb24c.scope - libcontainer container 472594b471573e4763327838fa732821520677aefc98da6ee30d1f05d89eb24c. Dec 13 02:16:15.870261 systemd[1]: Started cri-containerd-988fdd96fde62808635a0a06c6a23f7ea5992c344dc5ba2b3fb02df70e67acc4.scope - libcontainer container 988fdd96fde62808635a0a06c6a23f7ea5992c344dc5ba2b3fb02df70e67acc4. Dec 13 02:16:15.902804 containerd[1477]: time="2024-12-13T02:16:15.902747517Z" level=info msg="StartContainer for \"472594b471573e4763327838fa732821520677aefc98da6ee30d1f05d89eb24c\" returns successfully" Dec 13 02:16:15.925086 containerd[1477]: time="2024-12-13T02:16:15.924700783Z" level=info msg="StartContainer for \"988fdd96fde62808635a0a06c6a23f7ea5992c344dc5ba2b3fb02df70e67acc4\" returns successfully" Dec 13 02:16:20.484584 systemd[1]: cri-containerd-988fdd96fde62808635a0a06c6a23f7ea5992c344dc5ba2b3fb02df70e67acc4.scope: Deactivated successfully. Dec 13 02:16:20.509789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-988fdd96fde62808635a0a06c6a23f7ea5992c344dc5ba2b3fb02df70e67acc4-rootfs.mount: Deactivated successfully. 
Dec 13 02:16:20.517409 containerd[1477]: time="2024-12-13T02:16:20.517309026Z" level=info msg="shim disconnected" id=988fdd96fde62808635a0a06c6a23f7ea5992c344dc5ba2b3fb02df70e67acc4 namespace=k8s.io Dec 13 02:16:20.518091 containerd[1477]: time="2024-12-13T02:16:20.517569671Z" level=warning msg="cleaning up after shim disconnected" id=988fdd96fde62808635a0a06c6a23f7ea5992c344dc5ba2b3fb02df70e67acc4 namespace=k8s.io Dec 13 02:16:20.518091 containerd[1477]: time="2024-12-13T02:16:20.517587031Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:16:20.543049 kubelet[2776]: E1213 02:16:20.542709 2776 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:41536->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-2-1-6-b597ddf835.18109ae635ef9d99 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-2-1-6-b597ddf835,UID:e7acecebd0e41400069a473591a1dd87,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-6-b597ddf835,},FirstTimestamp:2024-12-13 02:16:10.075692441 +0000 UTC m=+352.554734059,LastTimestamp:2024-12-13 02:16:10.075692441 +0000 UTC m=+352.554734059,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-6-b597ddf835,}" Dec 13 02:16:20.818219 kubelet[2776]: I1213 02:16:20.818046 2776 scope.go:117] "RemoveContainer" containerID="eaf1a846f817c32c25c6274339c57ba2c1b8434ed9ec82373f4ea06278f05236" Dec 13 02:16:20.818505 kubelet[2776]: I1213 02:16:20.818486 2776 scope.go:117] "RemoveContainer" containerID="988fdd96fde62808635a0a06c6a23f7ea5992c344dc5ba2b3fb02df70e67acc4" Dec 13 02:16:20.818804 kubelet[2776]: E1213 02:16:20.818774 2776 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7bc55997bb-hdvvm_tigera-operator(03671b8d-753c-4450-a3db-86e39179fd19)\"" pod="tigera-operator/tigera-operator-7bc55997bb-hdvvm" podUID="03671b8d-753c-4450-a3db-86e39179fd19" Dec 13 02:16:20.820709 containerd[1477]: time="2024-12-13T02:16:20.820667775Z" level=info msg="RemoveContainer for \"eaf1a846f817c32c25c6274339c57ba2c1b8434ed9ec82373f4ea06278f05236\"" Dec 13 02:16:20.825338 containerd[1477]: time="2024-12-13T02:16:20.825209341Z" level=info msg="RemoveContainer for \"eaf1a846f817c32c25c6274339c57ba2c1b8434ed9ec82373f4ea06278f05236\" returns successfully" Dec 13 02:16:21.204454 systemd[1]: cri-containerd-2c6c5b44a653175e5500b9e9412166705f63991366a0d331575cbca5b0aa208a.scope: Deactivated successfully. Dec 13 02:16:21.205557 systemd[1]: cri-containerd-2c6c5b44a653175e5500b9e9412166705f63991366a0d331575cbca5b0aa208a.scope: Consumed 2.794s CPU time, 16.8M memory peak, 0B memory swap peak. 
Dec 13 02:16:21.229845 containerd[1477]: time="2024-12-13T02:16:21.229642904Z" level=info msg="shim disconnected" id=2c6c5b44a653175e5500b9e9412166705f63991366a0d331575cbca5b0aa208a namespace=k8s.io Dec 13 02:16:21.229845 containerd[1477]: time="2024-12-13T02:16:21.229696905Z" level=warning msg="cleaning up after shim disconnected" id=2c6c5b44a653175e5500b9e9412166705f63991366a0d331575cbca5b0aa208a namespace=k8s.io Dec 13 02:16:21.229845 containerd[1477]: time="2024-12-13T02:16:21.229706346Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 02:16:21.232354 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c6c5b44a653175e5500b9e9412166705f63991366a0d331575cbca5b0aa208a-rootfs.mount: Deactivated successfully. Dec 13 02:16:21.823127 kubelet[2776]: I1213 02:16:21.823084 2776 scope.go:117] "RemoveContainer" containerID="2c6c5b44a653175e5500b9e9412166705f63991366a0d331575cbca5b0aa208a" Dec 13 02:16:21.827084 containerd[1477]: time="2024-12-13T02:16:21.826148595Z" level=info msg="CreateContainer within sandbox \"3badae0f70c8dc8150fdd2355dae742f32ea33d20abfac97f9e5fb80a9079f5d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 13 02:16:21.850497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1869197650.mount: Deactivated successfully. Dec 13 02:16:21.853535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3404429225.mount: Deactivated successfully. Dec 13 02:16:21.858631 containerd[1477]: time="2024-12-13T02:16:21.858568047Z" level=info msg="CreateContainer within sandbox \"3badae0f70c8dc8150fdd2355dae742f32ea33d20abfac97f9e5fb80a9079f5d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"796daabe4c4e735c1d69c9da73a0757317ab92467ab59e0634cb9b63bcdd4d45\"" Dec 13 02:16:21.860721 containerd[1477]: time="2024-12-13T02:16:21.859271540Z" level=info msg="StartContainer for \"796daabe4c4e735c1d69c9da73a0757317ab92467ab59e0634cb9b63bcdd4d45\"" Dec 13 02:16:21.891284 systemd[1]: Started cri-containerd-796daabe4c4e735c1d69c9da73a0757317ab92467ab59e0634cb9b63bcdd4d45.scope - libcontainer container 796daabe4c4e735c1d69c9da73a0757317ab92467ab59e0634cb9b63bcdd4d45. Dec 13 02:16:21.923253 containerd[1477]: time="2024-12-13T02:16:21.923202026Z" level=info msg="StartContainer for \"796daabe4c4e735c1d69c9da73a0757317ab92467ab59e0634cb9b63bcdd4d45\" returns successfully"