Jan 13 20:31:40.895081 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 20:31:40.895106 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:57:23 -00 2025
Jan 13 20:31:40.895116 kernel: KASLR enabled
Jan 13 20:31:40.895122 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:31:40.895127 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x133d4d698 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x132303d98
Jan 13 20:31:40.895133 kernel: random: crng init done
Jan 13 20:31:40.895140 kernel: secureboot: Secure boot disabled
Jan 13 20:31:40.895146 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:31:40.895152 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS )
Jan 13 20:31:40.895158 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 13 20:31:40.895166 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:31:40.895172 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:31:40.895178 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:31:40.895184 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:31:40.895191 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:31:40.895199 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:31:40.895206 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:31:40.895212 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:31:40.895218 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:31:40.895224 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 13 20:31:40.895231 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 13 20:31:40.895237 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:31:40.895243 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 13 20:31:40.895250 kernel: NUMA: NODE_DATA [mem 0x13981e800-0x139823fff]
Jan 13 20:31:40.895269 kernel: Zone ranges:
Jan 13 20:31:40.895276 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 13 20:31:40.895285 kernel: DMA32 empty
Jan 13 20:31:40.895291 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 13 20:31:40.895297 kernel: Movable zone start for each node
Jan 13 20:31:40.895303 kernel: Early memory node ranges
Jan 13 20:31:40.895310 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff]
Jan 13 20:31:40.895316 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff]
Jan 13 20:31:40.895322 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff]
Jan 13 20:31:40.895328 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff]
Jan 13 20:31:40.895334 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff]
Jan 13 20:31:40.895340 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 13 20:31:40.895347 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 13 20:31:40.895355 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:31:40.895361 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 20:31:40.895384 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:31:40.895395 kernel: psci: Trusted OS migration not required
Jan 13 20:31:40.895402 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:31:40.895409 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 20:31:40.895417 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:31:40.895424 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:31:40.895431 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 13 20:31:40.895437 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:31:40.895452 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:31:40.895459 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 20:31:40.895466 kernel: CPU features: detected: Spectre-v4
Jan 13 20:31:40.895472 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:31:40.895479 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 20:31:40.895486 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 20:31:40.895493 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 20:31:40.895503 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 20:31:40.895509 kernel: alternatives: applying boot alternatives
Jan 13 20:31:40.895517 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:31:40.895524 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:31:40.895531 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:31:40.895538 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:31:40.895544 kernel: Fallback order for Node 0: 0
Jan 13 20:31:40.895551 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Jan 13 20:31:40.895558 kernel: Policy zone: Normal
Jan 13 20:31:40.895565 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:31:40.895571 kernel: software IO TLB: area num 2.
Jan 13 20:31:40.895580 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Jan 13 20:31:40.895587 kernel: Memory: 3881332K/4096000K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 214668K reserved, 0K cma-reserved)
Jan 13 20:31:40.895594 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 20:31:40.895600 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:31:40.895608 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:31:40.895615 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 20:31:40.895622 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:31:40.895629 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:31:40.895636 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:31:40.895642 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 20:31:40.895649 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:31:40.895657 kernel: GICv3: 256 SPIs implemented
Jan 13 20:31:40.895664 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:31:40.895670 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:31:40.895677 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 20:31:40.895684 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 20:31:40.895691 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 20:31:40.895697 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:31:40.895704 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:31:40.895711 kernel: GICv3: using LPI property table @0x00000001000e0000
Jan 13 20:31:40.895718 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Jan 13 20:31:40.895725 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:31:40.895733 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:31:40.895740 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 20:31:40.895747 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 20:31:40.895754 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 20:31:40.895760 kernel: Console: colour dummy device 80x25
Jan 13 20:31:40.895768 kernel: ACPI: Core revision 20230628
Jan 13 20:31:40.895775 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 20:31:40.895782 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:31:40.895789 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:31:40.895795 kernel: landlock: Up and running.
Jan 13 20:31:40.895804 kernel: SELinux: Initializing.
Jan 13 20:31:40.895811 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:31:40.895818 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:31:40.895825 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:31:40.895832 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 20:31:40.895844 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:31:40.895855 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:31:40.895863 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 20:31:40.895871 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 20:31:40.895881 kernel: Remapping and enabling EFI services.
Jan 13 20:31:40.895888 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:31:40.895895 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:31:40.895902 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 20:31:40.895909 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Jan 13 20:31:40.895916 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:31:40.895923 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 20:31:40.895930 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 20:31:40.895938 kernel: SMP: Total of 2 processors activated.
Jan 13 20:31:40.895945 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:31:40.895954 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 20:31:40.895961 kernel: CPU features: detected: Common not Private translations
Jan 13 20:31:40.895973 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:31:40.895982 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 20:31:40.895989 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 20:31:40.896001 kernel: CPU features: detected: LSE atomic instructions
Jan 13 20:31:40.896008 kernel: CPU features: detected: Privileged Access Never
Jan 13 20:31:40.896017 kernel: CPU features: detected: RAS Extension Support
Jan 13 20:31:40.896026 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 20:31:40.896077 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:31:40.896086 kernel: alternatives: applying system-wide alternatives
Jan 13 20:31:40.896095 kernel: devtmpfs: initialized
Jan 13 20:31:40.896103 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:31:40.896112 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 20:31:40.896120 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:31:40.896128 kernel: SMBIOS 3.0.0 present.
Jan 13 20:31:40.896135 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 13 20:31:40.896145 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:31:40.896153 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:31:40.896161 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:31:40.896168 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:31:40.896175 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:31:40.896183 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
Jan 13 20:31:40.896190 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:31:40.896198 kernel: cpuidle: using governor menu
Jan 13 20:31:40.896205 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:31:40.896214 kernel: ASID allocator initialised with 32768 entries
Jan 13 20:31:40.896222 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:31:40.896229 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:31:40.896236 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 20:31:40.896250 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 20:31:40.896264 kernel: Modules: 508960 pages in range for PLT usage
Jan 13 20:31:40.896272 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:31:40.896279 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:31:40.896287 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:31:40.896297 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:31:40.896304 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:31:40.896312 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:31:40.896319 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:31:40.896326 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:31:40.896333 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:31:40.896341 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:31:40.896348 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:31:40.896355 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:31:40.896364 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:31:40.896372 kernel: ACPI: Interpreter enabled
Jan 13 20:31:40.896379 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:31:40.896386 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:31:40.896394 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 20:31:40.896401 kernel: printk: console [ttyAMA0] enabled
Jan 13 20:31:40.896409 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:31:40.896578 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:31:40.896658 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:31:40.896725 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:31:40.896788 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 20:31:40.896852 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 20:31:40.896861 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 13 20:31:40.896869 kernel: PCI host bridge to bus 0000:00
Jan 13 20:31:40.896939 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 20:31:40.897000 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 20:31:40.897188 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 20:31:40.897301 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:31:40.897403 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 20:31:40.897483 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Jan 13 20:31:40.897552 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Jan 13 20:31:40.897619 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 13 20:31:40.897701 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 13 20:31:40.897768 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Jan 13 20:31:40.897842 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 13 20:31:40.897910 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Jan 13 20:31:40.897988 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 13 20:31:40.900226 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Jan 13 20:31:40.900361 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 13 20:31:40.900431 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Jan 13 20:31:40.900507 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 13 20:31:40.900573 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Jan 13 20:31:40.900649 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 13 20:31:40.900715 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Jan 13 20:31:40.900792 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 13 20:31:40.900857 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Jan 13 20:31:40.900930 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 13 20:31:40.900996 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Jan 13 20:31:40.901164 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Jan 13 20:31:40.901238 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Jan 13 20:31:40.901362 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Jan 13 20:31:40.901432 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207]
Jan 13 20:31:40.901516 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Jan 13 20:31:40.901583 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Jan 13 20:31:40.901650 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:31:40.901721 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 13 20:31:40.901797 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 13 20:31:40.901881 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Jan 13 20:31:40.901961 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Jan 13 20:31:40.904082 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Jan 13 20:31:40.904231 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 13 20:31:40.904334 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Jan 13 20:31:40.904406 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 13 20:31:40.904491 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 13 20:31:40.904560 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 13 20:31:40.904638 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Jan 13 20:31:40.904708 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Jan 13 20:31:40.904778 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 13 20:31:40.904856 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Jan 13 20:31:40.904928 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Jan 13 20:31:40.904996 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 13 20:31:40.905138 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Jan 13 20:31:40.905214 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 13 20:31:40.905328 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 13 20:31:40.905399 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 13 20:31:40.905469 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 13 20:31:40.905541 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 13 20:31:40.905605 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 13 20:31:40.905675 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 13 20:31:40.905742 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 13 20:31:40.905808 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 13 20:31:40.905887 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 13 20:31:40.905970 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 13 20:31:40.906065 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 13 20:31:40.906139 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 13 20:31:40.907674 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 13 20:31:40.907761 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Jan 13 20:31:40.907836 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 13 20:31:40.907903 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 13 20:31:40.907971 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 13 20:31:40.908058 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 13 20:31:40.908141 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 13 20:31:40.908212 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 13 20:31:40.908329 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 13 20:31:40.908401 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 13 20:31:40.908465 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 13 20:31:40.908535 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 13 20:31:40.908603 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 13 20:31:40.908674 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 13 20:31:40.908752 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Jan 13 20:31:40.908842 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 13 20:31:40.908915 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Jan 13 20:31:40.908982 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 13 20:31:40.909068 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Jan 13 20:31:40.909138 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 13 20:31:40.909210 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Jan 13 20:31:40.909292 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 13 20:31:40.909362 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Jan 13 20:31:40.909427 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 13 20:31:40.909494 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Jan 13 20:31:40.909559 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 13 20:31:40.909625 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Jan 13 20:31:40.909694 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 13 20:31:40.909762 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Jan 13 20:31:40.909846 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 13 20:31:40.910472 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Jan 13 20:31:40.910556 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 13 20:31:40.910631 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Jan 13 20:31:40.910700 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Jan 13 20:31:40.912660 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Jan 13 20:31:40.912781 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 13 20:31:40.912857 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Jan 13 20:31:40.912972 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 13 20:31:40.913406 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Jan 13 20:31:40.913514 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 13 20:31:40.913712 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Jan 13 20:31:40.913803 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 13 20:31:40.913939 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Jan 13 20:31:40.914007 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 13 20:31:40.914651 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Jan 13 20:31:40.914731 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 13 20:31:40.914801 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Jan 13 20:31:40.914867 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 13 20:31:40.914936 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Jan 13 20:31:40.915001 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 13 20:31:40.916899 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Jan 13 20:31:40.916983 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Jan 13 20:31:40.917084 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Jan 13 20:31:40.917165 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Jan 13 20:31:40.917234 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:31:40.917326 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Jan 13 20:31:40.917400 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 13 20:31:40.917468 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 13 20:31:40.917543 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 13 20:31:40.917608 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 13 20:31:40.917680 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Jan 13 20:31:40.917749 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 13 20:31:40.917818 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 13 20:31:40.917885 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 13 20:31:40.917949 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 13 20:31:40.918021 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 13 20:31:40.918109 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Jan 13 20:31:40.918202 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 13 20:31:40.918316 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 13 20:31:40.918398 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 13 20:31:40.918474 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 13 20:31:40.918552 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 13 20:31:40.918622 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 13 20:31:40.918687 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 13 20:31:40.918753 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 13 20:31:40.918817 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 13 20:31:40.918892 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Jan 13 20:31:40.918960 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 13 20:31:40.919029 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 13 20:31:40.919117 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 13 20:31:40.919189 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 13 20:31:40.919275 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Jan 13 20:31:40.919347 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Jan 13 20:31:40.919417 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 13 20:31:40.919484 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 13 20:31:40.919547 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 13 20:31:40.919617 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 13 20:31:40.919691 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Jan 13 20:31:40.919759 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Jan 13 20:31:40.919845 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Jan 13 20:31:40.919921 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 13 20:31:40.919988 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 13 20:31:40.920814 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 13 20:31:40.920899 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 13 20:31:40.920978 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 13 20:31:40.921063 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 13 20:31:40.921135 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 13 20:31:40.921202 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 13 20:31:40.921286 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 13 20:31:40.921356 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 13 20:31:40.921422 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Jan 13 20:31:40.921502 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 13 20:31:40.921578 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 20:31:40.921639 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 20:31:40.921697 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 20:31:40.921774 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 13 20:31:40.921846 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Jan 13 20:31:40.921907 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 13 20:31:40.921981 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Jan 13 20:31:40.922060 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Jan 13 20:31:40.922123 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 13 20:31:40.922195 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Jan 13 20:31:40.922266 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Jan 13 20:31:40.922334 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 13 20:31:40.922404 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Jan 13 20:31:40.922469 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Jan 13 20:31:40.922531 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 13 20:31:40.922611 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Jan 13 20:31:40.922674 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Jan 13 20:31:40.922737 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 13 20:31:40.922805 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Jan 13 20:31:40.922869 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Jan 13 20:31:40.922929 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 13 20:31:40.922998 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Jan 13 20:31:40.923073 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Jan 13 20:31:40.923137 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 13 20:31:40.923211 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Jan 13 20:31:40.923334 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Jan 13 20:31:40.923403 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 13 20:31:40.923473 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Jan 13 20:31:40.923535 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Jan 13 20:31:40.923596 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Jan 13 20:31:40.923606 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:31:40.923619 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:31:40.923627 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:31:40.923635 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:31:40.923642 kernel: iommu: Default domain type: Translated
Jan 13 20:31:40.923650 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:31:40.923658 kernel: efivars: Registered efivars operations
Jan 13 20:31:40.923666 kernel: vgaarb: loaded
Jan 13 20:31:40.923673 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:31:40.923681 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:31:40.923691 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:31:40.923699 kernel: pnp: PnP ACPI init
Jan 13 20:31:40.923781 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 20:31:40.923792 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:31:40.923800 kernel: NET: Registered PF_INET protocol family
Jan 13 20:31:40.923808 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:31:40.923821 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:31:40.923830 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:31:40.923842 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:31:40.923850 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:31:40.923858 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:31:40.923866 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:31:40.923874 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:31:40.923881 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:31:40.923980 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Jan 13 20:31:40.923994 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:31:40.924002 kernel: kvm [1]: HYP mode not available
Jan 13 20:31:40.924013 kernel: Initialise system trusted keyrings
Jan 13 20:31:40.924021 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:31:40.924029 kernel: Key type asymmetric registered
Jan 13 20:31:40.924062 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:31:40.924070 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:31:40.924079 kernel: io scheduler mq-deadline registered
Jan 13 20:31:40.924087 kernel: io scheduler kyber registered
Jan 13 20:31:40.924094 kernel: io scheduler bfq registered
Jan 13 20:31:40.924103 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 13 20:31:40.924203 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Jan 13 20:31:40.924319 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Jan 13 20:31:40.924391 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:31:40.924461 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Jan 13 20:31:40.924528 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Jan 13 20:31:40.924593 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 13 20:31:40.924672 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Jan 13 20:31:40.924739 kernel: pcieport 0000:00:02.2:
AER: enabled with IRQ 52 Jan 13 20:31:40.924806 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:31:40.924875 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 13 20:31:40.924942 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 13 20:31:40.925012 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:31:40.925118 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 13 20:31:40.925187 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 13 20:31:40.925261 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:31:40.925340 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 13 20:31:40.925409 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 13 20:31:40.925476 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:31:40.925553 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 13 20:31:40.925621 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 13 20:31:40.925687 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:31:40.925756 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 13 20:31:40.925822 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 13 20:31:40.925888 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:31:40.925902 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 13 20:31:40.925973 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 
Jan 13 20:31:40.926162 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 13 20:31:40.926242 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 20:31:40.926289 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 13 20:31:40.926300 kernel: ACPI: button: Power Button [PWRB] Jan 13 20:31:40.926308 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 13 20:31:40.926402 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002) Jan 13 20:31:40.926478 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 13 20:31:40.926550 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 13 20:31:40.926561 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 20:31:40.926569 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 13 20:31:40.926639 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 13 20:31:40.926649 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 13 20:31:40.926657 kernel: thunder_xcv, ver 1.0 Jan 13 20:31:40.926668 kernel: thunder_bgx, ver 1.0 Jan 13 20:31:40.926676 kernel: nicpf, ver 1.0 Jan 13 20:31:40.926684 kernel: nicvf, ver 1.0 Jan 13 20:31:40.926763 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 13 20:31:40.926826 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:31:40 UTC (1736800300) Jan 13 20:31:40.926836 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 20:31:40.926844 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 13 20:31:40.926853 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 13 20:31:40.926863 kernel: watchdog: Hard watchdog permanently disabled Jan 13 20:31:40.926871 kernel: NET: Registered PF_INET6 protocol family Jan 13 20:31:40.926879 kernel: Segment Routing with IPv6 Jan 13 20:31:40.926887 kernel: In-situ 
OAM (IOAM) with IPv6 Jan 13 20:31:40.926895 kernel: NET: Registered PF_PACKET protocol family Jan 13 20:31:40.926903 kernel: Key type dns_resolver registered Jan 13 20:31:40.926910 kernel: registered taskstats version 1 Jan 13 20:31:40.926918 kernel: Loading compiled-in X.509 certificates Jan 13 20:31:40.926926 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: a9edf9d44b1b82dedf7830d1843430df7c4d16cb' Jan 13 20:31:40.926934 kernel: Key type .fscrypt registered Jan 13 20:31:40.926943 kernel: Key type fscrypt-provisioning registered Jan 13 20:31:40.926951 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 13 20:31:40.926959 kernel: ima: Allocated hash algorithm: sha1 Jan 13 20:31:40.926967 kernel: ima: No architecture policies found Jan 13 20:31:40.926975 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 13 20:31:40.926983 kernel: clk: Disabling unused clocks Jan 13 20:31:40.926991 kernel: Freeing unused kernel memory: 39680K Jan 13 20:31:40.926998 kernel: Run /init as init process Jan 13 20:31:40.927008 kernel: with arguments: Jan 13 20:31:40.927016 kernel: /init Jan 13 20:31:40.927023 kernel: with environment: Jan 13 20:31:40.927158 kernel: HOME=/ Jan 13 20:31:40.927168 kernel: TERM=linux Jan 13 20:31:40.927175 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 20:31:40.927186 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:31:40.927196 systemd[1]: Detected virtualization kvm. Jan 13 20:31:40.927208 systemd[1]: Detected architecture arm64. Jan 13 20:31:40.927216 systemd[1]: Running in initrd. Jan 13 20:31:40.927224 systemd[1]: No hostname configured, using default hostname. 
Jan 13 20:31:40.927232 systemd[1]: Hostname set to . Jan 13 20:31:40.927241 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:31:40.927249 systemd[1]: Queued start job for default target initrd.target. Jan 13 20:31:40.927271 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:31:40.927280 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:31:40.927295 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 20:31:40.927305 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:31:40.927315 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 20:31:40.927324 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 20:31:40.927334 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 20:31:40.927342 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 20:31:40.927350 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:31:40.927360 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:31:40.927368 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:31:40.927376 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:31:40.927385 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:31:40.927393 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:31:40.927401 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:31:40.927409 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 13 20:31:40.927418 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 20:31:40.927428 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 20:31:40.927437 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:31:40.927445 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:31:40.927454 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:31:40.927462 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:31:40.927470 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 20:31:40.927479 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:31:40.927487 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 20:31:40.927495 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 20:31:40.927505 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:31:40.927514 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:31:40.927522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:31:40.927530 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 20:31:40.927539 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:31:40.927576 systemd-journald[237]: Collecting audit messages is disabled. Jan 13 20:31:40.927598 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 20:31:40.927608 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:31:40.927668 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:31:40.927679 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 13 20:31:40.927688 systemd-journald[237]: Journal started Jan 13 20:31:40.927709 systemd-journald[237]: Runtime Journal (/run/log/journal/ab200b27f4494ce78face9e1e4ce7a7d) is 8.0M, max 76.5M, 68.5M free. Jan 13 20:31:40.908660 systemd-modules-load[238]: Inserted module 'overlay' Jan 13 20:31:40.929568 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:31:40.933063 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 20:31:40.934987 systemd-modules-load[238]: Inserted module 'br_netfilter' Jan 13 20:31:40.935581 kernel: Bridge firewalling registered Jan 13 20:31:40.935783 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:31:40.938111 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:31:40.940200 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:31:40.954584 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:31:40.957338 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:31:40.963596 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:31:40.967355 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:31:40.969465 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:31:40.976697 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 20:31:40.981244 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:31:40.987781 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 13 20:31:41.011659 dracut-cmdline[272]: dracut-dracut-053 Jan 13 20:31:41.020078 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436 Jan 13 20:31:41.022785 systemd-resolved[273]: Positive Trust Anchors: Jan 13 20:31:41.022852 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:31:41.022888 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:31:41.029154 systemd-resolved[273]: Defaulting to hostname 'linux'. Jan 13 20:31:41.030796 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:31:41.031737 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:31:41.117131 kernel: SCSI subsystem initialized Jan 13 20:31:41.121080 kernel: Loading iSCSI transport class v2.0-870. Jan 13 20:31:41.132083 kernel: iscsi: registered transport (tcp) Jan 13 20:31:41.146091 kernel: iscsi: registered transport (qla4xxx) Jan 13 20:31:41.146181 kernel: QLogic iSCSI HBA Driver Jan 13 20:31:41.197669 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 13 20:31:41.207474 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 20:31:41.228305 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 20:31:41.228373 kernel: device-mapper: uevent: version 1.0.3 Jan 13 20:31:41.229346 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 20:31:41.284090 kernel: raid6: neonx8 gen() 14540 MB/s Jan 13 20:31:41.301108 kernel: raid6: neonx4 gen() 15057 MB/s Jan 13 20:31:41.318083 kernel: raid6: neonx2 gen() 12785 MB/s Jan 13 20:31:41.335085 kernel: raid6: neonx1 gen() 10374 MB/s Jan 13 20:31:41.352106 kernel: raid6: int64x8 gen() 6577 MB/s Jan 13 20:31:41.369133 kernel: raid6: int64x4 gen() 7151 MB/s Jan 13 20:31:41.386108 kernel: raid6: int64x2 gen() 5655 MB/s Jan 13 20:31:41.403077 kernel: raid6: int64x1 gen() 4949 MB/s Jan 13 20:31:41.403147 kernel: raid6: using algorithm neonx4 gen() 15057 MB/s Jan 13 20:31:41.420107 kernel: raid6: .... xor() 10601 MB/s, rmw enabled Jan 13 20:31:41.420190 kernel: raid6: using neon recovery algorithm Jan 13 20:31:41.425064 kernel: xor: measuring software checksum speed Jan 13 20:31:41.425107 kernel: 8regs : 19793 MB/sec Jan 13 20:31:41.426211 kernel: 32regs : 17842 MB/sec Jan 13 20:31:41.426278 kernel: arm64_neon : 27114 MB/sec Jan 13 20:31:41.426304 kernel: xor: using function: arm64_neon (27114 MB/sec) Jan 13 20:31:41.478101 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 20:31:41.493971 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:31:41.507584 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:31:41.521768 systemd-udevd[456]: Using default interface naming scheme 'v255'. Jan 13 20:31:41.525143 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 13 20:31:41.536283 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 20:31:41.552533 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Jan 13 20:31:41.591086 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:31:41.599220 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:31:41.651826 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:31:41.658371 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:31:41.690217 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:31:41.692208 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:31:41.692830 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:31:41.693449 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:31:41.701300 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:31:41.717615 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:31:41.787401 kernel: ACPI: bus type USB registered Jan 13 20:31:41.787516 kernel: usbcore: registered new interface driver usbfs Jan 13 20:31:41.787547 kernel: usbcore: registered new interface driver hub Jan 13 20:31:41.788620 kernel: usbcore: registered new device driver usb Jan 13 20:31:41.792935 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:31:41.793820 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:31:41.795999 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:31:41.798121 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:31:41.798329 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 13 20:31:41.806127 kernel: scsi host0: Virtio SCSI HBA Jan 13 20:31:41.806333 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 13 20:31:41.806364 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 13 20:31:41.799018 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:31:41.810675 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:31:41.836578 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:31:41.840866 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 13 20:31:41.846461 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 13 20:31:41.846593 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 13 20:31:41.846679 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 13 20:31:41.846764 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 13 20:31:41.846843 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 13 20:31:41.846922 kernel: hub 1-0:1.0: USB hub found Jan 13 20:31:41.847021 kernel: hub 1-0:1.0: 4 ports detected Jan 13 20:31:41.847142 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 13 20:31:41.847239 kernel: hub 2-0:1.0: USB hub found Jan 13 20:31:41.847351 kernel: hub 2-0:1.0: 4 ports detected Jan 13 20:31:41.848236 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 13 20:31:41.854155 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 13 20:31:41.854824 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 20:31:41.854838 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 13 20:31:41.849229 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 13 20:31:41.871420 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 13 20:31:41.880238 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 13 20:31:41.880424 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 13 20:31:41.880510 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 13 20:31:41.880589 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 20:31:41.880668 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:31:41.880688 kernel: GPT:17805311 != 80003071 Jan 13 20:31:41.880698 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:31:41.880708 kernel: GPT:17805311 != 80003071 Jan 13 20:31:41.880716 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:31:41.880726 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:31:41.880736 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 13 20:31:41.880392 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:31:41.927983 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (501) Jan 13 20:31:41.930056 kernel: BTRFS: device fsid 8e09fced-e016-4c4f-bac5-4013d13dfd78 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (513) Jan 13 20:31:41.938487 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 13 20:31:41.944549 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 13 20:31:41.949697 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 13 20:31:41.956727 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 13 20:31:41.958164 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. 
Jan 13 20:31:41.967223 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:31:41.976111 disk-uuid[572]: Primary Header is updated. Jan 13 20:31:41.976111 disk-uuid[572]: Secondary Entries is updated. Jan 13 20:31:41.976111 disk-uuid[572]: Secondary Header is updated. Jan 13 20:31:41.981098 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:31:41.987076 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:31:42.085861 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 13 20:31:42.329317 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 13 20:31:42.467664 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 13 20:31:42.467778 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 13 20:31:42.469060 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 13 20:31:42.522232 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 13 20:31:42.522964 kernel: usbcore: registered new interface driver usbhid Jan 13 20:31:42.522996 kernel: usbhid: USB HID core driver Jan 13 20:31:42.992147 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 20:31:42.992954 disk-uuid[573]: The operation has completed successfully. Jan 13 20:31:43.040762 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:31:43.040882 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:31:43.075432 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jan 13 20:31:43.080554 sh[587]: Success Jan 13 20:31:43.095052 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 13 20:31:43.166862 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:31:43.170235 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:31:43.173070 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 20:31:43.189674 kernel: BTRFS info (device dm-0): first mount of filesystem 8e09fced-e016-4c4f-bac5-4013d13dfd78 Jan 13 20:31:43.189733 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:31:43.189744 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:31:43.190562 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:31:43.191096 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:31:43.199108 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 20:31:43.201862 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:31:43.202570 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:31:43.209330 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:31:43.213116 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 13 20:31:43.224306 kernel: BTRFS info (device sda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:31:43.224370 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:31:43.224386 kernel: BTRFS info (device sda6): using free space tree Jan 13 20:31:43.229069 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 20:31:43.229148 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 20:31:43.239819 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 20:31:43.241053 kernel: BTRFS info (device sda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:31:43.246964 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:31:43.252237 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 20:31:43.344153 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:31:43.348238 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:31:43.352927 ignition[678]: Ignition 2.20.0 Jan 13 20:31:43.352937 ignition[678]: Stage: fetch-offline Jan 13 20:31:43.352975 ignition[678]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:31:43.355468 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 13 20:31:43.352984 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:31:43.353157 ignition[678]: parsed url from cmdline: ""
Jan 13 20:31:43.353160 ignition[678]: no config URL provided
Jan 13 20:31:43.353165 ignition[678]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:31:43.353172 ignition[678]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:31:43.353178 ignition[678]: failed to fetch config: resource requires networking
Jan 13 20:31:43.353533 ignition[678]: Ignition finished successfully
Jan 13 20:31:43.375508 systemd-networkd[775]: lo: Link UP
Jan 13 20:31:43.375520 systemd-networkd[775]: lo: Gained carrier
Jan 13 20:31:43.377295 systemd-networkd[775]: Enumeration completed
Jan 13 20:31:43.377752 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:31:43.377755 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:31:43.379535 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:31:43.379611 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:31:43.379614 systemd-networkd[775]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:31:43.380283 systemd-networkd[775]: eth0: Link UP
Jan 13 20:31:43.380287 systemd-networkd[775]: eth0: Gained carrier
Jan 13 20:31:43.380313 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:31:43.381016 systemd[1]: Reached target network.target - Network.
Jan 13 20:31:43.385603 systemd-networkd[775]: eth1: Link UP
Jan 13 20:31:43.385606 systemd-networkd[775]: eth1: Gained carrier
Jan 13 20:31:43.385617 systemd-networkd[775]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:31:43.387208 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 20:31:43.399341 ignition[778]: Ignition 2.20.0
Jan 13 20:31:43.399352 ignition[778]: Stage: fetch
Jan 13 20:31:43.399540 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:31:43.399550 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:31:43.399642 ignition[778]: parsed url from cmdline: ""
Jan 13 20:31:43.399645 ignition[778]: no config URL provided
Jan 13 20:31:43.399649 ignition[778]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:31:43.399656 ignition[778]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:31:43.399740 ignition[778]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 13 20:31:43.400501 ignition[778]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 13 20:31:43.416263 systemd-networkd[775]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:31:43.442135 systemd-networkd[775]: eth0: DHCPv4 address 138.199.152.196/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 13 20:31:43.601069 ignition[778]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 13 20:31:43.606102 ignition[778]: GET result: OK
Jan 13 20:31:43.606238 ignition[778]: parsing config with SHA512: cca6ce7a0a61f7915071735f96dc857c12bcfbe03f725d99a0d0cd5af72faa348a0fa312ae8645ef75004193b8b525030e7f05d437b5da32f5d9b9bce79022ad
Jan 13 20:31:43.612497 unknown[778]: fetched base config from "system"
Jan 13 20:31:43.612507 unknown[778]: fetched base config from "system"
Jan 13 20:31:43.612896 ignition[778]: fetch: fetch complete
Jan 13 20:31:43.612513 unknown[778]: fetched user config from "hetzner"
Jan 13 20:31:43.612901 ignition[778]: fetch: fetch passed
Jan 13 20:31:43.615444 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 20:31:43.612943 ignition[778]: Ignition finished successfully
Jan 13 20:31:43.626333 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:31:43.639417 ignition[786]: Ignition 2.20.0
Jan 13 20:31:43.639427 ignition[786]: Stage: kargs
Jan 13 20:31:43.639607 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:31:43.639616 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:31:43.640614 ignition[786]: kargs: kargs passed
Jan 13 20:31:43.640665 ignition[786]: Ignition finished successfully
Jan 13 20:31:43.645092 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:31:43.652384 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:31:43.663329 ignition[792]: Ignition 2.20.0
Jan 13 20:31:43.663340 ignition[792]: Stage: disks
Jan 13 20:31:43.663521 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:31:43.663531 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:31:43.666198 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:31:43.664539 ignition[792]: disks: disks passed
Jan 13 20:31:43.668267 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:31:43.664595 ignition[792]: Ignition finished successfully
Jan 13 20:31:43.669307 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:31:43.671193 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:31:43.672631 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:31:43.673499 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:31:43.679299 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:31:43.698161 systemd-fsck[800]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 20:31:43.703924 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:31:43.711172 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:31:43.765325 kernel: EXT4-fs (sda9): mounted filesystem 8fd847fb-a6be-44f6-9adf-0a0a79b9fa94 r/w with ordered data mode. Quota mode: none.
Jan 13 20:31:43.765732 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:31:43.767770 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:31:43.780322 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:31:43.784238 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:31:43.788379 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 13 20:31:43.790162 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:31:43.790205 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:31:43.800052 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (808)
Jan 13 20:31:43.801336 kernel: BTRFS info (device sda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:31:43.801375 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:31:43.801387 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:31:43.805462 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:31:43.805520 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 20:31:43.812411 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:31:43.814319 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:31:43.823377 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:31:43.872876 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:31:43.875214 coreos-metadata[810]: Jan 13 20:31:43.874 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 13 20:31:43.878591 coreos-metadata[810]: Jan 13 20:31:43.878 INFO Fetch successful
Jan 13 20:31:43.879364 coreos-metadata[810]: Jan 13 20:31:43.879 INFO wrote hostname ci-4152-2-0-6-5d4da4afb6 to /sysroot/etc/hostname
Jan 13 20:31:43.882980 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 13 20:31:43.885963 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:31:43.889805 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:31:43.894980 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:31:44.005324 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:31:44.012215 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:31:44.016743 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:31:44.021087 kernel: BTRFS info (device sda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:31:44.042110 ignition[925]: INFO : Ignition 2.20.0
Jan 13 20:31:44.042110 ignition[925]: INFO : Stage: mount
Jan 13 20:31:44.043103 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:31:44.043103 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:31:44.044816 ignition[925]: INFO : mount: mount passed
Jan 13 20:31:44.044816 ignition[925]: INFO : Ignition finished successfully
Jan 13 20:31:44.046288 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:31:44.051233 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:31:44.052927 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:31:44.189307 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:31:44.197354 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:31:44.208180 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (937)
Jan 13 20:31:44.208264 kernel: BTRFS info (device sda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:31:44.209174 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:31:44.209209 kernel: BTRFS info (device sda6): using free space tree
Jan 13 20:31:44.212304 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 20:31:44.212363 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 20:31:44.214742 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:31:44.236277 ignition[954]: INFO : Ignition 2.20.0
Jan 13 20:31:44.236277 ignition[954]: INFO : Stage: files
Jan 13 20:31:44.237348 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:31:44.237348 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:31:44.238839 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:31:44.240064 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:31:44.240064 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:31:44.243774 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:31:44.244837 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:31:44.244837 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:31:44.244534 unknown[954]: wrote ssh authorized keys file for user: core
Jan 13 20:31:44.248095 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 13 20:31:44.248095 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 13 20:31:44.248095 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:31:44.248095 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 20:31:44.367188 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 20:31:45.160925 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:31:45.160925 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:31:45.160925 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:31:45.160925 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:31:45.160925 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:31:45.160925 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:31:45.160925 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:31:45.160925 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:31:45.160925 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:31:45.160925 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:31:45.160925 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:31:45.160925 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:31:45.160925 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:31:45.160925 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:31:45.177843 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jan 13 20:31:45.165246 systemd-networkd[775]: eth1: Gained IPv6LL
Jan 13 20:31:45.415276 systemd-networkd[775]: eth0: Gained IPv6LL
Jan 13 20:31:45.734339 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 13 20:31:46.105274 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jan 13 20:31:46.105274 ignition[954]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 13 20:31:46.108837 ignition[954]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 13 20:31:46.108837 ignition[954]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 13 20:31:46.108837 ignition[954]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 13 20:31:46.108837 ignition[954]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 13 20:31:46.108837 ignition[954]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:31:46.108837 ignition[954]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:31:46.108837 ignition[954]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 13 20:31:46.108837 ignition[954]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Jan 13 20:31:46.108837 ignition[954]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 13 20:31:46.108837 ignition[954]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 13 20:31:46.108837 ignition[954]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Jan 13 20:31:46.108837 ignition[954]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 20:31:46.108837 ignition[954]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:31:46.108837 ignition[954]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:31:46.108837 ignition[954]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:31:46.108837 ignition[954]: INFO : files: files passed
Jan 13 20:31:46.108837 ignition[954]: INFO : Ignition finished successfully
Jan 13 20:31:46.110834 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:31:46.121309 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:31:46.127331 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:31:46.131422 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:31:46.133748 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:31:46.152901 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:31:46.154181 initrd-setup-root-after-ignition[982]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:31:46.155580 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:31:46.158328 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:31:46.160577 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:31:46.166414 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:31:46.195431 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:31:46.195590 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:31:46.197128 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:31:46.198003 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:31:46.199179 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:31:46.211445 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:31:46.228996 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:31:46.242382 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:31:46.253084 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:31:46.254813 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:31:46.255634 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:31:46.257844 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:31:46.257980 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:31:46.259904 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:31:46.260777 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:31:46.262312 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:31:46.264521 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:31:46.265999 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:31:46.267184 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:31:46.268394 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:31:46.269659 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:31:46.270850 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:31:46.271820 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:31:46.272706 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:31:46.272835 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:31:46.274087 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:31:46.274738 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:31:46.275770 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:31:46.275842 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:31:46.276904 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:31:46.277025 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:31:46.278575 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:31:46.278694 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:31:46.279989 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:31:46.280096 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:31:46.280990 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 13 20:31:46.281105 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 13 20:31:46.291580 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:31:46.297354 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:31:46.297942 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:31:46.298161 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:31:46.302701 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:31:46.302916 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:31:46.310670 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:31:46.312121 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:31:46.316993 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:31:46.320400 ignition[1006]: INFO : Ignition 2.20.0
Jan 13 20:31:46.320653 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:31:46.320770 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:31:46.323993 ignition[1006]: INFO : Stage: umount
Jan 13 20:31:46.323993 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:31:46.323993 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 13 20:31:46.323993 ignition[1006]: INFO : umount: umount passed
Jan 13 20:31:46.323993 ignition[1006]: INFO : Ignition finished successfully
Jan 13 20:31:46.324785 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:31:46.324905 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:31:46.326586 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:31:46.326684 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:31:46.327460 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:31:46.327502 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:31:46.328014 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 20:31:46.328064 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 20:31:46.328942 systemd[1]: Stopped target network.target - Network.
Jan 13 20:31:46.329837 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:31:46.329886 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:31:46.330828 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:31:46.331634 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:31:46.335116 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:31:46.336089 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:31:46.336992 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:31:46.338162 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:31:46.338206 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:31:46.339283 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:31:46.339331 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:31:46.340387 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:31:46.340439 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:31:46.341223 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:31:46.341273 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:31:46.342079 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:31:46.342114 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:31:46.343445 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:31:46.344117 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:31:46.347148 systemd-networkd[775]: eth0: DHCPv6 lease lost
Jan 13 20:31:46.351161 systemd-networkd[775]: eth1: DHCPv6 lease lost
Jan 13 20:31:46.353908 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:31:46.354712 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:31:46.356329 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:31:46.356472 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:31:46.358424 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:31:46.358487 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:31:46.364211 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:31:46.364783 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:31:46.364876 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:31:46.366942 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:31:46.366996 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:31:46.367869 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:31:46.367913 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:31:46.369289 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:31:46.369333 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:31:46.370656 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:31:46.382714 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:31:46.382838 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:31:46.390056 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:31:46.390376 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:31:46.392985 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:31:46.393051 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:31:46.394831 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:31:46.394866 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:31:46.396582 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:31:46.396628 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:31:46.399020 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:31:46.399095 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:31:46.400716 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:31:46.400758 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:31:46.410589 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:31:46.413553 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:31:46.413635 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:31:46.415145 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 20:31:46.415199 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:31:46.417417 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:31:46.417466 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:31:46.418214 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:31:46.418287 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:31:46.420323 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:31:46.422070 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:31:46.423570 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:31:46.429343 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:31:46.441413 systemd[1]: Switching root.
Jan 13 20:31:46.464067 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Jan 13 20:31:46.464146 systemd-journald[237]: Journal stopped
Jan 13 20:31:47.478835 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 20:31:47.478910 kernel: SELinux: policy capability open_perms=1
Jan 13 20:31:47.478926 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 20:31:47.478935 kernel: SELinux: policy capability always_check_network=0
Jan 13 20:31:47.478945 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 20:31:47.478955 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 20:31:47.478964 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 20:31:47.478974 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 20:31:47.478983 kernel: audit: type=1403 audit(1736800306.710:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:31:47.478998 systemd[1]: Successfully loaded SELinux policy in 34.259ms.
Jan 13 20:31:47.479022 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.031ms.
Jan 13 20:31:47.479051 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:31:47.479063 systemd[1]: Detected virtualization kvm.
Jan 13 20:31:47.479074 systemd[1]: Detected architecture arm64.
Jan 13 20:31:47.479097 systemd[1]: Detected first boot.
Jan 13 20:31:47.479113 systemd[1]: Hostname set to .
Jan 13 20:31:47.479123 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:31:47.479134 zram_generator::config[1066]: No configuration found.
Jan 13 20:31:47.479146 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:31:47.479159 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:31:47.479170 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 13 20:31:47.479181 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:31:47.479192 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:31:47.479202 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:31:47.479213 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:31:47.479234 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:31:47.479246 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:31:47.479258 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:31:47.479269 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:31:47.479279 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:31:47.479289 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:31:47.479300 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:31:47.479310 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:31:47.479321 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:31:47.479332 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:31:47.479342 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 13 20:31:47.479354 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:31:47.479366 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:31:47.479376 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:31:47.479389 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:31:47.479400 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:31:47.479410 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:31:47.479421 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:31:47.479432 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:31:47.479443 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:31:47.479454 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:31:47.479464 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:31:47.479475 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:31:47.479485 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:31:47.479495 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:31:47.479506 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:31:47.479517 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:31:47.479529 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:31:47.479543 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:31:47.479553 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:31:47.479563 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:31:47.479577 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:31:47.479589 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:31:47.479601 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:31:47.479611 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:31:47.479622 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:31:47.479633 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:31:47.479643 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:31:47.479653 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:31:47.479664 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:31:47.479675 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:31:47.479687 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 13 20:31:47.479698 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 13 20:31:47.479708 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:31:47.479719 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:31:47.479729 kernel: loop: module loaded
Jan 13 20:31:47.479739 kernel: fuse: init (API version 7.39)
Jan 13 20:31:47.479748 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:31:47.479759 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:31:47.479772 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:31:47.479782 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:31:47.479793 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:31:47.479802 kernel: ACPI: bus type drm_connector registered
Jan 13 20:31:47.479812 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:31:47.479822 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:31:47.479833 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:31:47.479843 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:31:47.479853 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:31:47.479865 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:31:47.479910 systemd-journald[1158]: Collecting audit messages is disabled.
Jan 13 20:31:47.479940 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:31:47.479951 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:31:47.479961 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:31:47.479973 systemd-journald[1158]: Journal started
Jan 13 20:31:47.479995 systemd-journald[1158]: Runtime Journal (/run/log/journal/ab200b27f4494ce78face9e1e4ce7a7d) is 8.0M, max 76.5M, 68.5M free.
Jan 13 20:31:47.482324 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:31:47.485420 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:31:47.484610 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:31:47.484793 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:31:47.486146 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:31:47.486412 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:31:47.487645 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:31:47.487897 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:31:47.488872 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:31:47.489337 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:31:47.490414 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:31:47.491546 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:31:47.492606 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:31:47.506408 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:31:47.512285 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:31:47.518237 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:31:47.521363 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:31:47.536813 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:31:47.548219 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:31:47.548878 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:31:47.558269 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:31:47.560176 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:31:47.570475 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:31:47.574915 systemd-journald[1158]: Time spent on flushing to /var/log/journal/ab200b27f4494ce78face9e1e4ce7a7d is 47.320ms for 1111 entries. 
Jan 13 20:31:47.574915 systemd-journald[1158]: System Journal (/var/log/journal/ab200b27f4494ce78face9e1e4ce7a7d) is 8.0M, max 584.8M, 576.8M free. Jan 13 20:31:47.648281 systemd-journald[1158]: Received client request to flush runtime journal. Jan 13 20:31:47.589242 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:31:47.591310 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:31:47.592001 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:31:47.600538 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:31:47.612268 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:31:47.613379 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:31:47.615215 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 20:31:47.648383 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:31:47.654561 udevadm[1210]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:31:47.656250 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:31:47.660834 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Jan 13 20:31:47.660850 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Jan 13 20:31:47.667343 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:31:47.679349 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:31:47.708203 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:31:47.714606 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 13 20:31:47.736984 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Jan 13 20:31:47.737006 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Jan 13 20:31:47.741537 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:31:48.150984 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:31:48.156377 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:31:48.182019 systemd-udevd[1231]: Using default interface naming scheme 'v255'. Jan 13 20:31:48.206213 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:31:48.219831 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:31:48.236699 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:31:48.292273 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jan 13 20:31:48.326541 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:31:48.413645 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:31:48.419213 systemd-networkd[1238]: lo: Link UP Jan 13 20:31:48.419234 systemd-networkd[1238]: lo: Gained carrier Jan 13 20:31:48.420898 systemd-networkd[1238]: Enumeration completed Jan 13 20:31:48.423215 systemd-networkd[1238]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:31:48.423260 systemd-networkd[1238]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:31:48.423983 systemd-networkd[1238]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:31:48.423986 systemd-networkd[1238]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 13 20:31:48.424537 systemd-networkd[1238]: eth0: Link UP Jan 13 20:31:48.424540 systemd-networkd[1238]: eth0: Gained carrier Jan 13 20:31:48.424555 systemd-networkd[1238]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:31:48.426691 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:31:48.428868 systemd-networkd[1238]: eth1: Link UP Jan 13 20:31:48.428875 systemd-networkd[1238]: eth1: Gained carrier Jan 13 20:31:48.428894 systemd-networkd[1238]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:31:48.445349 systemd-networkd[1238]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:31:48.445656 systemd-networkd[1238]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:31:48.447275 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:31:48.460324 systemd-networkd[1238]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:31:48.466680 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:31:48.467506 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:31:48.467554 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:31:48.473445 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:31:48.478081 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 20:31:48.479942 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 13 20:31:48.484116 systemd-networkd[1238]: eth0: DHCPv4 address 138.199.152.196/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 13 20:31:48.485373 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:31:48.487618 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:31:48.487793 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:31:48.497981 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:31:48.501339 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:31:48.515972 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:31:48.518710 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:31:48.518767 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:31:48.522818 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1251) Jan 13 20:31:48.555340 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 13 20:31:48.555433 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 13 20:31:48.555451 kernel: [drm] features: -context_init Jan 13 20:31:48.558085 kernel: [drm] number of scanouts: 1 Jan 13 20:31:48.558178 kernel: [drm] number of cap sets: 0 Jan 13 20:31:48.559048 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Jan 13 20:31:48.564258 kernel: Console: switching to colour frame buffer device 160x50 Jan 13 20:31:48.573102 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 13 20:31:48.578095 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 13 20:31:48.594984 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 13 20:31:48.596056 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 20:31:48.596453 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:31:48.603366 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:31:48.682025 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:31:48.695154 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:31:48.709199 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:31:48.721058 lvm[1303]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:31:48.747898 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:31:48.749542 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:31:48.756349 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:31:48.761554 lvm[1306]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 20:31:48.788616 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 20:31:48.790611 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:31:48.792359 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:31:48.792513 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:31:48.793698 systemd[1]: Reached target machines.target - Containers. 
Jan 13 20:31:48.798101 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:31:48.807533 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:31:48.813504 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:31:48.814506 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:31:48.824393 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 20:31:48.830365 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:31:48.836302 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:31:48.839346 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:31:48.851923 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:31:48.864064 kernel: loop0: detected capacity change from 0 to 8 Jan 13 20:31:48.873564 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:31:48.877945 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:31:48.874480 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jan 13 20:31:48.900665 kernel: loop1: detected capacity change from 0 to 194512 Jan 13 20:31:48.941270 kernel: loop2: detected capacity change from 0 to 113536 Jan 13 20:31:48.979122 kernel: loop3: detected capacity change from 0 to 116808 Jan 13 20:31:49.019339 kernel: loop4: detected capacity change from 0 to 8 Jan 13 20:31:49.021075 kernel: loop5: detected capacity change from 0 to 194512 Jan 13 20:31:49.036102 kernel: loop6: detected capacity change from 0 to 113536 Jan 13 20:31:49.051405 kernel: loop7: detected capacity change from 0 to 116808 Jan 13 20:31:49.060100 (sd-merge)[1329]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 13 20:31:49.060608 (sd-merge)[1329]: Merged extensions into '/usr'. Jan 13 20:31:49.067698 systemd[1]: Reloading requested from client PID 1314 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:31:49.067718 systemd[1]: Reloading... Jan 13 20:31:49.145196 zram_generator::config[1355]: No configuration found. Jan 13 20:31:49.282710 ldconfig[1310]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:31:49.283158 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:31:49.346378 systemd[1]: Reloading finished in 277 ms. Jan 13 20:31:49.365881 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:31:49.367051 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:31:49.377456 systemd[1]: Starting ensure-sysext.service... Jan 13 20:31:49.381278 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:31:49.388275 systemd[1]: Reloading requested from client PID 1402 ('systemctl') (unit ensure-sysext.service)... 
Jan 13 20:31:49.388301 systemd[1]: Reloading... Jan 13 20:31:49.421897 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:31:49.422599 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 20:31:49.423358 systemd-tmpfiles[1403]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:31:49.423668 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. Jan 13 20:31:49.423799 systemd-tmpfiles[1403]: ACLs are not supported, ignoring. Jan 13 20:31:49.430008 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:31:49.430234 systemd-tmpfiles[1403]: Skipping /boot Jan 13 20:31:49.439497 systemd-tmpfiles[1403]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:31:49.439621 systemd-tmpfiles[1403]: Skipping /boot Jan 13 20:31:49.483071 zram_generator::config[1432]: No configuration found. Jan 13 20:31:49.592653 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:31:49.659384 systemd[1]: Reloading finished in 270 ms. Jan 13 20:31:49.676496 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:31:49.688565 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:31:49.705389 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:31:49.711347 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:31:49.727434 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:31:49.733412 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jan 13 20:31:49.741326 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:31:49.751346 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:31:49.759659 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:31:49.776390 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:31:49.779233 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:31:49.782072 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:31:49.796590 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:31:49.796822 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:31:49.802667 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:31:49.802853 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:31:49.811923 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:31:49.824289 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:31:49.829352 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:31:49.833642 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:31:49.833833 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:31:49.836727 augenrules[1513]: No rules Jan 13 20:31:49.838000 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 13 20:31:49.844726 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:31:49.845314 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:31:49.845565 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:31:49.848833 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:31:49.859983 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:31:49.868159 systemd-resolved[1485]: Positive Trust Anchors: Jan 13 20:31:49.868557 systemd-resolved[1485]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:31:49.868635 systemd-resolved[1485]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:31:49.874167 systemd-resolved[1485]: Using system hostname 'ci-4152-2-0-6-5d4da4afb6'. Jan 13 20:31:49.874289 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:31:49.877390 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:31:49.878706 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:31:49.886302 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 13 20:31:49.898415 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:31:49.907345 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:31:49.909689 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:31:49.909875 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:31:49.910546 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:31:49.911309 augenrules[1528]: /sbin/augenrules: No change Jan 13 20:31:49.912169 systemd[1]: Finished ensure-sysext.service. Jan 13 20:31:49.913084 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:31:49.913353 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:31:49.914309 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:31:49.920019 augenrules[1552]: No rules Jan 13 20:31:49.920656 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:31:49.922502 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:31:49.922890 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:31:49.924657 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:31:49.925090 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:31:49.926745 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:31:49.928284 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:31:49.935783 systemd[1]: Reached target network.target - Network. Jan 13 20:31:49.936438 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 13 20:31:49.937078 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:31:49.937139 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:31:49.944360 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 20:31:49.986881 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 20:31:49.989631 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:31:49.991087 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:31:49.992057 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:31:49.994075 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:31:49.995834 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:31:49.996021 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:31:49.997375 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 20:31:49.998674 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:31:49.999584 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:31:50.000438 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:31:50.002282 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:31:50.004536 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:31:50.006866 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:31:50.009681 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Jan 13 20:31:50.010866 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:31:50.011960 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:31:50.013511 systemd[1]: System is tainted: cgroupsv1 Jan 13 20:31:50.013567 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:31:50.013596 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 20:31:50.015464 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 20:31:50.020266 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 20:31:50.022234 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 20:31:50.034361 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 20:31:50.040486 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 20:31:50.043941 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 20:31:50.048583 jq[1574]: false Jan 13 20:31:50.052680 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 20:31:50.058514 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 20:31:50.073710 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 13 20:31:50.078275 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 20:31:50.089748 coreos-metadata[1571]: Jan 13 20:31:50.089 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 13 20:31:50.091406 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 13 20:31:50.101904 coreos-metadata[1571]: Jan 13 20:31:50.096 INFO Fetch successful Jan 13 20:31:50.101904 coreos-metadata[1571]: Jan 13 20:31:50.096 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 13 20:31:50.101904 coreos-metadata[1571]: Jan 13 20:31:50.096 INFO Fetch successful Jan 13 20:31:50.100336 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 20:31:50.103851 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 20:31:50.107389 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 20:31:50.114608 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 20:31:50.123512 extend-filesystems[1577]: Found loop4 Jan 13 20:31:50.123512 extend-filesystems[1577]: Found loop5 Jan 13 20:31:50.123512 extend-filesystems[1577]: Found loop6 Jan 13 20:31:50.123512 extend-filesystems[1577]: Found loop7 Jan 13 20:31:50.123512 extend-filesystems[1577]: Found sda Jan 13 20:31:50.123512 extend-filesystems[1577]: Found sda1 Jan 13 20:31:50.123512 extend-filesystems[1577]: Found sda2 Jan 13 20:31:50.123512 extend-filesystems[1577]: Found sda3 Jan 13 20:31:50.123512 extend-filesystems[1577]: Found usr Jan 13 20:31:50.123512 extend-filesystems[1577]: Found sda4 Jan 13 20:31:50.123512 extend-filesystems[1577]: Found sda6 Jan 13 20:31:50.123512 extend-filesystems[1577]: Found sda7 Jan 13 20:31:50.123512 extend-filesystems[1577]: Found sda9 Jan 13 20:31:50.123512 extend-filesystems[1577]: Checking size of /dev/sda9 Jan 13 20:31:50.124327 dbus-daemon[1572]: [system] SELinux support is enabled Jan 13 20:31:50.128935 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 20:31:50.150429 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 13 20:31:50.150692 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:31:50.151030 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:31:50.151329 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:31:50.158427 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:31:50.158675 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:31:50.168111 extend-filesystems[1577]: Resized partition /dev/sda9
Jan 13 20:31:50.179054 extend-filesystems[1613]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:31:50.183123 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Jan 13 20:31:50.185778 (ntainerd)[1614]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:31:50.189708 jq[1596]: true
Jan 13 20:31:50.215289 systemd-networkd[1238]: eth1: Gained IPv6LL
Jan 13 20:31:50.229390 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 20:31:50.233697 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 20:31:50.242196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:31:50.250761 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 20:31:50.255527 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:31:50.255570 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:31:50.261850 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:31:50.263621 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:31:50.281097 jq[1621]: true
Jan 13 20:31:50.284087 update_engine[1593]: I20250113 20:31:50.281656 1593 main.cc:92] Flatcar Update Engine starting
Jan 13 20:31:50.298328 tar[1605]: linux-arm64/helm
Jan 13 20:31:50.318921 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1246)
Jan 13 20:31:50.297704 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:31:50.320134 update_engine[1593]: I20250113 20:31:50.301431 1593 update_check_scheduler.cc:74] Next update check in 3m50s
Jan 13 20:31:50.319412 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:31:50.333320 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:31:50.343367 systemd-logind[1591]: New seat seat0.
Jan 13 20:31:50.346370 systemd-logind[1591]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 13 20:31:50.008093 systemd-journald[1158]: Time jumped backwards, rotating.
Jan 13 20:31:50.008146 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Jan 13 20:31:50.346391 systemd-logind[1591]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Jan 13 20:31:50.018320 extend-filesystems[1613]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jan 13 20:31:50.018320 extend-filesystems[1613]: old_desc_blocks = 1, new_desc_blocks = 5
Jan 13 20:31:50.018320 extend-filesystems[1613]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Jan 13 20:31:49.933662 systemd-resolved[1485]: Clock change detected. Flushing caches.
Jan 13 20:31:50.063992 extend-filesystems[1577]: Resized filesystem in /dev/sda9
Jan 13 20:31:50.063992 extend-filesystems[1577]: Found sr0
Jan 13 20:31:50.076276 bash[1668]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:31:49.933766 systemd-timesyncd[1566]: Contacted time server 5.9.193.27:123 (0.flatcar.pool.ntp.org).
Jan 13 20:31:49.934078 systemd-timesyncd[1566]: Initial clock synchronization to Mon 2025-01-13 20:31:49.933422 UTC.
Jan 13 20:31:49.946804 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:31:49.983731 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 13 20:31:49.984715 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 20:31:49.987659 systemd-networkd[1238]: eth0: Gained IPv6LL
Jan 13 20:31:50.011916 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:31:50.012190 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 20:31:50.039182 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 20:31:50.066916 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:31:50.083762 systemd[1]: Starting sshkeys.service...
Jan 13 20:31:50.118865 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 13 20:31:50.135790 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 13 20:31:50.181002 coreos-metadata[1680]: Jan 13 20:31:50.180 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Jan 13 20:31:50.181002 coreos-metadata[1680]: Jan 13 20:31:50.180 INFO Fetch successful
Jan 13 20:31:50.189353 unknown[1680]: wrote ssh authorized keys file for user: core
Jan 13 20:31:50.196910 containerd[1614]: time="2025-01-13T20:31:50.196810841Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:31:50.214136 locksmithd[1641]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:31:50.226170 update-ssh-keys[1689]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:31:50.227450 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 13 20:31:50.235029 systemd[1]: Finished sshkeys.service.
Jan 13 20:31:50.256248 containerd[1614]: time="2025-01-13T20:31:50.254543681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:50.259269 containerd[1614]: time="2025-01-13T20:31:50.258877881Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:31:50.259269 containerd[1614]: time="2025-01-13T20:31:50.258924081Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:31:50.259269 containerd[1614]: time="2025-01-13T20:31:50.258942041Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:31:50.259269 containerd[1614]: time="2025-01-13T20:31:50.259101081Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:31:50.259269 containerd[1614]: time="2025-01-13T20:31:50.259118121Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:50.259269 containerd[1614]: time="2025-01-13T20:31:50.259182161Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:31:50.259269 containerd[1614]: time="2025-01-13T20:31:50.259193721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:50.259741 containerd[1614]: time="2025-01-13T20:31:50.259713281Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:31:50.259820 containerd[1614]: time="2025-01-13T20:31:50.259805881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:50.259871 containerd[1614]: time="2025-01-13T20:31:50.259858401Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:31:50.259912 containerd[1614]: time="2025-01-13T20:31:50.259901001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:50.260031 containerd[1614]: time="2025-01-13T20:31:50.260016801Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:50.262871 containerd[1614]: time="2025-01-13T20:31:50.262258961Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:50.262871 containerd[1614]: time="2025-01-13T20:31:50.262540561Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:31:50.262871 containerd[1614]: time="2025-01-13T20:31:50.262567481Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:31:50.262871 containerd[1614]: time="2025-01-13T20:31:50.262672801Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:31:50.262871 containerd[1614]: time="2025-01-13T20:31:50.262716881Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:31:50.268496 containerd[1614]: time="2025-01-13T20:31:50.268451401Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:31:50.268683 containerd[1614]: time="2025-01-13T20:31:50.268668561Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:31:50.268792 containerd[1614]: time="2025-01-13T20:31:50.268778961Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:31:50.269156 containerd[1614]: time="2025-01-13T20:31:50.268836281Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:31:50.269156 containerd[1614]: time="2025-01-13T20:31:50.268857721Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:31:50.269156 containerd[1614]: time="2025-01-13T20:31:50.269040281Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:31:50.270114 containerd[1614]: time="2025-01-13T20:31:50.270084361Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:31:50.272245 containerd[1614]: time="2025-01-13T20:31:50.271372401Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:31:50.272245 containerd[1614]: time="2025-01-13T20:31:50.271419321Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:31:50.272245 containerd[1614]: time="2025-01-13T20:31:50.271438201Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:31:50.272245 containerd[1614]: time="2025-01-13T20:31:50.271455241Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:31:50.272245 containerd[1614]: time="2025-01-13T20:31:50.271468681Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:31:50.272245 containerd[1614]: time="2025-01-13T20:31:50.271481881Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:31:50.272245 containerd[1614]: time="2025-01-13T20:31:50.271497961Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:31:50.272245 containerd[1614]: time="2025-01-13T20:31:50.271514041Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:31:50.272245 containerd[1614]: time="2025-01-13T20:31:50.271528121Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:31:50.272245 containerd[1614]: time="2025-01-13T20:31:50.271540001Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:31:50.272245 containerd[1614]: time="2025-01-13T20:31:50.271553921Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:31:50.272245 containerd[1614]: time="2025-01-13T20:31:50.271576241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:31:50.272245 containerd[1614]: time="2025-01-13T20:31:50.271602921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:31:50.272245 containerd[1614]: time="2025-01-13T20:31:50.271621081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:31:50.272625 containerd[1614]: time="2025-01-13T20:31:50.271634721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:31:50.272625 containerd[1614]: time="2025-01-13T20:31:50.271647601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:31:50.272625 containerd[1614]: time="2025-01-13T20:31:50.271661841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:31:50.272625 containerd[1614]: time="2025-01-13T20:31:50.271674041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 20:31:50.272625 containerd[1614]: time="2025-01-13T20:31:50.271687001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 20:31:50.272625 containerd[1614]: time="2025-01-13T20:31:50.271699641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 20:31:50.272625 containerd[1614]: time="2025-01-13T20:31:50.271714961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 20:31:50.272625 containerd[1614]: time="2025-01-13T20:31:50.271727361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 20:31:50.272625 containerd[1614]: time="2025-01-13T20:31:50.271747761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 20:31:50.272625 containerd[1614]: time="2025-01-13T20:31:50.271760281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 20:31:50.272625 containerd[1614]: time="2025-01-13T20:31:50.271774681Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 20:31:50.272625 containerd[1614]: time="2025-01-13T20:31:50.271798801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 20:31:50.272625 containerd[1614]: time="2025-01-13T20:31:50.271812281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 20:31:50.272625 containerd[1614]: time="2025-01-13T20:31:50.271823241Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 20:31:50.272833 containerd[1614]: time="2025-01-13T20:31:50.271998801Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 20:31:50.272833 containerd[1614]: time="2025-01-13T20:31:50.272018281Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 20:31:50.272833 containerd[1614]: time="2025-01-13T20:31:50.272029641Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 20:31:50.272833 containerd[1614]: time="2025-01-13T20:31:50.272041601Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 20:31:50.272833 containerd[1614]: time="2025-01-13T20:31:50.272050361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 20:31:50.272833 containerd[1614]: time="2025-01-13T20:31:50.272062281Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 20:31:50.272833 containerd[1614]: time="2025-01-13T20:31:50.272072041Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 20:31:50.272833 containerd[1614]: time="2025-01-13T20:31:50.272084921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 20:31:50.277100 containerd[1614]: time="2025-01-13T20:31:50.273581201Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 20:31:50.277100 containerd[1614]: time="2025-01-13T20:31:50.273645841Z" level=info msg="Connect containerd service"
Jan 13 20:31:50.277100 containerd[1614]: time="2025-01-13T20:31:50.273688041Z" level=info msg="using legacy CRI server"
Jan 13 20:31:50.277100 containerd[1614]: time="2025-01-13T20:31:50.273695561Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 20:31:50.277100 containerd[1614]: time="2025-01-13T20:31:50.273936481Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 20:31:50.277100 containerd[1614]: time="2025-01-13T20:31:50.276621561Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:31:50.277594 containerd[1614]: time="2025-01-13T20:31:50.277569681Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 20:31:50.277690 containerd[1614]: time="2025-01-13T20:31:50.277676601Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 20:31:50.278798 containerd[1614]: time="2025-01-13T20:31:50.278524321Z" level=info msg="Start subscribing containerd event"
Jan 13 20:31:50.278902 containerd[1614]: time="2025-01-13T20:31:50.278888641Z" level=info msg="Start recovering state"
Jan 13 20:31:50.279335 containerd[1614]: time="2025-01-13T20:31:50.279317241Z" level=info msg="Start event monitor"
Jan 13 20:31:50.287898 containerd[1614]: time="2025-01-13T20:31:50.285129961Z" level=info msg="Start snapshots syncer"
Jan 13 20:31:50.292032 containerd[1614]: time="2025-01-13T20:31:50.288742721Z" level=info msg="Start cni network conf syncer for default"
Jan 13 20:31:50.292032 containerd[1614]: time="2025-01-13T20:31:50.291239521Z" level=info msg="Start streaming server"
Jan 13 20:31:50.292032 containerd[1614]: time="2025-01-13T20:31:50.291619601Z" level=info msg="containerd successfully booted in 0.096559s"
Jan 13 20:31:50.291832 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 20:31:50.448685 sshd_keygen[1620]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 20:31:50.472236 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 20:31:50.484355 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 20:31:50.508359 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 20:31:50.508658 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 20:31:50.517636 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 20:31:50.549845 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 20:31:50.561576 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 20:31:50.565913 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 13 20:31:50.570954 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 20:31:50.628748 tar[1605]: linux-arm64/LICENSE
Jan 13 20:31:50.628748 tar[1605]: linux-arm64/README.md
Jan 13 20:31:50.642710 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 13 20:31:50.938493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:31:50.940100 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 20:31:50.942356 systemd[1]: Startup finished in 6.796s (kernel) + 4.687s (userspace) = 11.484s.
Jan 13 20:31:50.952768 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:31:51.566835 kubelet[1734]: E0113 20:31:51.566747 1734 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:31:51.571154 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:31:51.571580 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:32:01.822954 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:32:01.834566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:32:01.945574 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:32:01.957988 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:32:02.017882 kubelet[1759]: E0113 20:32:02.017796 1759 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:32:02.023402 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:32:02.024091 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:32:12.273774 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 20:32:12.280488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:32:12.397436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:32:12.397939 (kubelet)[1780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:32:12.459929 kubelet[1780]: E0113 20:32:12.459876 1780 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:32:12.463484 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:32:12.463684 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:32:22.479114 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 13 20:32:22.487559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:32:22.604501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:32:22.609000 (kubelet)[1801]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:32:22.665781 kubelet[1801]: E0113 20:32:22.665651 1801 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:32:22.669789 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:32:22.669968 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:32:32.728707 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 13 20:32:32.737593 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:32:32.848687 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:32:32.854607 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:32:32.916683 kubelet[1823]: E0113 20:32:32.916612 1823 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:32:32.919589 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:32:32.919774 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:32:35.467956 update_engine[1593]: I20250113 20:32:35.467304 1593 update_attempter.cc:509] Updating boot flags...
Jan 13 20:32:35.524379 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1842)
Jan 13 20:32:35.589491 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1844)
Jan 13 20:32:35.640342 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1844)
Jan 13 20:32:42.979298 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jan 13 20:32:42.986440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:32:43.126596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:32:43.127884 (kubelet)[1866]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:32:43.181457 kubelet[1866]: E0113 20:32:43.181388 1866 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:32:43.184322 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:32:43.184471 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:32:53.228891 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 13 20:32:53.235549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:32:53.357498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:32:53.361959 (kubelet)[1888]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:32:53.413399 kubelet[1888]: E0113 20:32:53.413335 1888 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:32:53.418431 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:32:53.418591 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:33:03.479024 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jan 13 20:33:03.495621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:33:03.603530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:33:03.614898 (kubelet)[1909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:33:03.669579 kubelet[1909]: E0113 20:33:03.669506 1909 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:33:03.672439 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:33:03.672614 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:33:13.729472 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jan 13 20:33:13.739491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:33:13.858726 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:33:13.863130 (kubelet)[1930]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:33:13.910051 kubelet[1930]: E0113 20:33:13.909977 1930 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:33:13.913601 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:33:13.914195 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:33:23.978735 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Jan 13 20:33:23.986587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:33:24.107574 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:33:24.125959 (kubelet)[1952]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:33:24.179072 kubelet[1952]: E0113 20:33:24.178974 1952 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:33:24.184680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:33:24.184915 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:33:34.229081 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Jan 13 20:33:34.241836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:33:34.383598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:33:34.385245 (kubelet)[1973]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:33:34.437358 kubelet[1973]: E0113 20:33:34.437286 1973 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:33:34.440556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:33:34.440747 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:33:42.199827 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 20:33:42.208742 systemd[1]: Started sshd@0-138.199.152.196:22-147.75.109.163:49590.service - OpenSSH per-connection server daemon (147.75.109.163:49590).
Jan 13 20:33:43.207033 sshd[1983]: Accepted publickey for core from 147.75.109.163 port 49590 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:33:43.208419 sshd-session[1983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:43.223294 systemd-logind[1591]: New session 1 of user core.
Jan 13 20:33:43.224876 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 20:33:43.232659 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 20:33:43.249328 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 20:33:43.255704 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 20:33:43.268707 (systemd)[1989]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 20:33:43.376510 systemd[1989]: Queued start job for default target default.target.
Jan 13 20:33:43.377346 systemd[1989]: Created slice app.slice - User Application Slice.
Jan 13 20:33:43.377374 systemd[1989]: Reached target paths.target - Paths.
Jan 13 20:33:43.377387 systemd[1989]: Reached target timers.target - Timers.
Jan 13 20:33:43.389481 systemd[1989]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 20:33:43.399620 systemd[1989]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 20:33:43.399684 systemd[1989]: Reached target sockets.target - Sockets.
Jan 13 20:33:43.399699 systemd[1989]: Reached target basic.target - Basic System.
Jan 13 20:33:43.399993 systemd[1989]: Reached target default.target - Main User Target.
Jan 13 20:33:43.400036 systemd[1989]: Startup finished in 124ms.
Jan 13 20:33:43.400512 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 20:33:43.411299 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 20:33:44.111693 systemd[1]: Started sshd@1-138.199.152.196:22-147.75.109.163:49592.service - OpenSSH per-connection server daemon (147.75.109.163:49592).
Jan 13 20:33:44.479337 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Jan 13 20:33:44.488860 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:33:44.617480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:33:44.639956 (kubelet)[2015]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:33:44.697733 kubelet[2015]: E0113 20:33:44.697640 2015 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:33:44.699964 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:33:44.700110 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:33:45.109515 sshd[2001]: Accepted publickey for core from 147.75.109.163 port 49592 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:33:45.111484 sshd-session[2001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:45.119167 systemd-logind[1591]: New session 2 of user core.
Jan 13 20:33:45.130377 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 20:33:45.797242 sshd[2025]: Connection closed by 147.75.109.163 port 49592
Jan 13 20:33:45.796324 sshd-session[2001]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:45.800923 systemd[1]: sshd@1-138.199.152.196:22-147.75.109.163:49592.service: Deactivated successfully.
Jan 13 20:33:45.805484 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 20:33:45.806112 systemd-logind[1591]: Session 2 logged out. Waiting for processes to exit.
Jan 13 20:33:45.808180 systemd-logind[1591]: Removed session 2.
Jan 13 20:33:45.961564 systemd[1]: Started sshd@2-138.199.152.196:22-147.75.109.163:49596.service - OpenSSH per-connection server daemon (147.75.109.163:49596).
Jan 13 20:33:46.934145 sshd[2030]: Accepted publickey for core from 147.75.109.163 port 49596 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:33:46.935686 sshd-session[2030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:46.941589 systemd-logind[1591]: New session 3 of user core.
Jan 13 20:33:46.953570 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 20:33:47.606473 sshd[2033]: Connection closed by 147.75.109.163 port 49596
Jan 13 20:33:47.607104 sshd-session[2030]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:47.610141 systemd-logind[1591]: Session 3 logged out. Waiting for processes to exit.
Jan 13 20:33:47.612476 systemd[1]: sshd@2-138.199.152.196:22-147.75.109.163:49596.service: Deactivated successfully.
Jan 13 20:33:47.615031 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 20:33:47.616560 systemd-logind[1591]: Removed session 3.
Jan 13 20:33:47.779715 systemd[1]: Started sshd@3-138.199.152.196:22-147.75.109.163:58474.service - OpenSSH per-connection server daemon (147.75.109.163:58474).
Jan 13 20:33:48.773930 sshd[2038]: Accepted publickey for core from 147.75.109.163 port 58474 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:33:48.775916 sshd-session[2038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:48.783313 systemd-logind[1591]: New session 4 of user core.
Jan 13 20:33:48.789412 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 20:33:49.461449 sshd[2041]: Connection closed by 147.75.109.163 port 58474
Jan 13 20:33:49.462420 sshd-session[2038]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:49.466494 systemd-logind[1591]: Session 4 logged out. Waiting for processes to exit.
Jan 13 20:33:49.467753 systemd[1]: sshd@3-138.199.152.196:22-147.75.109.163:58474.service: Deactivated successfully.
Jan 13 20:33:49.472362 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 20:33:49.473969 systemd-logind[1591]: Removed session 4.
Jan 13 20:33:49.627847 systemd[1]: Started sshd@4-138.199.152.196:22-147.75.109.163:58490.service - OpenSSH per-connection server daemon (147.75.109.163:58490).
Jan 13 20:33:50.607401 sshd[2046]: Accepted publickey for core from 147.75.109.163 port 58490 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:33:50.609196 sshd-session[2046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:50.614255 systemd-logind[1591]: New session 5 of user core.
Jan 13 20:33:50.624759 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 20:33:51.141751 sudo[2050]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 20:33:51.142106 sudo[2050]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:33:51.157657 sudo[2050]: pam_unix(sudo:session): session closed for user root
Jan 13 20:33:51.317123 sshd[2049]: Connection closed by 147.75.109.163 port 58490
Jan 13 20:33:51.318271 sshd-session[2046]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:51.323327 systemd[1]: sshd@4-138.199.152.196:22-147.75.109.163:58490.service: Deactivated successfully.
Jan 13 20:33:51.327650 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 20:33:51.328545 systemd-logind[1591]: Session 5 logged out. Waiting for processes to exit.
Jan 13 20:33:51.329576 systemd-logind[1591]: Removed session 5.
Jan 13 20:33:51.485646 systemd[1]: Started sshd@5-138.199.152.196:22-147.75.109.163:58506.service - OpenSSH per-connection server daemon (147.75.109.163:58506).
Jan 13 20:33:52.461752 sshd[2055]: Accepted publickey for core from 147.75.109.163 port 58506 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:33:52.463946 sshd-session[2055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:52.469830 systemd-logind[1591]: New session 6 of user core.
Jan 13 20:33:52.476805 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 20:33:52.980072 sudo[2060]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 20:33:52.980844 sudo[2060]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:33:52.985038 sudo[2060]: pam_unix(sudo:session): session closed for user root
Jan 13 20:33:52.991448 sudo[2059]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 13 20:33:52.991755 sudo[2059]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:33:53.014788 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:33:53.048388 augenrules[2082]: No rules
Jan 13 20:33:53.049691 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:33:53.050011 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:33:53.052350 sudo[2059]: pam_unix(sudo:session): session closed for user root
Jan 13 20:33:53.210257 sshd[2058]: Connection closed by 147.75.109.163 port 58506
Jan 13 20:33:53.210997 sshd-session[2055]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:53.216988 systemd[1]: sshd@5-138.199.152.196:22-147.75.109.163:58506.service: Deactivated successfully.
Jan 13 20:33:53.218456 systemd-logind[1591]: Session 6 logged out. Waiting for processes to exit.
Jan 13 20:33:53.220646 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 20:33:53.221587 systemd-logind[1591]: Removed session 6.
Jan 13 20:33:53.375824 systemd[1]: Started sshd@6-138.199.152.196:22-147.75.109.163:58514.service - OpenSSH per-connection server daemon (147.75.109.163:58514).
Jan 13 20:33:54.356514 sshd[2091]: Accepted publickey for core from 147.75.109.163 port 58514 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:33:54.358551 sshd-session[2091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:54.365488 systemd-logind[1591]: New session 7 of user core.
Jan 13 20:33:54.375742 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 20:33:54.729039 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Jan 13 20:33:54.740783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:33:54.867498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:33:54.876950 sudo[2108]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 20:33:54.877248 sudo[2108]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:33:54.878729 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:33:54.931858 kubelet[2107]: E0113 20:33:54.931724 2107 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:33:54.937563 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:33:54.937712 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:33:55.213691 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 13 20:33:55.214125 (dockerd)[2133]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 13 20:33:55.450350 dockerd[2133]: time="2025-01-13T20:33:55.450295953Z" level=info msg="Starting up"
Jan 13 20:33:55.582050 dockerd[2133]: time="2025-01-13T20:33:55.581712691Z" level=info msg="Loading containers: start."
Jan 13 20:33:55.756354 kernel: Initializing XFRM netlink socket
Jan 13 20:33:55.845461 systemd-networkd[1238]: docker0: Link UP
Jan 13 20:33:55.888279 dockerd[2133]: time="2025-01-13T20:33:55.887394364Z" level=info msg="Loading containers: done."
Jan 13 20:33:55.901691 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2974114601-merged.mount: Deactivated successfully.
Jan 13 20:33:55.904428 dockerd[2133]: time="2025-01-13T20:33:55.904376034Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 13 20:33:55.904524 dockerd[2133]: time="2025-01-13T20:33:55.904485674Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Jan 13 20:33:55.904629 dockerd[2133]: time="2025-01-13T20:33:55.904597154Z" level=info msg="Daemon has completed initialization"
Jan 13 20:33:55.948499 dockerd[2133]: time="2025-01-13T20:33:55.947566509Z" level=info msg="API listen on /run/docker.sock"
Jan 13 20:33:55.948428 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 13 20:33:57.170283 containerd[1614]: time="2025-01-13T20:33:57.170090130Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Jan 13 20:33:57.823900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2880115383.mount: Deactivated successfully.
Jan 13 20:33:59.127240 containerd[1614]: time="2025-01-13T20:33:59.125449681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:33:59.127240 containerd[1614]: time="2025-01-13T20:33:59.126970597Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201342"
Jan 13 20:33:59.128441 containerd[1614]: time="2025-01-13T20:33:59.128399874Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:33:59.132192 containerd[1614]: time="2025-01-13T20:33:59.132147384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:33:59.135022 containerd[1614]: time="2025-01-13T20:33:59.134970816Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 1.964827006s"
Jan 13 20:33:59.135184 containerd[1614]: time="2025-01-13T20:33:59.135162816Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\""
Jan 13 20:33:59.160486 containerd[1614]: time="2025-01-13T20:33:59.160450589Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Jan 13 20:34:00.989253 containerd[1614]: time="2025-01-13T20:34:00.988804946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:00.990786 containerd[1614]: time="2025-01-13T20:34:00.990448022Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381317"
Jan 13 20:34:00.992034 containerd[1614]: time="2025-01-13T20:34:00.991930818Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:00.996334 containerd[1614]: time="2025-01-13T20:34:00.996245647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:01.000219 containerd[1614]: time="2025-01-13T20:34:01.000154597Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.839430089s"
Jan 13 20:34:01.000346 containerd[1614]: time="2025-01-13T20:34:01.000241877Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\""
Jan 13 20:34:01.027238 containerd[1614]: time="2025-01-13T20:34:01.027124049Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Jan 13 20:34:02.141487 containerd[1614]: time="2025-01-13T20:34:02.141431172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:02.143786 containerd[1614]: time="2025-01-13T20:34:02.143728646Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765660"
Jan 13 20:34:02.147237 containerd[1614]: time="2025-01-13T20:34:02.145939161Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:02.149300 containerd[1614]: time="2025-01-13T20:34:02.149247792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:02.150979 containerd[1614]: time="2025-01-13T20:34:02.150935148Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.123764539s"
Jan 13 20:34:02.151074 containerd[1614]: time="2025-01-13T20:34:02.151022628Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\""
Jan 13 20:34:02.173916 containerd[1614]: time="2025-01-13T20:34:02.173870652Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Jan 13 20:34:03.189860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount992321676.mount: Deactivated successfully.
Jan 13 20:34:03.506251 containerd[1614]: time="2025-01-13T20:34:03.506167803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:03.507565 containerd[1614]: time="2025-01-13T20:34:03.507528959Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25274003"
Jan 13 20:34:03.508449 containerd[1614]: time="2025-01-13T20:34:03.508370517Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:03.510537 containerd[1614]: time="2025-01-13T20:34:03.510482272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:03.511535 containerd[1614]: time="2025-01-13T20:34:03.511381270Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.337459298s"
Jan 13 20:34:03.511535 containerd[1614]: time="2025-01-13T20:34:03.511415630Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\""
Jan 13 20:34:03.534981 containerd[1614]: time="2025-01-13T20:34:03.534934053Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 20:34:04.162277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2170345175.mount: Deactivated successfully.
Jan 13 20:34:04.886515 containerd[1614]: time="2025-01-13T20:34:04.886438212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:04.888713 containerd[1614]: time="2025-01-13T20:34:04.888192808Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461"
Jan 13 20:34:04.890415 containerd[1614]: time="2025-01-13T20:34:04.889767764Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:04.893148 containerd[1614]: time="2025-01-13T20:34:04.893100836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:04.894474 containerd[1614]: time="2025-01-13T20:34:04.894444313Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.35946742s"
Jan 13 20:34:04.894578 containerd[1614]: time="2025-01-13T20:34:04.894563873Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 13 20:34:04.917530 containerd[1614]: time="2025-01-13T20:34:04.917486979Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 13 20:34:04.978757 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Jan 13 20:34:04.987535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:34:05.127657 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:34:05.142953 (kubelet)[2475]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:34:05.198044 kubelet[2475]: E0113 20:34:05.197965 2475 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:34:05.201780 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:34:05.201970 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:34:05.446544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3430157136.mount: Deactivated successfully.
Jan 13 20:34:05.455194 containerd[1614]: time="2025-01-13T20:34:05.455138259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:05.456656 containerd[1614]: time="2025-01-13T20:34:05.456594855Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841"
Jan 13 20:34:05.457978 containerd[1614]: time="2025-01-13T20:34:05.457922652Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:05.460433 containerd[1614]: time="2025-01-13T20:34:05.460357327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:05.461398 containerd[1614]: time="2025-01-13T20:34:05.461354404Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 543.805306ms"
Jan 13 20:34:05.461398 containerd[1614]: time="2025-01-13T20:34:05.461394684Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jan 13 20:34:05.486367 containerd[1614]: time="2025-01-13T20:34:05.486330507Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jan 13 20:34:06.122128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2082570054.mount: Deactivated successfully.
Jan 13 20:34:08.049624 containerd[1614]: time="2025-01-13T20:34:08.048389776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:08.049624 containerd[1614]: time="2025-01-13T20:34:08.049571014Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200866"
Jan 13 20:34:08.050333 containerd[1614]: time="2025-01-13T20:34:08.050294052Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:08.054834 containerd[1614]: time="2025-01-13T20:34:08.054774443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:34:08.056048 containerd[1614]: time="2025-01-13T20:34:08.056005880Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.569442093s"
Jan 13 20:34:08.056048 containerd[1614]: time="2025-01-13T20:34:08.056047080Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Jan 13 20:34:12.695824 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:34:12.704879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:34:12.733423 systemd[1]: Reloading requested from client PID 2605 ('systemctl') (unit session-7.scope)...
Jan 13 20:34:12.733439 systemd[1]: Reloading...
Jan 13 20:34:12.852381 zram_generator::config[2649]: No configuration found.
Jan 13 20:34:12.961760 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:34:13.024995 systemd[1]: Reloading finished in 291 ms.
Jan 13 20:34:13.076413 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 13 20:34:13.076493 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 13 20:34:13.076960 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:34:13.084858 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:34:13.194453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:34:13.209785 (kubelet)[2705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:34:13.263965 kubelet[2705]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:34:13.265243 kubelet[2705]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:34:13.265243 kubelet[2705]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:34:13.265243 kubelet[2705]: I0113 20:34:13.264465 2705 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:34:14.461972 kubelet[2705]: I0113 20:34:14.461930 2705 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 13 20:34:14.462516 kubelet[2705]: I0113 20:34:14.462496 2705 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:34:14.462830 kubelet[2705]: I0113 20:34:14.462810 2705 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 13 20:34:14.483258 kubelet[2705]: I0113 20:34:14.483201 2705 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:34:14.483498 kubelet[2705]: E0113 20:34:14.483474 2705 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://138.199.152.196:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 138.199.152.196:6443: connect: connection refused
Jan 13 20:34:14.494195 kubelet[2705]: I0113 20:34:14.494148 2705 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 20:34:14.494815 kubelet[2705]: I0113 20:34:14.494749 2705 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:34:14.497274 kubelet[2705]: I0113 20:34:14.495086 2705 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 20:34:14.497274 kubelet[2705]: I0113 20:34:14.495112 2705 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:34:14.497274 kubelet[2705]: I0113 20:34:14.495122 2705 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 20:34:14.499119 kubelet[2705]:
I0113 20:34:14.498667 2705 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:34:14.502294 kubelet[2705]: I0113 20:34:14.502261 2705 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:34:14.502444 kubelet[2705]: I0113 20:34:14.502434 2705 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:34:14.502513 kubelet[2705]: I0113 20:34:14.502503 2705 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:34:14.502813 kubelet[2705]: I0113 20:34:14.502557 2705 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:34:14.504904 kubelet[2705]: W0113 20:34:14.504849 2705 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://138.199.152.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-6-5d4da4afb6&limit=500&resourceVersion=0": dial tcp 138.199.152.196:6443: connect: connection refused Jan 13 20:34:14.505050 kubelet[2705]: E0113 20:34:14.505037 2705 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.152.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-6-5d4da4afb6&limit=500&resourceVersion=0": dial tcp 138.199.152.196:6443: connect: connection refused Jan 13 20:34:14.505199 kubelet[2705]: W0113 20:34:14.505171 2705 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://138.199.152.196:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.152.196:6443: connect: connection refused Jan 13 20:34:14.505308 kubelet[2705]: E0113 20:34:14.505296 2705 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.152.196:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.152.196:6443: connect: connection refused Jan 13 20:34:14.507248 kubelet[2705]: I0113 
20:34:14.505497 2705 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:34:14.507248 kubelet[2705]: I0113 20:34:14.506085 2705 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:34:14.507248 kubelet[2705]: W0113 20:34:14.506332 2705 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 20:34:14.507742 kubelet[2705]: I0113 20:34:14.507721 2705 server.go:1256] "Started kubelet" Jan 13 20:34:14.509815 kubelet[2705]: I0113 20:34:14.509737 2705 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:34:14.510699 kubelet[2705]: I0113 20:34:14.510670 2705 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:34:14.511447 kubelet[2705]: I0113 20:34:14.511424 2705 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:34:14.511980 kubelet[2705]: I0113 20:34:14.511937 2705 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:34:14.514562 kubelet[2705]: I0113 20:34:14.514519 2705 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:34:14.515719 kubelet[2705]: E0113 20:34:14.515656 2705 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.152.196:6443/api/v1/namespaces/default/events\": dial tcp 138.199.152.196:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-0-6-5d4da4afb6.181a5acfb8aafd5c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-6-5d4da4afb6,UID:ci-4152-2-0-6-5d4da4afb6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-6-5d4da4afb6,},FirstTimestamp:2025-01-13 20:34:14.507691356 +0000 UTC m=+1.293201159,LastTimestamp:2025-01-13 20:34:14.507691356 +0000 UTC m=+1.293201159,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-6-5d4da4afb6,}" Jan 13 20:34:14.521808 kubelet[2705]: I0113 20:34:14.521767 2705 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:34:14.522413 kubelet[2705]: I0113 20:34:14.522376 2705 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:34:14.522468 kubelet[2705]: I0113 20:34:14.522440 2705 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:34:14.522558 kubelet[2705]: E0113 20:34:14.522545 2705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.152.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-6-5d4da4afb6?timeout=10s\": dial tcp 138.199.152.196:6443: connect: connection refused" interval="200ms" Jan 13 20:34:14.523753 kubelet[2705]: W0113 20:34:14.523710 2705 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://138.199.152.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.152.196:6443: connect: connection refused Jan 13 20:34:14.523895 kubelet[2705]: E0113 20:34:14.523884 2705 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.152.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.152.196:6443: connect: connection refused Jan 13 20:34:14.524263 kubelet[2705]: I0113 20:34:14.524243 2705 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:34:14.524450 kubelet[2705]: I0113 20:34:14.524430 2705 
factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:34:14.526150 kubelet[2705]: I0113 20:34:14.526126 2705 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:34:14.537864 kubelet[2705]: I0113 20:34:14.537821 2705 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:34:14.538169 kubelet[2705]: E0113 20:34:14.538148 2705 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:34:14.539162 kubelet[2705]: I0113 20:34:14.539116 2705 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 20:34:14.539162 kubelet[2705]: I0113 20:34:14.539148 2705 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:34:14.539162 kubelet[2705]: I0113 20:34:14.539166 2705 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:34:14.539283 kubelet[2705]: E0113 20:34:14.539236 2705 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:34:14.552235 kubelet[2705]: W0113 20:34:14.552168 2705 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://138.199.152.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.152.196:6443: connect: connection refused Jan 13 20:34:14.552486 kubelet[2705]: E0113 20:34:14.552259 2705 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.152.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.152.196:6443: connect: connection refused Jan 13 
20:34:14.573528 kubelet[2705]: I0113 20:34:14.573489 2705 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:34:14.573528 kubelet[2705]: I0113 20:34:14.573519 2705 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:34:14.573528 kubelet[2705]: I0113 20:34:14.573542 2705 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:34:14.577444 kubelet[2705]: I0113 20:34:14.577356 2705 policy_none.go:49] "None policy: Start" Jan 13 20:34:14.579055 kubelet[2705]: I0113 20:34:14.578580 2705 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:34:14.579055 kubelet[2705]: I0113 20:34:14.578665 2705 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:34:14.587535 kubelet[2705]: I0113 20:34:14.587289 2705 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:34:14.587933 kubelet[2705]: I0113 20:34:14.587771 2705 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:34:14.589779 kubelet[2705]: E0113 20:34:14.589756 2705 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-0-6-5d4da4afb6\" not found" Jan 13 20:34:14.625474 kubelet[2705]: I0113 20:34:14.625406 2705 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:14.626241 kubelet[2705]: E0113 20:34:14.626152 2705 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.152.196:6443/api/v1/nodes\": dial tcp 138.199.152.196:6443: connect: connection refused" node="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:14.639517 kubelet[2705]: I0113 20:34:14.639462 2705 topology_manager.go:215] "Topology Admit Handler" podUID="74fb0c1030837c48a9e91459776c7b69" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:14.642341 kubelet[2705]: I0113 20:34:14.642062 2705 topology_manager.go:215] 
"Topology Admit Handler" podUID="3f1a157a101636aacb254c3aa32d87a2" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:14.645031 kubelet[2705]: I0113 20:34:14.644840 2705 topology_manager.go:215] "Topology Admit Handler" podUID="deface5525ac7552237e2547399829da" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:14.723580 kubelet[2705]: I0113 20:34:14.723256 2705 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74fb0c1030837c48a9e91459776c7b69-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-6-5d4da4afb6\" (UID: \"74fb0c1030837c48a9e91459776c7b69\") " pod="kube-system/kube-apiserver-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:14.723580 kubelet[2705]: I0113 20:34:14.723331 2705 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74fb0c1030837c48a9e91459776c7b69-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-6-5d4da4afb6\" (UID: \"74fb0c1030837c48a9e91459776c7b69\") " pod="kube-system/kube-apiserver-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:14.723580 kubelet[2705]: E0113 20:34:14.723345 2705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.152.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-6-5d4da4afb6?timeout=10s\": dial tcp 138.199.152.196:6443: connect: connection refused" interval="400ms" Jan 13 20:34:14.723580 kubelet[2705]: I0113 20:34:14.723361 2705 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f1a157a101636aacb254c3aa32d87a2-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-6-5d4da4afb6\" (UID: \"3f1a157a101636aacb254c3aa32d87a2\") " 
pod="kube-system/kube-controller-manager-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:14.723580 kubelet[2705]: I0113 20:34:14.723433 2705 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f1a157a101636aacb254c3aa32d87a2-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-6-5d4da4afb6\" (UID: \"3f1a157a101636aacb254c3aa32d87a2\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:14.723965 kubelet[2705]: I0113 20:34:14.723484 2705 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f1a157a101636aacb254c3aa32d87a2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-6-5d4da4afb6\" (UID: \"3f1a157a101636aacb254c3aa32d87a2\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:14.723965 kubelet[2705]: I0113 20:34:14.723542 2705 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/deface5525ac7552237e2547399829da-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-6-5d4da4afb6\" (UID: \"deface5525ac7552237e2547399829da\") " pod="kube-system/kube-scheduler-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:14.723965 kubelet[2705]: I0113 20:34:14.723598 2705 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74fb0c1030837c48a9e91459776c7b69-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-6-5d4da4afb6\" (UID: \"74fb0c1030837c48a9e91459776c7b69\") " pod="kube-system/kube-apiserver-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:14.723965 kubelet[2705]: I0113 20:34:14.723662 2705 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/3f1a157a101636aacb254c3aa32d87a2-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-6-5d4da4afb6\" (UID: \"3f1a157a101636aacb254c3aa32d87a2\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:14.723965 kubelet[2705]: I0113 20:34:14.723708 2705 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f1a157a101636aacb254c3aa32d87a2-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-6-5d4da4afb6\" (UID: \"3f1a157a101636aacb254c3aa32d87a2\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:14.829407 kubelet[2705]: I0113 20:34:14.829343 2705 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:14.829936 kubelet[2705]: E0113 20:34:14.829918 2705 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.152.196:6443/api/v1/nodes\": dial tcp 138.199.152.196:6443: connect: connection refused" node="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:14.951814 containerd[1614]: time="2025-01-13T20:34:14.951766555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-6-5d4da4afb6,Uid:74fb0c1030837c48a9e91459776c7b69,Namespace:kube-system,Attempt:0,}" Jan 13 20:34:14.955093 containerd[1614]: time="2025-01-13T20:34:14.955055788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-6-5d4da4afb6,Uid:3f1a157a101636aacb254c3aa32d87a2,Namespace:kube-system,Attempt:0,}" Jan 13 20:34:14.958370 containerd[1614]: time="2025-01-13T20:34:14.958332502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-6-5d4da4afb6,Uid:deface5525ac7552237e2547399829da,Namespace:kube-system,Attempt:0,}" Jan 13 20:34:15.124800 kubelet[2705]: E0113 20:34:15.124720 2705 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://138.199.152.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-6-5d4da4afb6?timeout=10s\": dial tcp 138.199.152.196:6443: connect: connection refused" interval="800ms" Jan 13 20:34:15.233029 kubelet[2705]: I0113 20:34:15.232485 2705 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:15.233029 kubelet[2705]: E0113 20:34:15.232992 2705 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.152.196:6443/api/v1/nodes\": dial tcp 138.199.152.196:6443: connect: connection refused" node="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:15.330446 kubelet[2705]: E0113 20:34:15.330400 2705 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.152.196:6443/api/v1/namespaces/default/events\": dial tcp 138.199.152.196:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-0-6-5d4da4afb6.181a5acfb8aafd5c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-6-5d4da4afb6,UID:ci-4152-2-0-6-5d4da4afb6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-6-5d4da4afb6,},FirstTimestamp:2025-01-13 20:34:14.507691356 +0000 UTC m=+1.293201159,LastTimestamp:2025-01-13 20:34:14.507691356 +0000 UTC m=+1.293201159,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-6-5d4da4afb6,}" Jan 13 20:34:15.422181 kubelet[2705]: W0113 20:34:15.422021 2705 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://138.199.152.196:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.152.196:6443: connect: connection refused Jan 13 20:34:15.422181 kubelet[2705]: E0113 20:34:15.422088 2705 
reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.152.196:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.152.196:6443: connect: connection refused Jan 13 20:34:15.448497 kubelet[2705]: W0113 20:34:15.448269 2705 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://138.199.152.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.152.196:6443: connect: connection refused Jan 13 20:34:15.448497 kubelet[2705]: E0113 20:34:15.448333 2705 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.152.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.152.196:6443: connect: connection refused Jan 13 20:34:15.452946 kubelet[2705]: W0113 20:34:15.452855 2705 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://138.199.152.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-6-5d4da4afb6&limit=500&resourceVersion=0": dial tcp 138.199.152.196:6443: connect: connection refused Jan 13 20:34:15.452946 kubelet[2705]: E0113 20:34:15.452922 2705 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.152.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-6-5d4da4afb6&limit=500&resourceVersion=0": dial tcp 138.199.152.196:6443: connect: connection refused Jan 13 20:34:15.529566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3666652234.mount: Deactivated successfully. 
Jan 13 20:34:15.536802 containerd[1614]: time="2025-01-13T20:34:15.536717586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:34:15.539260 containerd[1614]: time="2025-01-13T20:34:15.539181622Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Jan 13 20:34:15.541332 containerd[1614]: time="2025-01-13T20:34:15.541257098Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:34:15.543721 containerd[1614]: time="2025-01-13T20:34:15.543646134Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:34:15.546454 containerd[1614]: time="2025-01-13T20:34:15.546349608Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:34:15.549663 containerd[1614]: time="2025-01-13T20:34:15.549526323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:34:15.551314 containerd[1614]: time="2025-01-13T20:34:15.551248599Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:34:15.555809 containerd[1614]: time="2025-01-13T20:34:15.555737751Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 603.879157ms" Jan 13 20:34:15.561630 containerd[1614]: time="2025-01-13T20:34:15.560703862Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:34:15.567296 containerd[1614]: time="2025-01-13T20:34:15.567217610Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 608.790668ms" Jan 13 20:34:15.570997 containerd[1614]: time="2025-01-13T20:34:15.570946523Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 615.805575ms" Jan 13 20:34:15.705842 containerd[1614]: time="2025-01-13T20:34:15.705559953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:15.705842 containerd[1614]: time="2025-01-13T20:34:15.705666873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:15.705842 containerd[1614]: time="2025-01-13T20:34:15.705687553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:15.706879 containerd[1614]: time="2025-01-13T20:34:15.706706311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:15.709160 containerd[1614]: time="2025-01-13T20:34:15.708995106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:15.709160 containerd[1614]: time="2025-01-13T20:34:15.709162346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:15.709404 containerd[1614]: time="2025-01-13T20:34:15.709193106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:15.709404 containerd[1614]: time="2025-01-13T20:34:15.709345306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:15.713669 containerd[1614]: time="2025-01-13T20:34:15.713311698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:15.713669 containerd[1614]: time="2025-01-13T20:34:15.713375898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:15.713669 containerd[1614]: time="2025-01-13T20:34:15.713387458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:15.713669 containerd[1614]: time="2025-01-13T20:34:15.713506498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:15.794195 containerd[1614]: time="2025-01-13T20:34:15.794125788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-6-5d4da4afb6,Uid:3f1a157a101636aacb254c3aa32d87a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c65363fe7d6876d91aa0db7580a87419911b851106a6974c8ab92191753bb327\"" Jan 13 20:34:15.800595 containerd[1614]: time="2025-01-13T20:34:15.800555376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-6-5d4da4afb6,Uid:deface5525ac7552237e2547399829da,Namespace:kube-system,Attempt:0,} returns sandbox id \"54ee428cd9e95da88a739cc851efafde4f332ef83deec06aa0a92c553e1f6ed5\"" Jan 13 20:34:15.804230 containerd[1614]: time="2025-01-13T20:34:15.802705452Z" level=info msg="CreateContainer within sandbox \"c65363fe7d6876d91aa0db7580a87419911b851106a6974c8ab92191753bb327\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 20:34:15.806556 containerd[1614]: time="2025-01-13T20:34:15.806521405Z" level=info msg="CreateContainer within sandbox \"54ee428cd9e95da88a739cc851efafde4f332ef83deec06aa0a92c553e1f6ed5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 20:34:15.807387 containerd[1614]: time="2025-01-13T20:34:15.807360404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-6-5d4da4afb6,Uid:74fb0c1030837c48a9e91459776c7b69,Namespace:kube-system,Attempt:0,} returns sandbox id \"03a0cec410bb4ff2b2f2c10ba63d05c855686f2ebe59e480e69646a655f2f552\"" Jan 13 20:34:15.811846 containerd[1614]: time="2025-01-13T20:34:15.811794636Z" level=info msg="CreateContainer within sandbox \"03a0cec410bb4ff2b2f2c10ba63d05c855686f2ebe59e480e69646a655f2f552\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 20:34:15.832356 containerd[1614]: time="2025-01-13T20:34:15.832311797Z" level=info msg="CreateContainer within sandbox 
\"54ee428cd9e95da88a739cc851efafde4f332ef83deec06aa0a92c553e1f6ed5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a83cf76a0b85f7c27302bc7a4137ab6f0564869e4fc764f4597b777df0b8264a\"" Jan 13 20:34:15.833803 containerd[1614]: time="2025-01-13T20:34:15.833744915Z" level=info msg="CreateContainer within sandbox \"c65363fe7d6876d91aa0db7580a87419911b851106a6974c8ab92191753bb327\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7039769fd8ce3640d626be1e8f0cadc2c75371afff9867e48f67a0ca3cceb428\"" Jan 13 20:34:15.834157 containerd[1614]: time="2025-01-13T20:34:15.834134354Z" level=info msg="StartContainer for \"a83cf76a0b85f7c27302bc7a4137ab6f0564869e4fc764f4597b777df0b8264a\"" Jan 13 20:34:15.834800 containerd[1614]: time="2025-01-13T20:34:15.834777433Z" level=info msg="StartContainer for \"7039769fd8ce3640d626be1e8f0cadc2c75371afff9867e48f67a0ca3cceb428\"" Jan 13 20:34:15.837234 containerd[1614]: time="2025-01-13T20:34:15.835840831Z" level=info msg="CreateContainer within sandbox \"03a0cec410bb4ff2b2f2c10ba63d05c855686f2ebe59e480e69646a655f2f552\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4d9b8da468a89fe99699a056624e4bfd1ffe20bdcb811ee2d7ec22619164a041\"" Jan 13 20:34:15.837792 containerd[1614]: time="2025-01-13T20:34:15.837767547Z" level=info msg="StartContainer for \"4d9b8da468a89fe99699a056624e4bfd1ffe20bdcb811ee2d7ec22619164a041\"" Jan 13 20:34:15.923957 containerd[1614]: time="2025-01-13T20:34:15.923749548Z" level=info msg="StartContainer for \"7039769fd8ce3640d626be1e8f0cadc2c75371afff9867e48f67a0ca3cceb428\" returns successfully" Jan 13 20:34:15.925261 kubelet[2705]: E0113 20:34:15.925232 2705 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.152.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-6-5d4da4afb6?timeout=10s\": dial tcp 138.199.152.196:6443: connect: connection refused" 
interval="1.6s" Jan 13 20:34:15.965403 containerd[1614]: time="2025-01-13T20:34:15.963384874Z" level=info msg="StartContainer for \"4d9b8da468a89fe99699a056624e4bfd1ffe20bdcb811ee2d7ec22619164a041\" returns successfully" Jan 13 20:34:15.979761 containerd[1614]: time="2025-01-13T20:34:15.979635844Z" level=info msg="StartContainer for \"a83cf76a0b85f7c27302bc7a4137ab6f0564869e4fc764f4597b777df0b8264a\" returns successfully" Jan 13 20:34:16.033307 kubelet[2705]: W0113 20:34:16.033237 2705 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://138.199.152.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.152.196:6443: connect: connection refused Jan 13 20:34:16.033307 kubelet[2705]: E0113 20:34:16.033303 2705 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.152.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.152.196:6443: connect: connection refused Jan 13 20:34:16.036626 kubelet[2705]: I0113 20:34:16.036570 2705 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:16.037019 kubelet[2705]: E0113 20:34:16.036994 2705 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.152.196:6443/api/v1/nodes\": dial tcp 138.199.152.196:6443: connect: connection refused" node="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:17.640738 kubelet[2705]: I0113 20:34:17.640703 2705 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:18.436429 kubelet[2705]: I0113 20:34:18.436381 2705 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:18.508221 kubelet[2705]: I0113 20:34:18.506490 2705 apiserver.go:52] "Watching apiserver" Jan 13 20:34:18.523094 kubelet[2705]: 
I0113 20:34:18.523036 2705 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:34:18.536169 kubelet[2705]: E0113 20:34:18.536119 2705 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 13 20:34:18.624042 kubelet[2705]: E0113 20:34:18.623963 2705 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-0-6-5d4da4afb6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:21.403531 systemd[1]: Reloading requested from client PID 2977 ('systemctl') (unit session-7.scope)... Jan 13 20:34:21.403550 systemd[1]: Reloading... Jan 13 20:34:21.482234 zram_generator::config[3017]: No configuration found. Jan 13 20:34:21.605249 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:34:21.694668 systemd[1]: Reloading finished in 290 ms. Jan 13 20:34:21.730720 kubelet[2705]: I0113 20:34:21.730449 2705 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:34:21.731286 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:34:21.744761 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 20:34:21.745315 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:34:21.754192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:34:21.898296 (kubelet)[3072]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:34:21.898306 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 20:34:21.959988 kubelet[3072]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:34:21.959988 kubelet[3072]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:34:21.959988 kubelet[3072]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:34:21.959988 kubelet[3072]: I0113 20:34:21.959321 3072 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:34:21.966461 kubelet[3072]: I0113 20:34:21.965193 3072 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 20:34:21.966461 kubelet[3072]: I0113 20:34:21.965246 3072 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:34:21.966461 kubelet[3072]: I0113 20:34:21.966316 3072 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 20:34:21.969145 kubelet[3072]: I0113 20:34:21.969090 3072 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 20:34:21.972557 kubelet[3072]: I0113 20:34:21.972515 3072 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:34:21.986858 kubelet[3072]: I0113 20:34:21.986790 3072 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 20:34:21.987371 kubelet[3072]: I0113 20:34:21.987343 3072 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:34:21.988673 kubelet[3072]: I0113 20:34:21.987992 3072 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:34:21.988673 kubelet[3072]: I0113 20:34:21.988048 3072 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:34:21.988673 kubelet[3072]: I0113 20:34:21.988058 3072 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:34:21.988673 kubelet[3072]: 
I0113 20:34:21.988097 3072 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:34:21.988673 kubelet[3072]: I0113 20:34:21.988352 3072 kubelet.go:396] "Attempting to sync node with API server" Jan 13 20:34:21.988673 kubelet[3072]: I0113 20:34:21.988372 3072 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:34:21.988673 kubelet[3072]: I0113 20:34:21.988440 3072 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:34:21.988989 kubelet[3072]: I0113 20:34:21.988457 3072 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:34:21.994238 kubelet[3072]: I0113 20:34:21.994078 3072 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:34:21.994368 kubelet[3072]: I0113 20:34:21.994288 3072 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:34:21.994698 kubelet[3072]: I0113 20:34:21.994670 3072 server.go:1256] "Started kubelet" Jan 13 20:34:21.998846 kubelet[3072]: E0113 20:34:21.998821 3072 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:34:21.999234 kubelet[3072]: I0113 20:34:21.999084 3072 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:34:22.000139 kubelet[3072]: I0113 20:34:22.000116 3072 server.go:461] "Adding debug handlers to kubelet server" Jan 13 20:34:22.001402 kubelet[3072]: I0113 20:34:22.001378 3072 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:34:22.001589 kubelet[3072]: I0113 20:34:22.001571 3072 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:34:22.002302 kubelet[3072]: I0113 20:34:22.002262 3072 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:34:22.013880 kubelet[3072]: I0113 20:34:22.013282 3072 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:34:22.016450 kubelet[3072]: I0113 20:34:22.016424 3072 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 20:34:22.016848 kubelet[3072]: I0113 20:34:22.016832 3072 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 20:34:22.019007 kubelet[3072]: I0113 20:34:22.018986 3072 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:34:22.022416 kubelet[3072]: I0113 20:34:22.022391 3072 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:34:22.022649 kubelet[3072]: I0113 20:34:22.022620 3072 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:34:22.022734 kubelet[3072]: I0113 20:34:22.022724 3072 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 20:34:22.022844 kubelet[3072]: E0113 20:34:22.022835 3072 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:34:22.061059 kubelet[3072]: I0113 20:34:22.054736 3072 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:34:22.061059 kubelet[3072]: I0113 20:34:22.054844 3072 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:34:22.069389 kubelet[3072]: I0113 20:34:22.069353 3072 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:34:22.121872 kubelet[3072]: I0113 20:34:22.121835 3072 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:22.124086 kubelet[3072]: E0113 20:34:22.123482 3072 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 20:34:22.139831 kubelet[3072]: I0113 20:34:22.139182 3072 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:22.140826 kubelet[3072]: I0113 20:34:22.140803 3072 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:22.160902 kubelet[3072]: I0113 20:34:22.160406 3072 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:34:22.160902 kubelet[3072]: I0113 20:34:22.160434 3072 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:34:22.160902 kubelet[3072]: I0113 20:34:22.160452 3072 
state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:34:22.160902 kubelet[3072]: I0113 20:34:22.160612 3072 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 20:34:22.160902 kubelet[3072]: I0113 20:34:22.160636 3072 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 20:34:22.160902 kubelet[3072]: I0113 20:34:22.160644 3072 policy_none.go:49] "None policy: Start" Jan 13 20:34:22.162928 kubelet[3072]: I0113 20:34:22.161698 3072 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:34:22.162928 kubelet[3072]: I0113 20:34:22.161730 3072 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:34:22.162928 kubelet[3072]: I0113 20:34:22.161872 3072 state_mem.go:75] "Updated machine memory state" Jan 13 20:34:22.165649 kubelet[3072]: I0113 20:34:22.165032 3072 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:34:22.170333 kubelet[3072]: I0113 20:34:22.170295 3072 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:34:22.325520 kubelet[3072]: I0113 20:34:22.325412 3072 topology_manager.go:215] "Topology Admit Handler" podUID="74fb0c1030837c48a9e91459776c7b69" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:22.325863 kubelet[3072]: I0113 20:34:22.325627 3072 topology_manager.go:215] "Topology Admit Handler" podUID="3f1a157a101636aacb254c3aa32d87a2" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:22.325863 kubelet[3072]: I0113 20:34:22.325797 3072 topology_manager.go:215] "Topology Admit Handler" podUID="deface5525ac7552237e2547399829da" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:22.421787 kubelet[3072]: I0113 20:34:22.421351 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/74fb0c1030837c48a9e91459776c7b69-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-6-5d4da4afb6\" (UID: \"74fb0c1030837c48a9e91459776c7b69\") " pod="kube-system/kube-apiserver-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:22.421787 kubelet[3072]: I0113 20:34:22.421400 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74fb0c1030837c48a9e91459776c7b69-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-6-5d4da4afb6\" (UID: \"74fb0c1030837c48a9e91459776c7b69\") " pod="kube-system/kube-apiserver-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:22.421787 kubelet[3072]: I0113 20:34:22.421423 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74fb0c1030837c48a9e91459776c7b69-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-6-5d4da4afb6\" (UID: \"74fb0c1030837c48a9e91459776c7b69\") " pod="kube-system/kube-apiserver-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:22.421787 kubelet[3072]: I0113 20:34:22.421445 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f1a157a101636aacb254c3aa32d87a2-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-6-5d4da4afb6\" (UID: \"3f1a157a101636aacb254c3aa32d87a2\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:22.421787 kubelet[3072]: I0113 20:34:22.421464 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f1a157a101636aacb254c3aa32d87a2-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-6-5d4da4afb6\" (UID: \"3f1a157a101636aacb254c3aa32d87a2\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:22.421993 kubelet[3072]: I0113 
20:34:22.421483 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/deface5525ac7552237e2547399829da-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-6-5d4da4afb6\" (UID: \"deface5525ac7552237e2547399829da\") " pod="kube-system/kube-scheduler-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:22.421993 kubelet[3072]: I0113 20:34:22.421502 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f1a157a101636aacb254c3aa32d87a2-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-6-5d4da4afb6\" (UID: \"3f1a157a101636aacb254c3aa32d87a2\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:22.421993 kubelet[3072]: I0113 20:34:22.421522 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f1a157a101636aacb254c3aa32d87a2-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-6-5d4da4afb6\" (UID: \"3f1a157a101636aacb254c3aa32d87a2\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:22.421993 kubelet[3072]: I0113 20:34:22.421543 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f1a157a101636aacb254c3aa32d87a2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-6-5d4da4afb6\" (UID: \"3f1a157a101636aacb254c3aa32d87a2\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:22.993952 kubelet[3072]: I0113 20:34:22.993888 3072 apiserver.go:52] "Watching apiserver" Jan 13 20:34:23.016951 kubelet[3072]: I0113 20:34:23.016895 3072 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 20:34:23.134075 kubelet[3072]: E0113 
20:34:23.134039 3072 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-0-6-5d4da4afb6\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-0-6-5d4da4afb6" Jan 13 20:34:23.173717 kubelet[3072]: I0113 20:34:23.173579 3072 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-0-6-5d4da4afb6" podStartSLOduration=1.17340482 podStartE2EDuration="1.17340482s" podCreationTimestamp="2025-01-13 20:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:34:23.172672302 +0000 UTC m=+1.267714627" watchObservedRunningTime="2025-01-13 20:34:23.17340482 +0000 UTC m=+1.268447185" Jan 13 20:34:23.174092 kubelet[3072]: I0113 20:34:23.174063 3072 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-0-6-5d4da4afb6" podStartSLOduration=1.17387698 podStartE2EDuration="1.17387698s" podCreationTimestamp="2025-01-13 20:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:34:23.144281267 +0000 UTC m=+1.239323632" watchObservedRunningTime="2025-01-13 20:34:23.17387698 +0000 UTC m=+1.268919345" Jan 13 20:34:23.240529 kubelet[3072]: I0113 20:34:23.240485 3072 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-0-6-5d4da4afb6" podStartSLOduration=1.240444754 podStartE2EDuration="1.240444754s" podCreationTimestamp="2025-01-13 20:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:34:23.215661713 +0000 UTC m=+1.310704118" watchObservedRunningTime="2025-01-13 20:34:23.240444754 +0000 UTC m=+1.335487119" Jan 13 20:34:27.030508 sudo[2108]: pam_unix(sudo:session): session closed 
for user root Jan 13 20:34:27.188898 sshd[2094]: Connection closed by 147.75.109.163 port 58514 Jan 13 20:34:27.189784 sshd-session[2091]: pam_unix(sshd:session): session closed for user core Jan 13 20:34:27.198835 systemd[1]: sshd@6-138.199.152.196:22-147.75.109.163:58514.service: Deactivated successfully. Jan 13 20:34:27.200622 systemd-logind[1591]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:34:27.201812 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:34:27.202414 systemd-logind[1591]: Removed session 7. Jan 13 20:34:35.428061 kubelet[3072]: I0113 20:34:35.428021 3072 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 20:34:35.429442 containerd[1614]: time="2025-01-13T20:34:35.429243613Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:34:35.430798 kubelet[3072]: I0113 20:34:35.429978 3072 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 20:34:36.443547 kubelet[3072]: I0113 20:34:36.442456 3072 topology_manager.go:215] "Topology Admit Handler" podUID="60f4248d-f232-4e96-806f-9cc39d77f376" podNamespace="kube-system" podName="kube-proxy-qxkb6" Jan 13 20:34:36.508299 kubelet[3072]: I0113 20:34:36.508244 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/60f4248d-f232-4e96-806f-9cc39d77f376-kube-proxy\") pod \"kube-proxy-qxkb6\" (UID: \"60f4248d-f232-4e96-806f-9cc39d77f376\") " pod="kube-system/kube-proxy-qxkb6" Jan 13 20:34:36.508655 kubelet[3072]: I0113 20:34:36.508620 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60f4248d-f232-4e96-806f-9cc39d77f376-xtables-lock\") pod \"kube-proxy-qxkb6\" (UID: \"60f4248d-f232-4e96-806f-9cc39d77f376\") " 
pod="kube-system/kube-proxy-qxkb6" Jan 13 20:34:36.508766 kubelet[3072]: I0113 20:34:36.508756 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60f4248d-f232-4e96-806f-9cc39d77f376-lib-modules\") pod \"kube-proxy-qxkb6\" (UID: \"60f4248d-f232-4e96-806f-9cc39d77f376\") " pod="kube-system/kube-proxy-qxkb6" Jan 13 20:34:36.508873 kubelet[3072]: I0113 20:34:36.508863 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkv2c\" (UniqueName: \"kubernetes.io/projected/60f4248d-f232-4e96-806f-9cc39d77f376-kube-api-access-kkv2c\") pod \"kube-proxy-qxkb6\" (UID: \"60f4248d-f232-4e96-806f-9cc39d77f376\") " pod="kube-system/kube-proxy-qxkb6" Jan 13 20:34:36.579140 kubelet[3072]: I0113 20:34:36.579103 3072 topology_manager.go:215] "Topology Admit Handler" podUID="53b589dd-f75b-401b-a752-347e0d5e458c" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-q8xs7" Jan 13 20:34:36.611248 kubelet[3072]: I0113 20:34:36.609973 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d2b2\" (UniqueName: \"kubernetes.io/projected/53b589dd-f75b-401b-a752-347e0d5e458c-kube-api-access-7d2b2\") pod \"tigera-operator-c7ccbd65-q8xs7\" (UID: \"53b589dd-f75b-401b-a752-347e0d5e458c\") " pod="tigera-operator/tigera-operator-c7ccbd65-q8xs7" Jan 13 20:34:36.611248 kubelet[3072]: I0113 20:34:36.610094 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/53b589dd-f75b-401b-a752-347e0d5e458c-var-lib-calico\") pod \"tigera-operator-c7ccbd65-q8xs7\" (UID: \"53b589dd-f75b-401b-a752-347e0d5e458c\") " pod="tigera-operator/tigera-operator-c7ccbd65-q8xs7" Jan 13 20:34:36.753377 containerd[1614]: time="2025-01-13T20:34:36.753258263Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-qxkb6,Uid:60f4248d-f232-4e96-806f-9cc39d77f376,Namespace:kube-system,Attempt:0,}" Jan 13 20:34:36.780658 containerd[1614]: time="2025-01-13T20:34:36.780441468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:36.780658 containerd[1614]: time="2025-01-13T20:34:36.780601108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:36.780658 containerd[1614]: time="2025-01-13T20:34:36.780619988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:36.781426 containerd[1614]: time="2025-01-13T20:34:36.781341027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:36.823074 containerd[1614]: time="2025-01-13T20:34:36.822977654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qxkb6,Uid:60f4248d-f232-4e96-806f-9cc39d77f376,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6002b417cc144f9e63f39b465db761bf7060f1083f2b00dcda9d7e6326e9a48\"" Jan 13 20:34:36.828652 containerd[1614]: time="2025-01-13T20:34:36.828509606Z" level=info msg="CreateContainer within sandbox \"b6002b417cc144f9e63f39b465db761bf7060f1083f2b00dcda9d7e6326e9a48\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:34:36.843197 containerd[1614]: time="2025-01-13T20:34:36.843116988Z" level=info msg="CreateContainer within sandbox \"b6002b417cc144f9e63f39b465db761bf7060f1083f2b00dcda9d7e6326e9a48\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"236ab3de9b27f4be41cc67cd9b7cbc513acffbfba2cf0828e2065e65298d4a9e\"" Jan 13 20:34:36.846245 containerd[1614]: time="2025-01-13T20:34:36.845670464Z" level=info msg="StartContainer for 
\"236ab3de9b27f4be41cc67cd9b7cbc513acffbfba2cf0828e2065e65298d4a9e\"" Jan 13 20:34:36.885621 containerd[1614]: time="2025-01-13T20:34:36.885561493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-q8xs7,Uid:53b589dd-f75b-401b-a752-347e0d5e458c,Namespace:tigera-operator,Attempt:0,}" Jan 13 20:34:36.914657 containerd[1614]: time="2025-01-13T20:34:36.914574656Z" level=info msg="StartContainer for \"236ab3de9b27f4be41cc67cd9b7cbc513acffbfba2cf0828e2065e65298d4a9e\" returns successfully" Jan 13 20:34:36.924134 containerd[1614]: time="2025-01-13T20:34:36.923967604Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:36.924134 containerd[1614]: time="2025-01-13T20:34:36.924042084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:36.924134 containerd[1614]: time="2025-01-13T20:34:36.924058644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:36.924900 containerd[1614]: time="2025-01-13T20:34:36.924785883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:36.978567 containerd[1614]: time="2025-01-13T20:34:36.978446934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-q8xs7,Uid:53b589dd-f75b-401b-a752-347e0d5e458c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"41c425ca5d484d03f5b1fc2cebc6b41291460136038589b0a6f5081667b90f35\"" Jan 13 20:34:36.981903 containerd[1614]: time="2025-01-13T20:34:36.981843410Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 13 20:34:37.163028 kubelet[3072]: I0113 20:34:37.162616 3072 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qxkb6" podStartSLOduration=1.162569461 podStartE2EDuration="1.162569461s" podCreationTimestamp="2025-01-13 20:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:34:37.160946743 +0000 UTC m=+15.255989148" watchObservedRunningTime="2025-01-13 20:34:37.162569461 +0000 UTC m=+15.257611826" Jan 13 20:34:38.958874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4264334218.mount: Deactivated successfully. 
Jan 13 20:34:39.318637 containerd[1614]: time="2025-01-13T20:34:39.318553046Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:39.320409 containerd[1614]: time="2025-01-13T20:34:39.320328404Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125980" Jan 13 20:34:39.322479 containerd[1614]: time="2025-01-13T20:34:39.321844122Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:39.325365 containerd[1614]: time="2025-01-13T20:34:39.325289517Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:39.327170 containerd[1614]: time="2025-01-13T20:34:39.327106315Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.345214225s" Jan 13 20:34:39.327291 containerd[1614]: time="2025-01-13T20:34:39.327185235Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 13 20:34:39.333650 containerd[1614]: time="2025-01-13T20:34:39.333491747Z" level=info msg="CreateContainer within sandbox \"41c425ca5d484d03f5b1fc2cebc6b41291460136038589b0a6f5081667b90f35\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 20:34:39.356019 containerd[1614]: time="2025-01-13T20:34:39.355897320Z" level=info msg="CreateContainer within sandbox 
\"41c425ca5d484d03f5b1fc2cebc6b41291460136038589b0a6f5081667b90f35\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4107a36466146899871229a7550ffbecd241b82455c5b3229931b436f59a6ec0\"" Jan 13 20:34:39.358150 containerd[1614]: time="2025-01-13T20:34:39.358009197Z" level=info msg="StartContainer for \"4107a36466146899871229a7550ffbecd241b82455c5b3229931b436f59a6ec0\"" Jan 13 20:34:39.412168 containerd[1614]: time="2025-01-13T20:34:39.412123811Z" level=info msg="StartContainer for \"4107a36466146899871229a7550ffbecd241b82455c5b3229931b436f59a6ec0\" returns successfully" Jan 13 20:34:40.168699 kubelet[3072]: I0113 20:34:40.168568 3072 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-q8xs7" podStartSLOduration=1.820715302 podStartE2EDuration="4.168505045s" podCreationTimestamp="2025-01-13 20:34:36 +0000 UTC" firstStartedPulling="2025-01-13 20:34:36.979814892 +0000 UTC m=+15.074857257" lastFinishedPulling="2025-01-13 20:34:39.327604635 +0000 UTC m=+17.422647000" observedRunningTime="2025-01-13 20:34:40.167171647 +0000 UTC m=+18.262214012" watchObservedRunningTime="2025-01-13 20:34:40.168505045 +0000 UTC m=+18.263547490" Jan 13 20:34:43.630251 kubelet[3072]: I0113 20:34:43.625357 3072 topology_manager.go:215] "Topology Admit Handler" podUID="5bc87553-5bc4-442d-894e-fcbb7a28a581" podNamespace="calico-system" podName="calico-typha-787f7fc5c9-7pfjz" Jan 13 20:34:43.755917 kubelet[3072]: I0113 20:34:43.755871 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5bc87553-5bc4-442d-894e-fcbb7a28a581-typha-certs\") pod \"calico-typha-787f7fc5c9-7pfjz\" (UID: \"5bc87553-5bc4-442d-894e-fcbb7a28a581\") " pod="calico-system/calico-typha-787f7fc5c9-7pfjz" Jan 13 20:34:43.755917 kubelet[3072]: I0113 20:34:43.755925 3072 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bc87553-5bc4-442d-894e-fcbb7a28a581-tigera-ca-bundle\") pod \"calico-typha-787f7fc5c9-7pfjz\" (UID: \"5bc87553-5bc4-442d-894e-fcbb7a28a581\") " pod="calico-system/calico-typha-787f7fc5c9-7pfjz" Jan 13 20:34:43.756084 kubelet[3072]: I0113 20:34:43.755949 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6bxb\" (UniqueName: \"kubernetes.io/projected/5bc87553-5bc4-442d-894e-fcbb7a28a581-kube-api-access-b6bxb\") pod \"calico-typha-787f7fc5c9-7pfjz\" (UID: \"5bc87553-5bc4-442d-894e-fcbb7a28a581\") " pod="calico-system/calico-typha-787f7fc5c9-7pfjz" Jan 13 20:34:43.762040 kubelet[3072]: I0113 20:34:43.758827 3072 topology_manager.go:215] "Topology Admit Handler" podUID="1495e0c4-41ee-4247-818e-e7c78175ce7f" podNamespace="calico-system" podName="calico-node-mq9wk" Jan 13 20:34:43.893686 kubelet[3072]: I0113 20:34:43.893467 3072 topology_manager.go:215] "Topology Admit Handler" podUID="d79efc93-b14e-4d5a-8c70-0155fb5a684a" podNamespace="calico-system" podName="csi-node-driver-5hc2j" Jan 13 20:34:43.896022 kubelet[3072]: E0113 20:34:43.895078 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5hc2j" podUID="d79efc93-b14e-4d5a-8c70-0155fb5a684a" Jan 13 20:34:43.936610 containerd[1614]: time="2025-01-13T20:34:43.936485138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-787f7fc5c9-7pfjz,Uid:5bc87553-5bc4-442d-894e-fcbb7a28a581,Namespace:calico-system,Attempt:0,}" Jan 13 20:34:43.958250 kubelet[3072]: I0113 20:34:43.958163 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/1495e0c4-41ee-4247-818e-e7c78175ce7f-lib-modules\") pod \"calico-node-mq9wk\" (UID: \"1495e0c4-41ee-4247-818e-e7c78175ce7f\") " pod="calico-system/calico-node-mq9wk" Jan 13 20:34:43.958549 kubelet[3072]: I0113 20:34:43.958525 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1495e0c4-41ee-4247-818e-e7c78175ce7f-xtables-lock\") pod \"calico-node-mq9wk\" (UID: \"1495e0c4-41ee-4247-818e-e7c78175ce7f\") " pod="calico-system/calico-node-mq9wk" Jan 13 20:34:43.960719 kubelet[3072]: I0113 20:34:43.960582 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1495e0c4-41ee-4247-818e-e7c78175ce7f-policysync\") pod \"calico-node-mq9wk\" (UID: \"1495e0c4-41ee-4247-818e-e7c78175ce7f\") " pod="calico-system/calico-node-mq9wk" Jan 13 20:34:43.960719 kubelet[3072]: I0113 20:34:43.960662 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1495e0c4-41ee-4247-818e-e7c78175ce7f-var-lib-calico\") pod \"calico-node-mq9wk\" (UID: \"1495e0c4-41ee-4247-818e-e7c78175ce7f\") " pod="calico-system/calico-node-mq9wk" Jan 13 20:34:43.960719 kubelet[3072]: I0113 20:34:43.960689 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1495e0c4-41ee-4247-818e-e7c78175ce7f-node-certs\") pod \"calico-node-mq9wk\" (UID: \"1495e0c4-41ee-4247-818e-e7c78175ce7f\") " pod="calico-system/calico-node-mq9wk" Jan 13 20:34:43.960931 kubelet[3072]: I0113 20:34:43.960916 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/1495e0c4-41ee-4247-818e-e7c78175ce7f-var-run-calico\") pod \"calico-node-mq9wk\" (UID: \"1495e0c4-41ee-4247-818e-e7c78175ce7f\") " pod="calico-system/calico-node-mq9wk" Jan 13 20:34:43.961043 kubelet[3072]: I0113 20:34:43.961033 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1495e0c4-41ee-4247-818e-e7c78175ce7f-cni-net-dir\") pod \"calico-node-mq9wk\" (UID: \"1495e0c4-41ee-4247-818e-e7c78175ce7f\") " pod="calico-system/calico-node-mq9wk" Jan 13 20:34:43.961151 kubelet[3072]: I0113 20:34:43.961142 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1495e0c4-41ee-4247-818e-e7c78175ce7f-cni-bin-dir\") pod \"calico-node-mq9wk\" (UID: \"1495e0c4-41ee-4247-818e-e7c78175ce7f\") " pod="calico-system/calico-node-mq9wk" Jan 13 20:34:43.961873 kubelet[3072]: I0113 20:34:43.961691 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1495e0c4-41ee-4247-818e-e7c78175ce7f-cni-log-dir\") pod \"calico-node-mq9wk\" (UID: \"1495e0c4-41ee-4247-818e-e7c78175ce7f\") " pod="calico-system/calico-node-mq9wk" Jan 13 20:34:43.961873 kubelet[3072]: I0113 20:34:43.961761 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1495e0c4-41ee-4247-818e-e7c78175ce7f-tigera-ca-bundle\") pod \"calico-node-mq9wk\" (UID: \"1495e0c4-41ee-4247-818e-e7c78175ce7f\") " pod="calico-system/calico-node-mq9wk" Jan 13 20:34:43.961873 kubelet[3072]: I0113 20:34:43.961788 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/1495e0c4-41ee-4247-818e-e7c78175ce7f-flexvol-driver-host\") pod \"calico-node-mq9wk\" (UID: \"1495e0c4-41ee-4247-818e-e7c78175ce7f\") " pod="calico-system/calico-node-mq9wk" Jan 13 20:34:43.961873 kubelet[3072]: I0113 20:34:43.961807 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26q8t\" (UniqueName: \"kubernetes.io/projected/1495e0c4-41ee-4247-818e-e7c78175ce7f-kube-api-access-26q8t\") pod \"calico-node-mq9wk\" (UID: \"1495e0c4-41ee-4247-818e-e7c78175ce7f\") " pod="calico-system/calico-node-mq9wk" Jan 13 20:34:43.981350 containerd[1614]: time="2025-01-13T20:34:43.976226092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:43.981350 containerd[1614]: time="2025-01-13T20:34:43.976284732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:43.981350 containerd[1614]: time="2025-01-13T20:34:43.976404772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:43.981350 containerd[1614]: time="2025-01-13T20:34:43.977334371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:44.065262 kubelet[3072]: I0113 20:34:44.062708 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d79efc93-b14e-4d5a-8c70-0155fb5a684a-varrun\") pod \"csi-node-driver-5hc2j\" (UID: \"d79efc93-b14e-4d5a-8c70-0155fb5a684a\") " pod="calico-system/csi-node-driver-5hc2j" Jan 13 20:34:44.065262 kubelet[3072]: I0113 20:34:44.062777 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d79efc93-b14e-4d5a-8c70-0155fb5a684a-registration-dir\") pod \"csi-node-driver-5hc2j\" (UID: \"d79efc93-b14e-4d5a-8c70-0155fb5a684a\") " pod="calico-system/csi-node-driver-5hc2j" Jan 13 20:34:44.065262 kubelet[3072]: I0113 20:34:44.062812 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d79efc93-b14e-4d5a-8c70-0155fb5a684a-kubelet-dir\") pod \"csi-node-driver-5hc2j\" (UID: \"d79efc93-b14e-4d5a-8c70-0155fb5a684a\") " pod="calico-system/csi-node-driver-5hc2j" Jan 13 20:34:44.065262 kubelet[3072]: I0113 20:34:44.062906 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d79efc93-b14e-4d5a-8c70-0155fb5a684a-socket-dir\") pod \"csi-node-driver-5hc2j\" (UID: \"d79efc93-b14e-4d5a-8c70-0155fb5a684a\") " pod="calico-system/csi-node-driver-5hc2j" Jan 13 20:34:44.065262 kubelet[3072]: I0113 20:34:44.062929 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwplz\" (UniqueName: \"kubernetes.io/projected/d79efc93-b14e-4d5a-8c70-0155fb5a684a-kube-api-access-pwplz\") pod \"csi-node-driver-5hc2j\" (UID: \"d79efc93-b14e-4d5a-8c70-0155fb5a684a\") " 
pod="calico-system/csi-node-driver-5hc2j" Jan 13 20:34:44.068472 kubelet[3072]: E0113 20:34:44.068397 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.068707 kubelet[3072]: W0113 20:34:44.068588 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.068707 kubelet[3072]: E0113 20:34:44.068618 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:44.069406 kubelet[3072]: E0113 20:34:44.069287 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.069406 kubelet[3072]: W0113 20:34:44.069304 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.069406 kubelet[3072]: E0113 20:34:44.069322 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:44.070588 kubelet[3072]: E0113 20:34:44.070400 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.070588 kubelet[3072]: W0113 20:34:44.070414 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.070588 kubelet[3072]: E0113 20:34:44.070473 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:44.071392 kubelet[3072]: E0113 20:34:44.071377 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.071500 kubelet[3072]: W0113 20:34:44.071488 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.072265 kubelet[3072]: E0113 20:34:44.071555 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:44.072490 kubelet[3072]: E0113 20:34:44.072477 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.073260 kubelet[3072]: W0113 20:34:44.072554 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.073488 kubelet[3072]: E0113 20:34:44.073359 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:44.073671 kubelet[3072]: E0113 20:34:44.073659 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.073739 kubelet[3072]: W0113 20:34:44.073729 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.074312 kubelet[3072]: E0113 20:34:44.074297 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:44.076526 kubelet[3072]: E0113 20:34:44.076407 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.076526 kubelet[3072]: W0113 20:34:44.076469 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.076526 kubelet[3072]: E0113 20:34:44.076488 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:44.092597 kubelet[3072]: E0113 20:34:44.092506 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.092597 kubelet[3072]: W0113 20:34:44.092530 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.092597 kubelet[3072]: E0113 20:34:44.092554 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:44.166828 kubelet[3072]: E0113 20:34:44.165242 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.166828 kubelet[3072]: W0113 20:34:44.165271 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.167848 kubelet[3072]: E0113 20:34:44.166987 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:44.169756 kubelet[3072]: E0113 20:34:44.168260 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.169756 kubelet[3072]: W0113 20:34:44.168276 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.169756 kubelet[3072]: E0113 20:34:44.168301 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:44.171463 kubelet[3072]: E0113 20:34:44.171290 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.171463 kubelet[3072]: W0113 20:34:44.171310 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.171463 kubelet[3072]: E0113 20:34:44.171332 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:44.172316 kubelet[3072]: E0113 20:34:44.171997 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.172316 kubelet[3072]: W0113 20:34:44.172015 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.172316 kubelet[3072]: E0113 20:34:44.172042 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:44.174100 kubelet[3072]: E0113 20:34:44.173506 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.174100 kubelet[3072]: W0113 20:34:44.173522 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.174100 kubelet[3072]: E0113 20:34:44.173542 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:44.175361 kubelet[3072]: E0113 20:34:44.174740 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.175361 kubelet[3072]: W0113 20:34:44.174756 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.175361 kubelet[3072]: E0113 20:34:44.175324 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:44.176136 kubelet[3072]: E0113 20:34:44.175943 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.176136 kubelet[3072]: W0113 20:34:44.175960 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.176521 kubelet[3072]: E0113 20:34:44.176313 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:44.177076 kubelet[3072]: E0113 20:34:44.176964 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.177076 kubelet[3072]: W0113 20:34:44.176993 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.177327 kubelet[3072]: E0113 20:34:44.177231 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:44.177908 kubelet[3072]: E0113 20:34:44.177667 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.177908 kubelet[3072]: W0113 20:34:44.177680 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.177908 kubelet[3072]: E0113 20:34:44.177844 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:44.178576 kubelet[3072]: E0113 20:34:44.178376 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.178576 kubelet[3072]: W0113 20:34:44.178389 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.178576 kubelet[3072]: E0113 20:34:44.178545 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:44.179176 kubelet[3072]: E0113 20:34:44.179016 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.179176 kubelet[3072]: W0113 20:34:44.179030 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.179176 kubelet[3072]: E0113 20:34:44.179119 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:44.179799 kubelet[3072]: E0113 20:34:44.179545 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.179799 kubelet[3072]: W0113 20:34:44.179557 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.179978 kubelet[3072]: E0113 20:34:44.179918 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:44.180274 kubelet[3072]: E0113 20:34:44.180153 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.180274 kubelet[3072]: W0113 20:34:44.180166 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.180520 kubelet[3072]: E0113 20:34:44.180353 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:44.180770 kubelet[3072]: E0113 20:34:44.180695 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.180770 kubelet[3072]: W0113 20:34:44.180707 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.180770 kubelet[3072]: E0113 20:34:44.180754 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:44.181143 kubelet[3072]: E0113 20:34:44.181094 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.181143 kubelet[3072]: W0113 20:34:44.181105 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.181322 kubelet[3072]: E0113 20:34:44.181244 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:44.181539 kubelet[3072]: E0113 20:34:44.181527 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.181670 kubelet[3072]: W0113 20:34:44.181593 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.181670 kubelet[3072]: E0113 20:34:44.181646 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:44.181952 kubelet[3072]: E0113 20:34:44.181883 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.181952 kubelet[3072]: W0113 20:34:44.181894 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.182181 kubelet[3072]: E0113 20:34:44.182105 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:44.182416 kubelet[3072]: E0113 20:34:44.182307 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.182416 kubelet[3072]: W0113 20:34:44.182319 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.182939 kubelet[3072]: E0113 20:34:44.182726 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:44.183382 kubelet[3072]: E0113 20:34:44.183292 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.183382 kubelet[3072]: W0113 20:34:44.183306 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.183864 kubelet[3072]: E0113 20:34:44.183622 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:44.184692 kubelet[3072]: E0113 20:34:44.184396 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.184692 kubelet[3072]: W0113 20:34:44.184412 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.185015 kubelet[3072]: E0113 20:34:44.184892 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:44.185708 kubelet[3072]: E0113 20:34:44.185521 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.185708 kubelet[3072]: W0113 20:34:44.185537 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.185956 kubelet[3072]: E0113 20:34:44.185835 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:44.186078 kubelet[3072]: E0113 20:34:44.186054 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.186078 kubelet[3072]: W0113 20:34:44.186065 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.186579 kubelet[3072]: E0113 20:34:44.186354 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:44.187091 kubelet[3072]: E0113 20:34:44.186973 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.187091 kubelet[3072]: W0113 20:34:44.186986 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.187274 kubelet[3072]: E0113 20:34:44.187191 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:44.187888 kubelet[3072]: E0113 20:34:44.187857 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.188281 kubelet[3072]: W0113 20:34:44.188148 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.188725 kubelet[3072]: E0113 20:34:44.188460 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:44.189493 kubelet[3072]: E0113 20:34:44.189333 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.189983 kubelet[3072]: W0113 20:34:44.189652 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.189983 kubelet[3072]: E0113 20:34:44.189674 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:44.213325 kubelet[3072]: E0113 20:34:44.213163 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:44.213325 kubelet[3072]: W0113 20:34:44.213185 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:44.213325 kubelet[3072]: E0113 20:34:44.213285 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:44.216515 containerd[1614]: time="2025-01-13T20:34:44.216318176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-787f7fc5c9-7pfjz,Uid:5bc87553-5bc4-442d-894e-fcbb7a28a581,Namespace:calico-system,Attempt:0,} returns sandbox id \"328e87a37b836d410e8f5f17b2c111a5e2c66e5268273c554d1b752ede4ae9a4\"" Jan 13 20:34:44.218896 containerd[1614]: time="2025-01-13T20:34:44.218459934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 13 20:34:44.366854 containerd[1614]: time="2025-01-13T20:34:44.366796324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mq9wk,Uid:1495e0c4-41ee-4247-818e-e7c78175ce7f,Namespace:calico-system,Attempt:0,}" Jan 13 20:34:44.403033 containerd[1614]: time="2025-01-13T20:34:44.402054723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:34:44.403033 containerd[1614]: time="2025-01-13T20:34:44.402182323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:34:44.403033 containerd[1614]: time="2025-01-13T20:34:44.402359603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:44.405864 containerd[1614]: time="2025-01-13T20:34:44.404368081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:34:44.465531 containerd[1614]: time="2025-01-13T20:34:44.464979531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mq9wk,Uid:1495e0c4-41ee-4247-818e-e7c78175ce7f,Namespace:calico-system,Attempt:0,} returns sandbox id \"fbbb61895d5d7d64866f53b2cc6590278426d1d7b39db309de3c5b509d5b0c57\"" Jan 13 20:34:45.953034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3591899981.mount: Deactivated successfully. Jan 13 20:34:46.023823 kubelet[3072]: E0113 20:34:46.023536 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5hc2j" podUID="d79efc93-b14e-4d5a-8c70-0155fb5a684a" Jan 13 20:34:46.883338 containerd[1614]: time="2025-01-13T20:34:46.883279597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:46.884916 containerd[1614]: time="2025-01-13T20:34:46.884740036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Jan 13 20:34:46.886877 containerd[1614]: time="2025-01-13T20:34:46.886722074Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:46.891291 containerd[1614]: time="2025-01-13T20:34:46.890283230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:46.892227 containerd[1614]: time="2025-01-13T20:34:46.892171588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id 
\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.673668334s" Jan 13 20:34:46.892439 containerd[1614]: time="2025-01-13T20:34:46.892416827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 13 20:34:46.895299 containerd[1614]: time="2025-01-13T20:34:46.895094984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 13 20:34:46.914128 containerd[1614]: time="2025-01-13T20:34:46.914087763Z" level=info msg="CreateContainer within sandbox \"328e87a37b836d410e8f5f17b2c111a5e2c66e5268273c554d1b752ede4ae9a4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 13 20:34:46.939423 containerd[1614]: time="2025-01-13T20:34:46.939344735Z" level=info msg="CreateContainer within sandbox \"328e87a37b836d410e8f5f17b2c111a5e2c66e5268273c554d1b752ede4ae9a4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"91b64223f732357442aed36159d205d920d3765b822ca1af3e5d04dc228b2e63\"" Jan 13 20:34:46.940513 containerd[1614]: time="2025-01-13T20:34:46.940342614Z" level=info msg="StartContainer for \"91b64223f732357442aed36159d205d920d3765b822ca1af3e5d04dc228b2e63\"" Jan 13 20:34:47.021518 containerd[1614]: time="2025-01-13T20:34:47.021341443Z" level=info msg="StartContainer for \"91b64223f732357442aed36159d205d920d3765b822ca1af3e5d04dc228b2e63\" returns successfully" Jan 13 20:34:47.286681 kubelet[3072]: E0113 20:34:47.286646 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.286681 kubelet[3072]: W0113 20:34:47.286678 3072 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.287439 kubelet[3072]: E0113 20:34:47.286710 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.287439 kubelet[3072]: E0113 20:34:47.286978 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.287439 kubelet[3072]: W0113 20:34:47.286990 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.287439 kubelet[3072]: E0113 20:34:47.287011 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.287439 kubelet[3072]: E0113 20:34:47.287302 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.287439 kubelet[3072]: W0113 20:34:47.287315 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.287439 kubelet[3072]: E0113 20:34:47.287333 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.287699 kubelet[3072]: E0113 20:34:47.287595 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.287699 kubelet[3072]: W0113 20:34:47.287606 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.287699 kubelet[3072]: E0113 20:34:47.287620 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.287928 kubelet[3072]: E0113 20:34:47.287887 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.287928 kubelet[3072]: W0113 20:34:47.287898 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.287928 kubelet[3072]: E0113 20:34:47.287914 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.288072 kubelet[3072]: E0113 20:34:47.288066 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.288102 kubelet[3072]: W0113 20:34:47.288075 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.288102 kubelet[3072]: E0113 20:34:47.288087 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.288445 kubelet[3072]: E0113 20:34:47.288250 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.288445 kubelet[3072]: W0113 20:34:47.288259 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.288445 kubelet[3072]: E0113 20:34:47.288270 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.288445 kubelet[3072]: E0113 20:34:47.288434 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.288445 kubelet[3072]: W0113 20:34:47.288442 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.288637 kubelet[3072]: E0113 20:34:47.288455 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.288637 kubelet[3072]: E0113 20:34:47.288611 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.288637 kubelet[3072]: W0113 20:34:47.288619 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.288637 kubelet[3072]: E0113 20:34:47.288630 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.288776 kubelet[3072]: E0113 20:34:47.288754 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.288776 kubelet[3072]: W0113 20:34:47.288761 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.288776 kubelet[3072]: E0113 20:34:47.288772 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.289656 kubelet[3072]: E0113 20:34:47.288961 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.289656 kubelet[3072]: W0113 20:34:47.288978 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.289656 kubelet[3072]: E0113 20:34:47.288994 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.289656 kubelet[3072]: E0113 20:34:47.289134 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.289656 kubelet[3072]: W0113 20:34:47.289141 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.289656 kubelet[3072]: E0113 20:34:47.289152 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.289656 kubelet[3072]: E0113 20:34:47.289318 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.289656 kubelet[3072]: W0113 20:34:47.289326 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.289656 kubelet[3072]: E0113 20:34:47.289338 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.289656 kubelet[3072]: E0113 20:34:47.289492 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.289925 kubelet[3072]: W0113 20:34:47.289501 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.289925 kubelet[3072]: E0113 20:34:47.289515 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.289925 kubelet[3072]: E0113 20:34:47.289652 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.289925 kubelet[3072]: W0113 20:34:47.289659 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.289925 kubelet[3072]: E0113 20:34:47.289670 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.297199 kubelet[3072]: E0113 20:34:47.297162 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.297199 kubelet[3072]: W0113 20:34:47.297193 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.297199 kubelet[3072]: E0113 20:34:47.297262 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.297812 kubelet[3072]: E0113 20:34:47.297599 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.297812 kubelet[3072]: W0113 20:34:47.297610 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.297812 kubelet[3072]: E0113 20:34:47.297640 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.298181 kubelet[3072]: E0113 20:34:47.297844 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.298181 kubelet[3072]: W0113 20:34:47.297852 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.298181 kubelet[3072]: E0113 20:34:47.297874 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.298181 kubelet[3072]: E0113 20:34:47.298151 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.298181 kubelet[3072]: W0113 20:34:47.298162 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.298181 kubelet[3072]: E0113 20:34:47.298176 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.298531 kubelet[3072]: E0113 20:34:47.298397 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.298531 kubelet[3072]: W0113 20:34:47.298420 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.298531 kubelet[3072]: E0113 20:34:47.298444 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.298718 kubelet[3072]: E0113 20:34:47.298589 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.298718 kubelet[3072]: W0113 20:34:47.298598 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.298718 kubelet[3072]: E0113 20:34:47.298613 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.298898 kubelet[3072]: E0113 20:34:47.298803 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.298898 kubelet[3072]: W0113 20:34:47.298812 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.298898 kubelet[3072]: E0113 20:34:47.298831 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.299695 kubelet[3072]: E0113 20:34:47.299381 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.299695 kubelet[3072]: W0113 20:34:47.299631 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.299695 kubelet[3072]: E0113 20:34:47.299660 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.299848 kubelet[3072]: E0113 20:34:47.299827 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.299848 kubelet[3072]: W0113 20:34:47.299844 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.299905 kubelet[3072]: E0113 20:34:47.299862 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.300016 kubelet[3072]: E0113 20:34:47.300004 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.300043 kubelet[3072]: W0113 20:34:47.300017 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.300043 kubelet[3072]: E0113 20:34:47.300031 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.300185 kubelet[3072]: E0113 20:34:47.300171 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.300185 kubelet[3072]: W0113 20:34:47.300183 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.300388 kubelet[3072]: E0113 20:34:47.300257 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.300388 kubelet[3072]: E0113 20:34:47.300338 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.300388 kubelet[3072]: W0113 20:34:47.300346 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.300525 kubelet[3072]: E0113 20:34:47.300425 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.300525 kubelet[3072]: E0113 20:34:47.300540 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.300606 kubelet[3072]: W0113 20:34:47.300547 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.300606 kubelet[3072]: E0113 20:34:47.300561 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.300723 kubelet[3072]: E0113 20:34:47.300709 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.300723 kubelet[3072]: W0113 20:34:47.300722 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.300831 kubelet[3072]: E0113 20:34:47.300737 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.301086 kubelet[3072]: E0113 20:34:47.301072 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.301330 kubelet[3072]: W0113 20:34:47.301162 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.301330 kubelet[3072]: E0113 20:34:47.301198 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.301444 kubelet[3072]: E0113 20:34:47.301354 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.301444 kubelet[3072]: W0113 20:34:47.301363 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.301444 kubelet[3072]: E0113 20:34:47.301377 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:47.301730 kubelet[3072]: E0113 20:34:47.301702 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.301730 kubelet[3072]: W0113 20:34:47.301721 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.302038 kubelet[3072]: E0113 20:34:47.301736 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:47.302134 kubelet[3072]: E0113 20:34:47.302120 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:47.302182 kubelet[3072]: W0113 20:34:47.302171 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:47.302250 kubelet[3072]: E0113 20:34:47.302240 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:48.024750 kubelet[3072]: E0113 20:34:48.024709 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5hc2j" podUID="d79efc93-b14e-4d5a-8c70-0155fb5a684a" Jan 13 20:34:48.190093 kubelet[3072]: I0113 20:34:48.190061 3072 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:34:48.195818 kubelet[3072]: E0113 20:34:48.195660 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:48.195818 kubelet[3072]: W0113 20:34:48.195694 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:48.195818 kubelet[3072]: E0113 20:34:48.195731 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:48.196271 kubelet[3072]: E0113 20:34:48.196055 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:48.196271 kubelet[3072]: W0113 20:34:48.196071 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:48.196271 kubelet[3072]: E0113 20:34:48.196093 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:48.196613 kubelet[3072]: E0113 20:34:48.196388 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:48.196613 kubelet[3072]: W0113 20:34:48.196441 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:48.196613 kubelet[3072]: E0113 20:34:48.196463 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:48.196836 kubelet[3072]: E0113 20:34:48.196743 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:48.196836 kubelet[3072]: W0113 20:34:48.196760 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:48.196836 kubelet[3072]: E0113 20:34:48.196788 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 20:34:48.207963 kubelet[3072]: E0113 20:34:48.207936 3072 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 20:34:48.207963 kubelet[3072]: W0113 20:34:48.207948 3072 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 20:34:48.207963 kubelet[3072]: E0113 20:34:48.207959 3072 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 20:34:48.429301 containerd[1614]: time="2025-01-13T20:34:48.428123056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:48.431007 containerd[1614]: time="2025-01-13T20:34:48.429756094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 13 20:34:48.432010 containerd[1614]: time="2025-01-13T20:34:48.431954971Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:48.437466 containerd[1614]: time="2025-01-13T20:34:48.437390805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:48.438976 containerd[1614]: time="2025-01-13T20:34:48.438899684Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.54375194s" Jan 13 20:34:48.438976 containerd[1614]: time="2025-01-13T20:34:48.438960844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 13 20:34:48.441421 containerd[1614]: time="2025-01-13T20:34:48.441342081Z" level=info msg="CreateContainer within sandbox \"fbbb61895d5d7d64866f53b2cc6590278426d1d7b39db309de3c5b509d5b0c57\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 20:34:48.457178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1558102990.mount: Deactivated successfully. Jan 13 20:34:48.464079 containerd[1614]: time="2025-01-13T20:34:48.464029896Z" level=info msg="CreateContainer within sandbox \"fbbb61895d5d7d64866f53b2cc6590278426d1d7b39db309de3c5b509d5b0c57\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f819dbbe8320876fa64c148a75f7ccbc6ecdede6e51f29b89227495d95f21d25\"" Jan 13 20:34:48.465891 containerd[1614]: time="2025-01-13T20:34:48.465757735Z" level=info msg="StartContainer for \"f819dbbe8320876fa64c148a75f7ccbc6ecdede6e51f29b89227495d95f21d25\"" Jan 13 20:34:48.552260 containerd[1614]: time="2025-01-13T20:34:48.549468843Z" level=info msg="StartContainer for \"f819dbbe8320876fa64c148a75f7ccbc6ecdede6e51f29b89227495d95f21d25\" returns successfully" Jan 13 20:34:48.729831 containerd[1614]: time="2025-01-13T20:34:48.729728087Z" level=info msg="shim disconnected" id=f819dbbe8320876fa64c148a75f7ccbc6ecdede6e51f29b89227495d95f21d25 namespace=k8s.io Jan 13 20:34:48.729831 containerd[1614]: time="2025-01-13T20:34:48.729799326Z" level=warning msg="cleaning up after shim disconnected" id=f819dbbe8320876fa64c148a75f7ccbc6ecdede6e51f29b89227495d95f21d25 
namespace=k8s.io Jan 13 20:34:48.729831 containerd[1614]: time="2025-01-13T20:34:48.729809126Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:34:48.742975 containerd[1614]: time="2025-01-13T20:34:48.742807872Z" level=warning msg="cleanup warnings time=\"2025-01-13T20:34:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 13 20:34:48.911799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f819dbbe8320876fa64c148a75f7ccbc6ecdede6e51f29b89227495d95f21d25-rootfs.mount: Deactivated successfully. Jan 13 20:34:49.199921 containerd[1614]: time="2025-01-13T20:34:49.199786496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 13 20:34:49.216182 kubelet[3072]: I0113 20:34:49.216095 3072 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-787f7fc5c9-7pfjz" podStartSLOduration=3.540165347 podStartE2EDuration="6.216055239s" podCreationTimestamp="2025-01-13 20:34:43 +0000 UTC" firstStartedPulling="2025-01-13 20:34:44.218009734 +0000 UTC m=+22.313052099" lastFinishedPulling="2025-01-13 20:34:46.893899626 +0000 UTC m=+24.988941991" observedRunningTime="2025-01-13 20:34:47.201713084 +0000 UTC m=+25.296755449" watchObservedRunningTime="2025-01-13 20:34:49.216055239 +0000 UTC m=+27.311097604" Jan 13 20:34:50.024516 kubelet[3072]: E0113 20:34:50.023287 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5hc2j" podUID="d79efc93-b14e-4d5a-8c70-0155fb5a684a" Jan 13 20:34:52.025013 kubelet[3072]: E0113 20:34:52.023792 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5hc2j" podUID="d79efc93-b14e-4d5a-8c70-0155fb5a684a" Jan 13 20:34:53.659991 containerd[1614]: time="2025-01-13T20:34:53.659920313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:53.662710 containerd[1614]: time="2025-01-13T20:34:53.662586071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 13 20:34:53.663523 containerd[1614]: time="2025-01-13T20:34:53.663421790Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:53.667661 containerd[1614]: time="2025-01-13T20:34:53.667178466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:34:53.668500 containerd[1614]: time="2025-01-13T20:34:53.668461585Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.468632089s" Jan 13 20:34:53.668620 containerd[1614]: time="2025-01-13T20:34:53.668601504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 13 20:34:53.672770 containerd[1614]: time="2025-01-13T20:34:53.672723180Z" level=info msg="CreateContainer within sandbox 
\"fbbb61895d5d7d64866f53b2cc6590278426d1d7b39db309de3c5b509d5b0c57\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 20:34:53.688070 containerd[1614]: time="2025-01-13T20:34:53.687994124Z" level=info msg="CreateContainer within sandbox \"fbbb61895d5d7d64866f53b2cc6590278426d1d7b39db309de3c5b509d5b0c57\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d323b396f5d26a95611f262131cb1823b54bfda4ee051a3ff76a7de6770d0020\"" Jan 13 20:34:53.689304 containerd[1614]: time="2025-01-13T20:34:53.689222963Z" level=info msg="StartContainer for \"d323b396f5d26a95611f262131cb1823b54bfda4ee051a3ff76a7de6770d0020\"" Jan 13 20:34:53.757525 containerd[1614]: time="2025-01-13T20:34:53.757448653Z" level=info msg="StartContainer for \"d323b396f5d26a95611f262131cb1823b54bfda4ee051a3ff76a7de6770d0020\" returns successfully" Jan 13 20:34:54.025999 kubelet[3072]: E0113 20:34:54.025955 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5hc2j" podUID="d79efc93-b14e-4d5a-8c70-0155fb5a684a" Jan 13 20:34:54.325403 containerd[1614]: time="2025-01-13T20:34:54.323521833Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:34:54.356405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d323b396f5d26a95611f262131cb1823b54bfda4ee051a3ff76a7de6770d0020-rootfs.mount: Deactivated successfully. 
Jan 13 20:34:54.401869 kubelet[3072]: I0113 20:34:54.401827 3072 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:34:54.436824 kubelet[3072]: I0113 20:34:54.436718 3072 topology_manager.go:215] "Topology Admit Handler" podUID="34cc4b95-fb25-4302-8175-d6695afbf832" podNamespace="calico-system" podName="calico-kube-controllers-689464bd4f-rlk45" Jan 13 20:34:54.446992 kubelet[3072]: I0113 20:34:54.446868 3072 topology_manager.go:215] "Topology Admit Handler" podUID="70116092-cc29-4334-811e-6f8b5c36f3c0" podNamespace="kube-system" podName="coredns-76f75df574-tdc7v" Jan 13 20:34:54.447159 kubelet[3072]: I0113 20:34:54.447043 3072 topology_manager.go:215] "Topology Admit Handler" podUID="5418ad94-7e2e-4821-8b2b-1361c7326bfb" podNamespace="calico-apiserver" podName="calico-apiserver-6ccf4fbb57-gbjkq" Jan 13 20:34:54.454572 kubelet[3072]: W0113 20:34:54.454531 3072 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4152-2-0-6-5d4da4afb6" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-6-5d4da4afb6' and this object Jan 13 20:34:54.454572 kubelet[3072]: E0113 20:34:54.454575 3072 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4152-2-0-6-5d4da4afb6" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4152-2-0-6-5d4da4afb6' and this object Jan 13 20:34:54.454750 kubelet[3072]: W0113 20:34:54.454612 3072 reflector.go:539] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4152-2-0-6-5d4da4afb6" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship 
found between node 'ci-4152-2-0-6-5d4da4afb6' and this object
Jan 13 20:34:54.454750 kubelet[3072]: E0113 20:34:54.454622 3072 reflector.go:147] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4152-2-0-6-5d4da4afb6" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4152-2-0-6-5d4da4afb6' and this object
Jan 13 20:34:54.459866 kubelet[3072]: I0113 20:34:54.459131 3072 topology_manager.go:215] "Topology Admit Handler" podUID="a019a359-d21d-4317-8d1c-bd6d76806eac" podNamespace="kube-system" podName="coredns-76f75df574-d54ws"
Jan 13 20:34:54.459866 kubelet[3072]: I0113 20:34:54.459332 3072 topology_manager.go:215] "Topology Admit Handler" podUID="a12675bf-0fbc-45d7-9f8f-c29f2b87c216" podNamespace="calico-apiserver" podName="calico-apiserver-6ccf4fbb57-j85nm"
Jan 13 20:34:54.507132 containerd[1614]: time="2025-01-13T20:34:54.507047646Z" level=info msg="shim disconnected" id=d323b396f5d26a95611f262131cb1823b54bfda4ee051a3ff76a7de6770d0020 namespace=k8s.io
Jan 13 20:34:54.507408 containerd[1614]: time="2025-01-13T20:34:54.507386726Z" level=warning msg="cleaning up after shim disconnected" id=d323b396f5d26a95611f262131cb1823b54bfda4ee051a3ff76a7de6770d0020 namespace=k8s.io
Jan 13 20:34:54.507503 containerd[1614]: time="2025-01-13T20:34:54.507486886Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:34:54.555012 kubelet[3072]: I0113 20:34:54.554445 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70116092-cc29-4334-811e-6f8b5c36f3c0-config-volume\") pod \"coredns-76f75df574-tdc7v\" (UID: \"70116092-cc29-4334-811e-6f8b5c36f3c0\") " pod="kube-system/coredns-76f75df574-tdc7v"
Jan 13 20:34:54.555012 kubelet[3072]: I0113 20:34:54.554507 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4z98\" (UniqueName: \"kubernetes.io/projected/a019a359-d21d-4317-8d1c-bd6d76806eac-kube-api-access-x4z98\") pod \"coredns-76f75df574-d54ws\" (UID: \"a019a359-d21d-4317-8d1c-bd6d76806eac\") " pod="kube-system/coredns-76f75df574-d54ws"
Jan 13 20:34:54.555012 kubelet[3072]: I0113 20:34:54.554812 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8hgn\" (UniqueName: \"kubernetes.io/projected/70116092-cc29-4334-811e-6f8b5c36f3c0-kube-api-access-v8hgn\") pod \"coredns-76f75df574-tdc7v\" (UID: \"70116092-cc29-4334-811e-6f8b5c36f3c0\") " pod="kube-system/coredns-76f75df574-tdc7v"
Jan 13 20:34:54.555012 kubelet[3072]: I0113 20:34:54.554913 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34cc4b95-fb25-4302-8175-d6695afbf832-tigera-ca-bundle\") pod \"calico-kube-controllers-689464bd4f-rlk45\" (UID: \"34cc4b95-fb25-4302-8175-d6695afbf832\") " pod="calico-system/calico-kube-controllers-689464bd4f-rlk45"
Jan 13 20:34:54.555012 kubelet[3072]: I0113 20:34:54.554983 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a12675bf-0fbc-45d7-9f8f-c29f2b87c216-calico-apiserver-certs\") pod \"calico-apiserver-6ccf4fbb57-j85nm\" (UID: \"a12675bf-0fbc-45d7-9f8f-c29f2b87c216\") " pod="calico-apiserver/calico-apiserver-6ccf4fbb57-j85nm"
Jan 13 20:34:54.555546 kubelet[3072]: I0113 20:34:54.555072 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8plpp\" (UniqueName: \"kubernetes.io/projected/a12675bf-0fbc-45d7-9f8f-c29f2b87c216-kube-api-access-8plpp\") pod \"calico-apiserver-6ccf4fbb57-j85nm\" (UID: \"a12675bf-0fbc-45d7-9f8f-c29f2b87c216\") " pod="calico-apiserver/calico-apiserver-6ccf4fbb57-j85nm"
Jan 13 20:34:54.555546 kubelet[3072]: I0113 20:34:54.555117 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bskts\" (UniqueName: \"kubernetes.io/projected/34cc4b95-fb25-4302-8175-d6695afbf832-kube-api-access-bskts\") pod \"calico-kube-controllers-689464bd4f-rlk45\" (UID: \"34cc4b95-fb25-4302-8175-d6695afbf832\") " pod="calico-system/calico-kube-controllers-689464bd4f-rlk45"
Jan 13 20:34:54.555546 kubelet[3072]: I0113 20:34:54.555181 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7wr2\" (UniqueName: \"kubernetes.io/projected/5418ad94-7e2e-4821-8b2b-1361c7326bfb-kube-api-access-k7wr2\") pod \"calico-apiserver-6ccf4fbb57-gbjkq\" (UID: \"5418ad94-7e2e-4821-8b2b-1361c7326bfb\") " pod="calico-apiserver/calico-apiserver-6ccf4fbb57-gbjkq"
Jan 13 20:34:54.555546 kubelet[3072]: I0113 20:34:54.555294 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5418ad94-7e2e-4821-8b2b-1361c7326bfb-calico-apiserver-certs\") pod \"calico-apiserver-6ccf4fbb57-gbjkq\" (UID: \"5418ad94-7e2e-4821-8b2b-1361c7326bfb\") " pod="calico-apiserver/calico-apiserver-6ccf4fbb57-gbjkq"
Jan 13 20:34:54.555546 kubelet[3072]: I0113 20:34:54.555429 3072 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a019a359-d21d-4317-8d1c-bd6d76806eac-config-volume\") pod \"coredns-76f75df574-d54ws\" (UID: \"a019a359-d21d-4317-8d1c-bd6d76806eac\") " pod="kube-system/coredns-76f75df574-d54ws"
Jan 13 20:34:54.763161 containerd[1614]: time="2025-01-13T20:34:54.763106945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689464bd4f-rlk45,Uid:34cc4b95-fb25-4302-8175-d6695afbf832,Namespace:calico-system,Attempt:0,}"
Jan 13 20:34:54.851378 containerd[1614]: time="2025-01-13T20:34:54.850765936Z" level=error msg="Failed to destroy network for sandbox \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:54.851378 containerd[1614]: time="2025-01-13T20:34:54.851260735Z" level=error msg="encountered an error cleaning up failed sandbox \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:54.851693 containerd[1614]: time="2025-01-13T20:34:54.851545295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689464bd4f-rlk45,Uid:34cc4b95-fb25-4302-8175-d6695afbf832,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:54.851843 kubelet[3072]: E0113 20:34:54.851808 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:54.851909 kubelet[3072]: E0113 20:34:54.851880 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689464bd4f-rlk45"
Jan 13 20:34:54.851909 kubelet[3072]: E0113 20:34:54.851902 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689464bd4f-rlk45"
Jan 13 20:34:54.851970 kubelet[3072]: E0113 20:34:54.851957 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-689464bd4f-rlk45_calico-system(34cc4b95-fb25-4302-8175-d6695afbf832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-689464bd4f-rlk45_calico-system(34cc4b95-fb25-4302-8175-d6695afbf832)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-689464bd4f-rlk45" podUID="34cc4b95-fb25-4302-8175-d6695afbf832"
Jan 13 20:34:55.217513 containerd[1614]: time="2025-01-13T20:34:55.217392045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 13 20:34:55.219131 kubelet[3072]: I0113 20:34:55.219080 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18"
Jan 13 20:34:55.221111 containerd[1614]: time="2025-01-13T20:34:55.219982882Z" level=info msg="StopPodSandbox for \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\""
Jan 13 20:34:55.221111 containerd[1614]: time="2025-01-13T20:34:55.220151922Z" level=info msg="Ensure that sandbox 3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18 in task-service has been cleanup successfully"
Jan 13 20:34:55.223002 containerd[1614]: time="2025-01-13T20:34:55.222947959Z" level=info msg="TearDown network for sandbox \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\" successfully"
Jan 13 20:34:55.223002 containerd[1614]: time="2025-01-13T20:34:55.222994959Z" level=info msg="StopPodSandbox for \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\" returns successfully"
Jan 13 20:34:55.224729 containerd[1614]: time="2025-01-13T20:34:55.223816558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689464bd4f-rlk45,Uid:34cc4b95-fb25-4302-8175-d6695afbf832,Namespace:calico-system,Attempt:1,}"
Jan 13 20:34:55.293883 containerd[1614]: time="2025-01-13T20:34:55.293653728Z" level=error msg="Failed to destroy network for sandbox \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:55.294452 containerd[1614]: time="2025-01-13T20:34:55.294296087Z" level=error msg="encountered an error cleaning up failed sandbox \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:55.294452 containerd[1614]: time="2025-01-13T20:34:55.294388727Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689464bd4f-rlk45,Uid:34cc4b95-fb25-4302-8175-d6695afbf832,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:55.295897 kubelet[3072]: E0113 20:34:55.294797 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:55.295897 kubelet[3072]: E0113 20:34:55.294866 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689464bd4f-rlk45"
Jan 13 20:34:55.295897 kubelet[3072]: E0113 20:34:55.294888 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689464bd4f-rlk45"
Jan 13 20:34:55.296099 kubelet[3072]: E0113 20:34:55.294943 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-689464bd4f-rlk45_calico-system(34cc4b95-fb25-4302-8175-d6695afbf832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-689464bd4f-rlk45_calico-system(34cc4b95-fb25-4302-8175-d6695afbf832)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-689464bd4f-rlk45" podUID="34cc4b95-fb25-4302-8175-d6695afbf832"
Jan 13 20:34:55.658951 kubelet[3072]: E0113 20:34:55.658509 3072 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Jan 13 20:34:55.658951 kubelet[3072]: E0113 20:34:55.658624 3072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a019a359-d21d-4317-8d1c-bd6d76806eac-config-volume podName:a019a359-d21d-4317-8d1c-bd6d76806eac nodeName:}" failed. No retries permitted until 2025-01-13 20:34:56.15859632 +0000 UTC m=+34.253638685 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a019a359-d21d-4317-8d1c-bd6d76806eac-config-volume") pod "coredns-76f75df574-d54ws" (UID: "a019a359-d21d-4317-8d1c-bd6d76806eac") : failed to sync configmap cache: timed out waiting for the condition
Jan 13 20:34:55.660165 kubelet[3072]: E0113 20:34:55.660123 3072 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Jan 13 20:34:55.661382 kubelet[3072]: E0113 20:34:55.660468 3072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/70116092-cc29-4334-811e-6f8b5c36f3c0-config-volume podName:70116092-cc29-4334-811e-6f8b5c36f3c0 nodeName:}" failed. No retries permitted until 2025-01-13 20:34:56.160439078 +0000 UTC m=+34.255481443 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/70116092-cc29-4334-811e-6f8b5c36f3c0-config-volume") pod "coredns-76f75df574-tdc7v" (UID: "70116092-cc29-4334-811e-6f8b5c36f3c0") : failed to sync configmap cache: timed out waiting for the condition
Jan 13 20:34:55.673845 kubelet[3072]: E0113 20:34:55.673522 3072 projected.go:294] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 13 20:34:55.673845 kubelet[3072]: E0113 20:34:55.673561 3072 projected.go:200] Error preparing data for projected volume kube-api-access-k7wr2 for pod calico-apiserver/calico-apiserver-6ccf4fbb57-gbjkq: failed to sync configmap cache: timed out waiting for the condition
Jan 13 20:34:55.673845 kubelet[3072]: E0113 20:34:55.673628 3072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5418ad94-7e2e-4821-8b2b-1361c7326bfb-kube-api-access-k7wr2 podName:5418ad94-7e2e-4821-8b2b-1361c7326bfb nodeName:}" failed. No retries permitted until 2025-01-13 20:34:56.173607184 +0000 UTC m=+34.268649549 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k7wr2" (UniqueName: "kubernetes.io/projected/5418ad94-7e2e-4821-8b2b-1361c7326bfb-kube-api-access-k7wr2") pod "calico-apiserver-6ccf4fbb57-gbjkq" (UID: "5418ad94-7e2e-4821-8b2b-1361c7326bfb") : failed to sync configmap cache: timed out waiting for the condition
Jan 13 20:34:55.682398 kubelet[3072]: E0113 20:34:55.682032 3072 projected.go:294] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jan 13 20:34:55.682398 kubelet[3072]: E0113 20:34:55.682073 3072 projected.go:200] Error preparing data for projected volume kube-api-access-8plpp for pod calico-apiserver/calico-apiserver-6ccf4fbb57-j85nm: failed to sync configmap cache: timed out waiting for the condition
Jan 13 20:34:55.682398 kubelet[3072]: E0113 20:34:55.682135 3072 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a12675bf-0fbc-45d7-9f8f-c29f2b87c216-kube-api-access-8plpp podName:a12675bf-0fbc-45d7-9f8f-c29f2b87c216 nodeName:}" failed. No retries permitted until 2025-01-13 20:34:56.182113336 +0000 UTC m=+34.277155701 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8plpp" (UniqueName: "kubernetes.io/projected/a12675bf-0fbc-45d7-9f8f-c29f2b87c216-kube-api-access-8plpp") pod "calico-apiserver-6ccf4fbb57-j85nm" (UID: "a12675bf-0fbc-45d7-9f8f-c29f2b87c216") : failed to sync configmap cache: timed out waiting for the condition
Jan 13 20:34:55.686947 systemd[1]: run-netns-cni\x2dc4f94eb9\x2dbaaa\x2dff98\x2da1bb\x2d8480be63bb8e.mount: Deactivated successfully.
Jan 13 20:34:55.687109 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18-shm.mount: Deactivated successfully.
Jan 13 20:34:56.029562 containerd[1614]: time="2025-01-13T20:34:56.029496706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5hc2j,Uid:d79efc93-b14e-4d5a-8c70-0155fb5a684a,Namespace:calico-system,Attempt:0,}"
Jan 13 20:34:56.100486 containerd[1614]: time="2025-01-13T20:34:56.100421595Z" level=error msg="Failed to destroy network for sandbox \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:56.101545 containerd[1614]: time="2025-01-13T20:34:56.101469834Z" level=error msg="encountered an error cleaning up failed sandbox \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:56.103316 containerd[1614]: time="2025-01-13T20:34:56.101581474Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5hc2j,Uid:d79efc93-b14e-4d5a-8c70-0155fb5a684a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:56.103618 kubelet[3072]: E0113 20:34:56.101883 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:56.103618 kubelet[3072]: E0113 20:34:56.101958 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5hc2j"
Jan 13 20:34:56.103618 kubelet[3072]: E0113 20:34:56.101996 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5hc2j"
Jan 13 20:34:56.103727 kubelet[3072]: E0113 20:34:56.102075 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5hc2j_calico-system(d79efc93-b14e-4d5a-8c70-0155fb5a684a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5hc2j_calico-system(d79efc93-b14e-4d5a-8c70-0155fb5a684a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5hc2j" podUID="d79efc93-b14e-4d5a-8c70-0155fb5a684a"
Jan 13 20:34:56.106164 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8-shm.mount: Deactivated successfully.
Jan 13 20:34:56.224006 kubelet[3072]: I0113 20:34:56.223972 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d"
Jan 13 20:34:56.224848 containerd[1614]: time="2025-01-13T20:34:56.224756711Z" level=info msg="StopPodSandbox for \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\""
Jan 13 20:34:56.228533 containerd[1614]: time="2025-01-13T20:34:56.226418029Z" level=info msg="Ensure that sandbox 7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d in task-service has been cleanup successfully"
Jan 13 20:34:56.228533 containerd[1614]: time="2025-01-13T20:34:56.227556108Z" level=info msg="TearDown network for sandbox \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\" successfully"
Jan 13 20:34:56.228533 containerd[1614]: time="2025-01-13T20:34:56.227584668Z" level=info msg="StopPodSandbox for \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\" returns successfully"
Jan 13 20:34:56.231157 containerd[1614]: time="2025-01-13T20:34:56.230494945Z" level=info msg="StopPodSandbox for \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\""
Jan 13 20:34:56.231157 containerd[1614]: time="2025-01-13T20:34:56.230657065Z" level=info msg="TearDown network for sandbox \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\" successfully"
Jan 13 20:34:56.231157 containerd[1614]: time="2025-01-13T20:34:56.230669345Z" level=info msg="StopPodSandbox for \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\" returns successfully"
Jan 13 20:34:56.232474 systemd[1]: run-netns-cni\x2df13950fe\x2d8f6c\x2de3c6\x2d3ab3\x2d9c2cbc1b55fb.mount: Deactivated successfully.
Jan 13 20:34:56.233080 containerd[1614]: time="2025-01-13T20:34:56.232941143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689464bd4f-rlk45,Uid:34cc4b95-fb25-4302-8175-d6695afbf832,Namespace:calico-system,Attempt:2,}"
Jan 13 20:34:56.237566 kubelet[3072]: I0113 20:34:56.237408 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8"
Jan 13 20:34:56.238682 containerd[1614]: time="2025-01-13T20:34:56.238434017Z" level=info msg="StopPodSandbox for \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\""
Jan 13 20:34:56.240548 containerd[1614]: time="2025-01-13T20:34:56.240055696Z" level=info msg="Ensure that sandbox 86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8 in task-service has been cleanup successfully"
Jan 13 20:34:56.241107 containerd[1614]: time="2025-01-13T20:34:56.240934655Z" level=info msg="TearDown network for sandbox \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\" successfully"
Jan 13 20:34:56.241107 containerd[1614]: time="2025-01-13T20:34:56.240958935Z" level=info msg="StopPodSandbox for \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\" returns successfully"
Jan 13 20:34:56.242512 containerd[1614]: time="2025-01-13T20:34:56.242414333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5hc2j,Uid:d79efc93-b14e-4d5a-8c70-0155fb5a684a,Namespace:calico-system,Attempt:1,}"
Jan 13 20:34:56.247779 systemd[1]: run-netns-cni\x2d8eb1dab4\x2d0bed\x2dccf3\x2d4782\x2df6f4ceaac6e7.mount: Deactivated successfully.
Jan 13 20:34:56.271512 containerd[1614]: time="2025-01-13T20:34:56.271463624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d54ws,Uid:a019a359-d21d-4317-8d1c-bd6d76806eac,Namespace:kube-system,Attempt:0,}"
Jan 13 20:34:56.274234 containerd[1614]: time="2025-01-13T20:34:56.272710863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tdc7v,Uid:70116092-cc29-4334-811e-6f8b5c36f3c0,Namespace:kube-system,Attempt:0,}"
Jan 13 20:34:56.365868 containerd[1614]: time="2025-01-13T20:34:56.365746570Z" level=error msg="Failed to destroy network for sandbox \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:56.366875 containerd[1614]: time="2025-01-13T20:34:56.366804529Z" level=error msg="encountered an error cleaning up failed sandbox \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:56.367471 containerd[1614]: time="2025-01-13T20:34:56.367274329Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5hc2j,Uid:d79efc93-b14e-4d5a-8c70-0155fb5a684a,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:56.367884 kubelet[3072]: E0113 20:34:56.367860 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:56.368369 kubelet[3072]: E0113 20:34:56.368015 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5hc2j"
Jan 13 20:34:56.368369 kubelet[3072]: E0113 20:34:56.368057 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5hc2j"
Jan 13 20:34:56.368520 kubelet[3072]: E0113 20:34:56.368500 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5hc2j_calico-system(d79efc93-b14e-4d5a-8c70-0155fb5a684a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5hc2j_calico-system(d79efc93-b14e-4d5a-8c70-0155fb5a684a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5hc2j" podUID="d79efc93-b14e-4d5a-8c70-0155fb5a684a"
Jan 13 20:34:56.394411 containerd[1614]: time="2025-01-13T20:34:56.394335822Z" level=error msg="Failed to destroy network for sandbox \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:56.394946 containerd[1614]: time="2025-01-13T20:34:56.394904981Z" level=error msg="encountered an error cleaning up failed sandbox \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:56.395102 containerd[1614]: time="2025-01-13T20:34:56.395073741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689464bd4f-rlk45,Uid:34cc4b95-fb25-4302-8175-d6695afbf832,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:56.395738 kubelet[3072]: E0113 20:34:56.395487 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:56.395738 kubelet[3072]: E0113 20:34:56.395545 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689464bd4f-rlk45"
Jan 13 20:34:56.395738 kubelet[3072]: E0113 20:34:56.395566 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689464bd4f-rlk45"
Jan 13 20:34:56.395867 kubelet[3072]: E0113 20:34:56.395622 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-689464bd4f-rlk45_calico-system(34cc4b95-fb25-4302-8175-d6695afbf832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-689464bd4f-rlk45_calico-system(34cc4b95-fb25-4302-8175-d6695afbf832)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-689464bd4f-rlk45" podUID="34cc4b95-fb25-4302-8175-d6695afbf832"
Jan 13 20:34:56.420756 containerd[1614]: time="2025-01-13T20:34:56.420556876Z" level=error msg="Failed to destroy network for sandbox \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:56.421570 containerd[1614]: time="2025-01-13T20:34:56.421304515Z" level=error msg="encountered an error cleaning up failed sandbox \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:56.421735 containerd[1614]: time="2025-01-13T20:34:56.421456155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d54ws,Uid:a019a359-d21d-4317-8d1c-bd6d76806eac,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:56.421889 containerd[1614]: time="2025-01-13T20:34:56.421546075Z" level=error msg="Failed to destroy network for sandbox \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:56.422528 kubelet[3072]: E0113 20:34:56.422102 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:34:56.422528 kubelet[3072]: E0113 20:34:56.422162 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-d54ws"
Jan 13 20:34:56.422528 kubelet[3072]: E0113 20:34:56.422181 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-d54ws"
Jan 13 20:34:56.422718 kubelet[3072]: E0113 20:34:56.422258 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-d54ws_kube-system(a019a359-d21d-4317-8d1c-bd6d76806eac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-d54ws_kube-system(a019a359-d21d-4317-8d1c-bd6d76806eac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-d54ws" podUID="a019a359-d21d-4317-8d1c-bd6d76806eac"
Jan 13 20:34:56.423198 containerd[1614]: time="2025-01-13T20:34:56.423084553Z" level=error msg="encountered an error cleaning
up failed sandbox \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:56.423198 containerd[1614]: time="2025-01-13T20:34:56.423166233Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tdc7v,Uid:70116092-cc29-4334-811e-6f8b5c36f3c0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:56.423500 kubelet[3072]: E0113 20:34:56.423466 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:56.423545 kubelet[3072]: E0113 20:34:56.423517 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tdc7v" Jan 13 20:34:56.423545 kubelet[3072]: E0113 20:34:56.423539 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tdc7v" Jan 13 20:34:56.423854 kubelet[3072]: E0113 20:34:56.423595 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-tdc7v_kube-system(70116092-cc29-4334-811e-6f8b5c36f3c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-tdc7v_kube-system(70116092-cc29-4334-811e-6f8b5c36f3c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-tdc7v" podUID="70116092-cc29-4334-811e-6f8b5c36f3c0" Jan 13 20:34:56.572744 containerd[1614]: time="2025-01-13T20:34:56.572289724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-gbjkq,Uid:5418ad94-7e2e-4821-8b2b-1361c7326bfb,Namespace:calico-apiserver,Attempt:0,}" Jan 13 20:34:56.575092 containerd[1614]: time="2025-01-13T20:34:56.574829882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-j85nm,Uid:a12675bf-0fbc-45d7-9f8f-c29f2b87c216,Namespace:calico-apiserver,Attempt:0,}" Jan 13 20:34:56.679103 containerd[1614]: time="2025-01-13T20:34:56.678937418Z" level=error msg="Failed to destroy network for sandbox \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 
20:34:56.680835 containerd[1614]: time="2025-01-13T20:34:56.680795536Z" level=error msg="encountered an error cleaning up failed sandbox \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:56.684295 containerd[1614]: time="2025-01-13T20:34:56.683427693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-gbjkq,Uid:5418ad94-7e2e-4821-8b2b-1361c7326bfb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:56.685729 kubelet[3072]: E0113 20:34:56.684502 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:56.685729 kubelet[3072]: E0113 20:34:56.684563 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-gbjkq" Jan 13 20:34:56.685729 kubelet[3072]: E0113 
20:34:56.684584 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-gbjkq" Jan 13 20:34:56.685835 kubelet[3072]: E0113 20:34:56.684638 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ccf4fbb57-gbjkq_calico-apiserver(5418ad94-7e2e-4821-8b2b-1361c7326bfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ccf4fbb57-gbjkq_calico-apiserver(5418ad94-7e2e-4821-8b2b-1361c7326bfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-gbjkq" podUID="5418ad94-7e2e-4821-8b2b-1361c7326bfb" Jan 13 20:34:56.722255 containerd[1614]: time="2025-01-13T20:34:56.721318575Z" level=error msg="Failed to destroy network for sandbox \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:56.726119 containerd[1614]: time="2025-01-13T20:34:56.723528853Z" level=error msg="encountered an error cleaning up failed sandbox \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:56.726119 containerd[1614]: time="2025-01-13T20:34:56.723625773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-j85nm,Uid:a12675bf-0fbc-45d7-9f8f-c29f2b87c216,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:56.726671 kubelet[3072]: E0113 20:34:56.726074 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:56.726671 kubelet[3072]: E0113 20:34:56.726793 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-j85nm" Jan 13 20:34:56.724342 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166-shm.mount: Deactivated successfully. 
Jan 13 20:34:56.730913 kubelet[3072]: E0113 20:34:56.727556 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-j85nm" Jan 13 20:34:56.730913 kubelet[3072]: E0113 20:34:56.728093 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ccf4fbb57-j85nm_calico-apiserver(a12675bf-0fbc-45d7-9f8f-c29f2b87c216)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ccf4fbb57-j85nm_calico-apiserver(a12675bf-0fbc-45d7-9f8f-c29f2b87c216)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-j85nm" podUID="a12675bf-0fbc-45d7-9f8f-c29f2b87c216" Jan 13 20:34:57.243274 kubelet[3072]: I0113 20:34:57.242329 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166" Jan 13 20:34:57.244484 containerd[1614]: time="2025-01-13T20:34:57.243953456Z" level=info msg="StopPodSandbox for \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\"" Jan 13 20:34:57.244484 containerd[1614]: time="2025-01-13T20:34:57.244155536Z" level=info msg="Ensure that sandbox b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166 in task-service has been cleanup successfully" Jan 13 20:34:57.247518 
containerd[1614]: time="2025-01-13T20:34:57.247472733Z" level=info msg="TearDown network for sandbox \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\" successfully" Jan 13 20:34:57.247518 containerd[1614]: time="2025-01-13T20:34:57.247512573Z" level=info msg="StopPodSandbox for \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\" returns successfully" Jan 13 20:34:57.248640 systemd[1]: run-netns-cni\x2dbec7a1b3\x2d3f27\x2d0d05\x2d50fa\x2d7b9828694ee4.mount: Deactivated successfully. Jan 13 20:34:57.252037 containerd[1614]: time="2025-01-13T20:34:57.251680529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-j85nm,Uid:a12675bf-0fbc-45d7-9f8f-c29f2b87c216,Namespace:calico-apiserver,Attempt:1,}" Jan 13 20:34:57.252721 kubelet[3072]: I0113 20:34:57.252666 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62" Jan 13 20:34:57.256734 containerd[1614]: time="2025-01-13T20:34:57.256689964Z" level=info msg="StopPodSandbox for \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\"" Jan 13 20:34:57.256943 containerd[1614]: time="2025-01-13T20:34:57.256920203Z" level=info msg="Ensure that sandbox 3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62 in task-service has been cleanup successfully" Jan 13 20:34:57.257481 containerd[1614]: time="2025-01-13T20:34:57.257435283Z" level=info msg="TearDown network for sandbox \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\" successfully" Jan 13 20:34:57.257481 containerd[1614]: time="2025-01-13T20:34:57.257457923Z" level=info msg="StopPodSandbox for \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\" returns successfully" Jan 13 20:34:57.260954 containerd[1614]: time="2025-01-13T20:34:57.260701000Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-gbjkq,Uid:5418ad94-7e2e-4821-8b2b-1361c7326bfb,Namespace:calico-apiserver,Attempt:1,}" Jan 13 20:34:57.261865 systemd[1]: run-netns-cni\x2d453d3ddd\x2db5ad\x2dbe48\x2d2b7d\x2d699136818e7d.mount: Deactivated successfully. Jan 13 20:34:57.263991 kubelet[3072]: I0113 20:34:57.263499 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2" Jan 13 20:34:57.267393 containerd[1614]: time="2025-01-13T20:34:57.267272433Z" level=info msg="StopPodSandbox for \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\"" Jan 13 20:34:57.269428 containerd[1614]: time="2025-01-13T20:34:57.269393031Z" level=info msg="Ensure that sandbox b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2 in task-service has been cleanup successfully" Jan 13 20:34:57.271733 kubelet[3072]: I0113 20:34:57.271691 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c" Jan 13 20:34:57.271897 containerd[1614]: time="2025-01-13T20:34:57.271824029Z" level=info msg="TearDown network for sandbox \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\" successfully" Jan 13 20:34:57.271897 containerd[1614]: time="2025-01-13T20:34:57.271852829Z" level=info msg="StopPodSandbox for \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\" returns successfully" Jan 13 20:34:57.274918 containerd[1614]: time="2025-01-13T20:34:57.274848786Z" level=info msg="StopPodSandbox for \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\"" Jan 13 20:34:57.275126 containerd[1614]: time="2025-01-13T20:34:57.275104625Z" level=info msg="Ensure that sandbox 76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c in task-service has been cleanup successfully" Jan 13 20:34:57.275514 containerd[1614]: 
time="2025-01-13T20:34:57.275488865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d54ws,Uid:a019a359-d21d-4317-8d1c-bd6d76806eac,Namespace:kube-system,Attempt:1,}" Jan 13 20:34:57.280717 containerd[1614]: time="2025-01-13T20:34:57.280437980Z" level=info msg="TearDown network for sandbox \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\" successfully" Jan 13 20:34:57.280717 containerd[1614]: time="2025-01-13T20:34:57.280508260Z" level=info msg="StopPodSandbox for \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\" returns successfully" Jan 13 20:34:57.282855 kubelet[3072]: I0113 20:34:57.282825 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9" Jan 13 20:34:57.283834 containerd[1614]: time="2025-01-13T20:34:57.283792097Z" level=info msg="StopPodSandbox for \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\"" Jan 13 20:34:57.283927 containerd[1614]: time="2025-01-13T20:34:57.283909657Z" level=info msg="TearDown network for sandbox \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\" successfully" Jan 13 20:34:57.283927 containerd[1614]: time="2025-01-13T20:34:57.283923257Z" level=info msg="StopPodSandbox for \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\" returns successfully" Jan 13 20:34:57.286081 containerd[1614]: time="2025-01-13T20:34:57.286044935Z" level=info msg="StopPodSandbox for \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\"" Jan 13 20:34:57.289083 containerd[1614]: time="2025-01-13T20:34:57.289033892Z" level=info msg="Ensure that sandbox 0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9 in task-service has been cleanup successfully" Jan 13 20:34:57.290535 containerd[1614]: time="2025-01-13T20:34:57.290426930Z" level=info msg="TearDown network for sandbox 
\"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\" successfully" Jan 13 20:34:57.290535 containerd[1614]: time="2025-01-13T20:34:57.290462850Z" level=info msg="StopPodSandbox for \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\" returns successfully" Jan 13 20:34:57.291479 containerd[1614]: time="2025-01-13T20:34:57.291104370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5hc2j,Uid:d79efc93-b14e-4d5a-8c70-0155fb5a684a,Namespace:calico-system,Attempt:2,}" Jan 13 20:34:57.292539 containerd[1614]: time="2025-01-13T20:34:57.292505528Z" level=info msg="StopPodSandbox for \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\"" Jan 13 20:34:57.292636 containerd[1614]: time="2025-01-13T20:34:57.292620048Z" level=info msg="TearDown network for sandbox \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\" successfully" Jan 13 20:34:57.292636 containerd[1614]: time="2025-01-13T20:34:57.292632888Z" level=info msg="StopPodSandbox for \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\" returns successfully" Jan 13 20:34:57.293443 containerd[1614]: time="2025-01-13T20:34:57.293416367Z" level=info msg="StopPodSandbox for \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\"" Jan 13 20:34:57.293524 containerd[1614]: time="2025-01-13T20:34:57.293514287Z" level=info msg="TearDown network for sandbox \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\" successfully" Jan 13 20:34:57.293866 kubelet[3072]: I0113 20:34:57.293769 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91" Jan 13 20:34:57.296685 containerd[1614]: time="2025-01-13T20:34:57.296302244Z" level=info msg="StopPodSandbox for \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\" returns successfully" Jan 13 20:34:57.296685 containerd[1614]: 
time="2025-01-13T20:34:57.295899525Z" level=info msg="StopPodSandbox for \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\"" Jan 13 20:34:57.296685 containerd[1614]: time="2025-01-13T20:34:57.296562924Z" level=info msg="Ensure that sandbox de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91 in task-service has been cleanup successfully" Jan 13 20:34:57.297786 containerd[1614]: time="2025-01-13T20:34:57.297753243Z" level=info msg="TearDown network for sandbox \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\" successfully" Jan 13 20:34:57.299058 containerd[1614]: time="2025-01-13T20:34:57.299030802Z" level=info msg="StopPodSandbox for \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\" returns successfully" Jan 13 20:34:57.299274 containerd[1614]: time="2025-01-13T20:34:57.298382482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689464bd4f-rlk45,Uid:34cc4b95-fb25-4302-8175-d6695afbf832,Namespace:calico-system,Attempt:3,}" Jan 13 20:34:57.300943 containerd[1614]: time="2025-01-13T20:34:57.300858800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tdc7v,Uid:70116092-cc29-4334-811e-6f8b5c36f3c0,Namespace:kube-system,Attempt:1,}" Jan 13 20:34:57.413970 containerd[1614]: time="2025-01-13T20:34:57.411766370Z" level=error msg="Failed to destroy network for sandbox \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.415003 containerd[1614]: time="2025-01-13T20:34:57.414771967Z" level=error msg="encountered an error cleaning up failed sandbox \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.415003 containerd[1614]: time="2025-01-13T20:34:57.414854447Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-j85nm,Uid:a12675bf-0fbc-45d7-9f8f-c29f2b87c216,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.415391 kubelet[3072]: E0113 20:34:57.415114 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.415391 kubelet[3072]: E0113 20:34:57.415168 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-j85nm" Jan 13 20:34:57.415391 kubelet[3072]: E0113 20:34:57.415187 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-j85nm" Jan 13 20:34:57.415518 kubelet[3072]: E0113 20:34:57.415260 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ccf4fbb57-j85nm_calico-apiserver(a12675bf-0fbc-45d7-9f8f-c29f2b87c216)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ccf4fbb57-j85nm_calico-apiserver(a12675bf-0fbc-45d7-9f8f-c29f2b87c216)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-j85nm" podUID="a12675bf-0fbc-45d7-9f8f-c29f2b87c216" Jan 13 20:34:57.550882 containerd[1614]: time="2025-01-13T20:34:57.550716633Z" level=error msg="Failed to destroy network for sandbox \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.555096 containerd[1614]: time="2025-01-13T20:34:57.554944829Z" level=error msg="encountered an error cleaning up failed sandbox \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.555482 containerd[1614]: time="2025-01-13T20:34:57.555444868Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-gbjkq,Uid:5418ad94-7e2e-4821-8b2b-1361c7326bfb,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.556386 kubelet[3072]: E0113 20:34:57.556355 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.556536 kubelet[3072]: E0113 20:34:57.556415 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-gbjkq" Jan 13 20:34:57.556681 kubelet[3072]: E0113 20:34:57.556600 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-gbjkq" Jan 13 20:34:57.556681 kubelet[3072]: E0113 20:34:57.556675 3072 pod_workers.go:1298] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ccf4fbb57-gbjkq_calico-apiserver(5418ad94-7e2e-4821-8b2b-1361c7326bfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ccf4fbb57-gbjkq_calico-apiserver(5418ad94-7e2e-4821-8b2b-1361c7326bfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-gbjkq" podUID="5418ad94-7e2e-4821-8b2b-1361c7326bfb" Jan 13 20:34:57.569968 containerd[1614]: time="2025-01-13T20:34:57.569822694Z" level=error msg="Failed to destroy network for sandbox \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.571258 containerd[1614]: time="2025-01-13T20:34:57.570903653Z" level=error msg="encountered an error cleaning up failed sandbox \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.571258 containerd[1614]: time="2025-01-13T20:34:57.570982453Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5hc2j,Uid:d79efc93-b14e-4d5a-8c70-0155fb5a684a,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.571477 kubelet[3072]: E0113 20:34:57.571442 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.571524 kubelet[3072]: E0113 20:34:57.571505 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5hc2j" Jan 13 20:34:57.571547 kubelet[3072]: E0113 20:34:57.571525 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5hc2j" Jan 13 20:34:57.571787 kubelet[3072]: E0113 20:34:57.571579 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5hc2j_calico-system(d79efc93-b14e-4d5a-8c70-0155fb5a684a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5hc2j_calico-system(d79efc93-b14e-4d5a-8c70-0155fb5a684a)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5hc2j" podUID="d79efc93-b14e-4d5a-8c70-0155fb5a684a" Jan 13 20:34:57.579183 containerd[1614]: time="2025-01-13T20:34:57.578961005Z" level=error msg="Failed to destroy network for sandbox \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.582799 containerd[1614]: time="2025-01-13T20:34:57.582441642Z" level=error msg="encountered an error cleaning up failed sandbox \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.583718 containerd[1614]: time="2025-01-13T20:34:57.583429441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d54ws,Uid:a019a359-d21d-4317-8d1c-bd6d76806eac,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.584094 kubelet[3072]: E0113 20:34:57.584056 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.584151 kubelet[3072]: E0113 20:34:57.584113 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-d54ws" Jan 13 20:34:57.584151 kubelet[3072]: E0113 20:34:57.584135 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-d54ws" Jan 13 20:34:57.584278 kubelet[3072]: E0113 20:34:57.584190 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-d54ws_kube-system(a019a359-d21d-4317-8d1c-bd6d76806eac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-d54ws_kube-system(a019a359-d21d-4317-8d1c-bd6d76806eac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-d54ws" 
podUID="a019a359-d21d-4317-8d1c-bd6d76806eac" Jan 13 20:34:57.588308 containerd[1614]: time="2025-01-13T20:34:57.588263236Z" level=error msg="Failed to destroy network for sandbox \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.588645 containerd[1614]: time="2025-01-13T20:34:57.588616836Z" level=error msg="encountered an error cleaning up failed sandbox \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.588698 containerd[1614]: time="2025-01-13T20:34:57.588679516Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tdc7v,Uid:70116092-cc29-4334-811e-6f8b5c36f3c0,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.588958 kubelet[3072]: E0113 20:34:57.588935 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.589019 kubelet[3072]: E0113 20:34:57.588995 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tdc7v" Jan 13 20:34:57.589051 kubelet[3072]: E0113 20:34:57.589020 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tdc7v" Jan 13 20:34:57.589091 kubelet[3072]: E0113 20:34:57.589073 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-tdc7v_kube-system(70116092-cc29-4334-811e-6f8b5c36f3c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-tdc7v_kube-system(70116092-cc29-4334-811e-6f8b5c36f3c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-tdc7v" podUID="70116092-cc29-4334-811e-6f8b5c36f3c0" Jan 13 20:34:57.593837 containerd[1614]: time="2025-01-13T20:34:57.593734991Z" level=error msg="Failed to destroy network for sandbox \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.594701 containerd[1614]: time="2025-01-13T20:34:57.594452310Z" level=error msg="encountered an error cleaning up failed sandbox \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.594701 containerd[1614]: time="2025-01-13T20:34:57.594533310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689464bd4f-rlk45,Uid:34cc4b95-fb25-4302-8175-d6695afbf832,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.595368 kubelet[3072]: E0113 20:34:57.594978 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:57.595368 kubelet[3072]: E0113 20:34:57.595028 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-689464bd4f-rlk45" Jan 13 20:34:57.595368 kubelet[3072]: E0113 20:34:57.595091 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689464bd4f-rlk45" Jan 13 20:34:57.595504 kubelet[3072]: E0113 20:34:57.595153 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-689464bd4f-rlk45_calico-system(34cc4b95-fb25-4302-8175-d6695afbf832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-689464bd4f-rlk45_calico-system(34cc4b95-fb25-4302-8175-d6695afbf832)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-689464bd4f-rlk45" podUID="34cc4b95-fb25-4302-8175-d6695afbf832" Jan 13 20:34:57.688849 systemd[1]: run-netns-cni\x2da87ddc91\x2dbeeb\x2d7e96\x2d557f\x2dc8e8574f96c7.mount: Deactivated successfully. Jan 13 20:34:57.689267 systemd[1]: run-netns-cni\x2d1bd8ed96\x2d4bf4\x2def84\x2daa10\x2db821aad535d3.mount: Deactivated successfully. Jan 13 20:34:57.689444 systemd[1]: run-netns-cni\x2d7c2b6dea\x2d306f\x2d5a7f\x2d81f3\x2d5ccfec2553dd.mount: Deactivated successfully. Jan 13 20:34:57.689535 systemd[1]: run-netns-cni\x2d519df83f\x2d0d24\x2d0c09\x2d1b17\x2d98b6c8725f47.mount: Deactivated successfully. 
Jan 13 20:34:58.300558 kubelet[3072]: I0113 20:34:58.300410 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353" Jan 13 20:34:58.301565 containerd[1614]: time="2025-01-13T20:34:58.301517134Z" level=info msg="StopPodSandbox for \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\"" Jan 13 20:34:58.302861 containerd[1614]: time="2025-01-13T20:34:58.301712614Z" level=info msg="Ensure that sandbox 8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353 in task-service has been cleanup successfully" Jan 13 20:34:58.307010 containerd[1614]: time="2025-01-13T20:34:58.306888529Z" level=info msg="TearDown network for sandbox \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\" successfully" Jan 13 20:34:58.307010 containerd[1614]: time="2025-01-13T20:34:58.306959329Z" level=info msg="StopPodSandbox for \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\" returns successfully" Jan 13 20:34:58.307534 containerd[1614]: time="2025-01-13T20:34:58.307509768Z" level=info msg="StopPodSandbox for \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\"" Jan 13 20:34:58.308317 containerd[1614]: time="2025-01-13T20:34:58.308021408Z" level=info msg="TearDown network for sandbox \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\" successfully" Jan 13 20:34:58.308317 containerd[1614]: time="2025-01-13T20:34:58.308050728Z" level=info msg="StopPodSandbox for \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\" returns successfully" Jan 13 20:34:58.308225 systemd[1]: run-netns-cni\x2dd20500e9\x2d3f89\x2d0c06\x2d732f\x2d24d0f6001926.mount: Deactivated successfully. 
Jan 13 20:34:58.309669 kubelet[3072]: I0113 20:34:58.308793 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac" Jan 13 20:34:58.311267 containerd[1614]: time="2025-01-13T20:34:58.311134685Z" level=info msg="StopPodSandbox for \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\"" Jan 13 20:34:58.311267 containerd[1614]: time="2025-01-13T20:34:58.311199485Z" level=info msg="StopPodSandbox for \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\"" Jan 13 20:34:58.311659 containerd[1614]: time="2025-01-13T20:34:58.311429804Z" level=info msg="Ensure that sandbox a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac in task-service has been cleanup successfully" Jan 13 20:34:58.313302 containerd[1614]: time="2025-01-13T20:34:58.311448164Z" level=info msg="TearDown network for sandbox \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\" successfully" Jan 13 20:34:58.313302 containerd[1614]: time="2025-01-13T20:34:58.312916283Z" level=info msg="StopPodSandbox for \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\" returns successfully" Jan 13 20:34:58.313302 containerd[1614]: time="2025-01-13T20:34:58.313228683Z" level=info msg="TearDown network for sandbox \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\" successfully" Jan 13 20:34:58.313302 containerd[1614]: time="2025-01-13T20:34:58.313246203Z" level=info msg="StopPodSandbox for \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\" returns successfully" Jan 13 20:34:58.315242 kubelet[3072]: I0113 20:34:58.315196 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7" Jan 13 20:34:58.315930 containerd[1614]: time="2025-01-13T20:34:58.314164562Z" level=info msg="StopPodSandbox for 
\"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\"" Jan 13 20:34:58.316021 containerd[1614]: time="2025-01-13T20:34:58.315905400Z" level=info msg="TearDown network for sandbox \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\" successfully" Jan 13 20:34:58.316159 containerd[1614]: time="2025-01-13T20:34:58.316083560Z" level=info msg="StopPodSandbox for \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\" returns successfully" Jan 13 20:34:58.316159 containerd[1614]: time="2025-01-13T20:34:58.314273362Z" level=info msg="StopPodSandbox for \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\"" Jan 13 20:34:58.316406 containerd[1614]: time="2025-01-13T20:34:58.316329840Z" level=info msg="TearDown network for sandbox \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\" successfully" Jan 13 20:34:58.316567 containerd[1614]: time="2025-01-13T20:34:58.316456960Z" level=info msg="StopPodSandbox for \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\" returns successfully" Jan 13 20:34:58.317644 containerd[1614]: time="2025-01-13T20:34:58.317570158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tdc7v,Uid:70116092-cc29-4334-811e-6f8b5c36f3c0,Namespace:kube-system,Attempt:2,}" Jan 13 20:34:58.318189 systemd[1]: run-netns-cni\x2d4f29754c\x2d8dcf\x2dd102\x2dd845\x2ddc82957b405d.mount: Deactivated successfully. 
Jan 13 20:34:58.319625 containerd[1614]: time="2025-01-13T20:34:58.319476277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689464bd4f-rlk45,Uid:34cc4b95-fb25-4302-8175-d6695afbf832,Namespace:calico-system,Attempt:4,}" Jan 13 20:34:58.325296 containerd[1614]: time="2025-01-13T20:34:58.325015751Z" level=info msg="StopPodSandbox for \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\"" Jan 13 20:34:58.327231 containerd[1614]: time="2025-01-13T20:34:58.326158110Z" level=info msg="Ensure that sandbox 654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7 in task-service has been cleanup successfully" Jan 13 20:34:58.327866 containerd[1614]: time="2025-01-13T20:34:58.327740628Z" level=info msg="TearDown network for sandbox \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\" successfully" Jan 13 20:34:58.328129 kubelet[3072]: I0113 20:34:58.328002 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117" Jan 13 20:34:58.331579 containerd[1614]: time="2025-01-13T20:34:58.331331945Z" level=info msg="StopPodSandbox for \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\" returns successfully" Jan 13 20:34:58.333258 containerd[1614]: time="2025-01-13T20:34:58.333230343Z" level=info msg="StopPodSandbox for \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\"" Jan 13 20:34:58.333754 containerd[1614]: time="2025-01-13T20:34:58.333498383Z" level=info msg="TearDown network for sandbox \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\" successfully" Jan 13 20:34:58.333754 containerd[1614]: time="2025-01-13T20:34:58.333517583Z" level=info msg="StopPodSandbox for \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\" returns successfully" Jan 13 20:34:58.334083 containerd[1614]: time="2025-01-13T20:34:58.333984982Z" level=info 
msg="StopPodSandbox for \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\"" Jan 13 20:34:58.334417 containerd[1614]: time="2025-01-13T20:34:58.334263822Z" level=info msg="Ensure that sandbox 496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117 in task-service has been cleanup successfully" Jan 13 20:34:58.334515 containerd[1614]: time="2025-01-13T20:34:58.334498982Z" level=info msg="TearDown network for sandbox \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\" successfully" Jan 13 20:34:58.334567 containerd[1614]: time="2025-01-13T20:34:58.334555462Z" level=info msg="StopPodSandbox for \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\" returns successfully" Jan 13 20:34:58.336279 containerd[1614]: time="2025-01-13T20:34:58.336252020Z" level=info msg="StopPodSandbox for \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\"" Jan 13 20:34:58.337135 systemd[1]: run-netns-cni\x2d46959034\x2d8dc7\x2d57c0\x2d7a90\x2d1b6261176770.mount: Deactivated successfully. 
Jan 13 20:34:58.337741 containerd[1614]: time="2025-01-13T20:34:58.337596059Z" level=info msg="StopPodSandbox for \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\"" Jan 13 20:34:58.337964 containerd[1614]: time="2025-01-13T20:34:58.337837979Z" level=info msg="TearDown network for sandbox \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\" successfully" Jan 13 20:34:58.337964 containerd[1614]: time="2025-01-13T20:34:58.337854259Z" level=info msg="StopPodSandbox for \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\" returns successfully" Jan 13 20:34:58.339552 containerd[1614]: time="2025-01-13T20:34:58.337948938Z" level=info msg="TearDown network for sandbox \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\" successfully" Jan 13 20:34:58.339552 containerd[1614]: time="2025-01-13T20:34:58.339467697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-j85nm,Uid:a12675bf-0fbc-45d7-9f8f-c29f2b87c216,Namespace:calico-apiserver,Attempt:2,}" Jan 13 20:34:58.339955 containerd[1614]: time="2025-01-13T20:34:58.339482097Z" level=info msg="StopPodSandbox for \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\" returns successfully" Jan 13 20:34:58.340176 containerd[1614]: time="2025-01-13T20:34:58.340149616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5hc2j,Uid:d79efc93-b14e-4d5a-8c70-0155fb5a684a,Namespace:calico-system,Attempt:3,}" Jan 13 20:34:58.341314 kubelet[3072]: I0113 20:34:58.341090 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c" Jan 13 20:34:58.344423 containerd[1614]: time="2025-01-13T20:34:58.344390172Z" level=info msg="StopPodSandbox for \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\"" Jan 13 20:34:58.344878 systemd[1]: 
run-netns-cni\x2d274fc303\x2de68d\x2d5695\x2d7609\x2d5f6745327af1.mount: Deactivated successfully. Jan 13 20:34:58.346529 containerd[1614]: time="2025-01-13T20:34:58.346375010Z" level=info msg="Ensure that sandbox 06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c in task-service has been cleanup successfully" Jan 13 20:34:58.348474 containerd[1614]: time="2025-01-13T20:34:58.348445408Z" level=info msg="TearDown network for sandbox \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\" successfully" Jan 13 20:34:58.348654 containerd[1614]: time="2025-01-13T20:34:58.348544288Z" level=info msg="StopPodSandbox for \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\" returns successfully" Jan 13 20:34:58.350641 containerd[1614]: time="2025-01-13T20:34:58.350523566Z" level=info msg="StopPodSandbox for \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\"" Jan 13 20:34:58.350832 kubelet[3072]: I0113 20:34:58.350745 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934" Jan 13 20:34:58.351053 containerd[1614]: time="2025-01-13T20:34:58.350891446Z" level=info msg="TearDown network for sandbox \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\" successfully" Jan 13 20:34:58.351053 containerd[1614]: time="2025-01-13T20:34:58.350907926Z" level=info msg="StopPodSandbox for \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\" returns successfully" Jan 13 20:34:58.352701 containerd[1614]: time="2025-01-13T20:34:58.351529645Z" level=info msg="StopPodSandbox for \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\"" Jan 13 20:34:58.352701 containerd[1614]: time="2025-01-13T20:34:58.351682045Z" level=info msg="Ensure that sandbox 56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934 in task-service has been cleanup successfully" Jan 13 20:34:58.355070 
containerd[1614]: time="2025-01-13T20:34:58.353868003Z" level=info msg="TearDown network for sandbox \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\" successfully" Jan 13 20:34:58.355070 containerd[1614]: time="2025-01-13T20:34:58.353901643Z" level=info msg="StopPodSandbox for \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\" returns successfully" Jan 13 20:34:58.358388 containerd[1614]: time="2025-01-13T20:34:58.358182439Z" level=info msg="StopPodSandbox for \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\"" Jan 13 20:34:58.361190 containerd[1614]: time="2025-01-13T20:34:58.359708957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-gbjkq,Uid:5418ad94-7e2e-4821-8b2b-1361c7326bfb,Namespace:calico-apiserver,Attempt:2,}" Jan 13 20:34:58.361739 containerd[1614]: time="2025-01-13T20:34:58.361112196Z" level=info msg="TearDown network for sandbox \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\" successfully" Jan 13 20:34:58.362129 containerd[1614]: time="2025-01-13T20:34:58.361980795Z" level=info msg="StopPodSandbox for \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\" returns successfully" Jan 13 20:34:58.364913 containerd[1614]: time="2025-01-13T20:34:58.363950913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d54ws,Uid:a019a359-d21d-4317-8d1c-bd6d76806eac,Namespace:kube-system,Attempt:2,}" Jan 13 20:34:58.577469 containerd[1614]: time="2025-01-13T20:34:58.577250304Z" level=error msg="Failed to destroy network for sandbox \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.578576 containerd[1614]: time="2025-01-13T20:34:58.577716184Z" level=error msg="encountered an error cleaning up 
failed sandbox \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.578576 containerd[1614]: time="2025-01-13T20:34:58.577916904Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tdc7v,Uid:70116092-cc29-4334-811e-6f8b5c36f3c0,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.579036 kubelet[3072]: E0113 20:34:58.578963 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.579036 kubelet[3072]: E0113 20:34:58.579030 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tdc7v" Jan 13 20:34:58.579133 kubelet[3072]: E0113 20:34:58.579053 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tdc7v" Jan 13 20:34:58.579133 kubelet[3072]: E0113 20:34:58.579104 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-tdc7v_kube-system(70116092-cc29-4334-811e-6f8b5c36f3c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-tdc7v_kube-system(70116092-cc29-4334-811e-6f8b5c36f3c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-tdc7v" podUID="70116092-cc29-4334-811e-6f8b5c36f3c0" Jan 13 20:34:58.622108 containerd[1614]: time="2025-01-13T20:34:58.622053781Z" level=error msg="Failed to destroy network for sandbox \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.626018 containerd[1614]: time="2025-01-13T20:34:58.624549258Z" level=error msg="encountered an error cleaning up failed sandbox \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.626550 containerd[1614]: 
time="2025-01-13T20:34:58.626305856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d54ws,Uid:a019a359-d21d-4317-8d1c-bd6d76806eac,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.627393 kubelet[3072]: E0113 20:34:58.627319 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.627991 kubelet[3072]: E0113 20:34:58.627400 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-d54ws" Jan 13 20:34:58.627991 kubelet[3072]: E0113 20:34:58.627464 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-d54ws" Jan 13 20:34:58.627991 kubelet[3072]: E0113 20:34:58.627544 
3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-d54ws_kube-system(a019a359-d21d-4317-8d1c-bd6d76806eac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-d54ws_kube-system(a019a359-d21d-4317-8d1c-bd6d76806eac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-d54ws" podUID="a019a359-d21d-4317-8d1c-bd6d76806eac" Jan 13 20:34:58.649806 containerd[1614]: time="2025-01-13T20:34:58.649609154Z" level=error msg="Failed to destroy network for sandbox \"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.651561 containerd[1614]: time="2025-01-13T20:34:58.651514152Z" level=error msg="encountered an error cleaning up failed sandbox \"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.652682 containerd[1614]: time="2025-01-13T20:34:58.652582071Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689464bd4f-rlk45,Uid:34cc4b95-fb25-4302-8175-d6695afbf832,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.654460 kubelet[3072]: E0113 20:34:58.653435 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.654460 kubelet[3072]: E0113 20:34:58.653494 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689464bd4f-rlk45" Jan 13 20:34:58.654460 kubelet[3072]: E0113 20:34:58.653523 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689464bd4f-rlk45" Jan 13 20:34:58.654618 kubelet[3072]: E0113 20:34:58.653582 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-689464bd4f-rlk45_calico-system(34cc4b95-fb25-4302-8175-d6695afbf832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-689464bd4f-rlk45_calico-system(34cc4b95-fb25-4302-8175-d6695afbf832)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-689464bd4f-rlk45" podUID="34cc4b95-fb25-4302-8175-d6695afbf832" Jan 13 20:34:58.659362 containerd[1614]: time="2025-01-13T20:34:58.659189184Z" level=error msg="Failed to destroy network for sandbox \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.659777 containerd[1614]: time="2025-01-13T20:34:58.659668864Z" level=error msg="encountered an error cleaning up failed sandbox \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.659777 containerd[1614]: time="2025-01-13T20:34:58.659730864Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-j85nm,Uid:a12675bf-0fbc-45d7-9f8f-c29f2b87c216,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.661514 kubelet[3072]: E0113 20:34:58.661160 3072 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.661514 kubelet[3072]: E0113 20:34:58.661231 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-j85nm" Jan 13 20:34:58.661514 kubelet[3072]: E0113 20:34:58.661253 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-j85nm" Jan 13 20:34:58.661667 kubelet[3072]: E0113 20:34:58.661316 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ccf4fbb57-j85nm_calico-apiserver(a12675bf-0fbc-45d7-9f8f-c29f2b87c216)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ccf4fbb57-j85nm_calico-apiserver(a12675bf-0fbc-45d7-9f8f-c29f2b87c216)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-j85nm" podUID="a12675bf-0fbc-45d7-9f8f-c29f2b87c216" Jan 13 20:34:58.669039 containerd[1614]: time="2025-01-13T20:34:58.668981175Z" level=error msg="Failed to destroy network for sandbox \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.669944 containerd[1614]: time="2025-01-13T20:34:58.669808134Z" level=error msg="encountered an error cleaning up failed sandbox \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.669944 containerd[1614]: time="2025-01-13T20:34:58.669902334Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-gbjkq,Uid:5418ad94-7e2e-4821-8b2b-1361c7326bfb,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.670739 kubelet[3072]: E0113 20:34:58.670517 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.670739 kubelet[3072]: E0113 20:34:58.670594 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-gbjkq" Jan 13 20:34:58.670739 kubelet[3072]: E0113 20:34:58.670631 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-gbjkq" Jan 13 20:34:58.670868 kubelet[3072]: E0113 20:34:58.670700 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ccf4fbb57-gbjkq_calico-apiserver(5418ad94-7e2e-4821-8b2b-1361c7326bfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ccf4fbb57-gbjkq_calico-apiserver(5418ad94-7e2e-4821-8b2b-1361c7326bfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-gbjkq" podUID="5418ad94-7e2e-4821-8b2b-1361c7326bfb" Jan 13 20:34:58.690969 systemd[1]: 
run-netns-cni\x2d24edd57a\x2d3d9b\x2d708f\x2db2c3\x2dba51651e411a.mount: Deactivated successfully. Jan 13 20:34:58.691136 systemd[1]: run-netns-cni\x2d28aa9c55\x2d7d27\x2d4ad7\x2d0599\x2ddae08adbd3e0.mount: Deactivated successfully. Jan 13 20:34:58.694766 containerd[1614]: time="2025-01-13T20:34:58.692419512Z" level=error msg="Failed to destroy network for sandbox \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.695271 containerd[1614]: time="2025-01-13T20:34:58.695146229Z" level=error msg="encountered an error cleaning up failed sandbox \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.695515 containerd[1614]: time="2025-01-13T20:34:58.695391309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5hc2j,Uid:d79efc93-b14e-4d5a-8c70-0155fb5a684a,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.697326 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d-shm.mount: Deactivated successfully. 
Jan 13 20:34:58.698930 kubelet[3072]: E0113 20:34:58.698899 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:58.699004 kubelet[3072]: E0113 20:34:58.698959 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5hc2j" Jan 13 20:34:58.699004 kubelet[3072]: E0113 20:34:58.698979 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5hc2j" Jan 13 20:34:58.699069 kubelet[3072]: E0113 20:34:58.699032 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5hc2j_calico-system(d79efc93-b14e-4d5a-8c70-0155fb5a684a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5hc2j_calico-system(d79efc93-b14e-4d5a-8c70-0155fb5a684a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5hc2j" podUID="d79efc93-b14e-4d5a-8c70-0155fb5a684a" Jan 13 20:34:59.357260 kubelet[3072]: I0113 20:34:59.356653 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565" Jan 13 20:34:59.357814 containerd[1614]: time="2025-01-13T20:34:59.357376664Z" level=info msg="StopPodSandbox for \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\"" Jan 13 20:34:59.357814 containerd[1614]: time="2025-01-13T20:34:59.357566024Z" level=info msg="Ensure that sandbox 8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565 in task-service has been cleanup successfully" Jan 13 20:34:59.363152 containerd[1614]: time="2025-01-13T20:34:59.362143420Z" level=info msg="TearDown network for sandbox \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\" successfully" Jan 13 20:34:59.363152 containerd[1614]: time="2025-01-13T20:34:59.362177100Z" level=info msg="StopPodSandbox for \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\" returns successfully" Jan 13 20:34:59.363526 systemd[1]: run-netns-cni\x2d71886d5b\x2d117f\x2d30d6\x2d3472\x2da091c029df6b.mount: Deactivated successfully. 
Jan 13 20:34:59.366742 containerd[1614]: time="2025-01-13T20:34:59.366163096Z" level=info msg="StopPodSandbox for \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\"" Jan 13 20:34:59.366742 containerd[1614]: time="2025-01-13T20:34:59.366449056Z" level=info msg="TearDown network for sandbox \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\" successfully" Jan 13 20:34:59.366742 containerd[1614]: time="2025-01-13T20:34:59.366464656Z" level=info msg="StopPodSandbox for \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\" returns successfully" Jan 13 20:34:59.367417 kubelet[3072]: I0113 20:34:59.367112 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f" Jan 13 20:34:59.367916 containerd[1614]: time="2025-01-13T20:34:59.367570975Z" level=info msg="StopPodSandbox for \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\"" Jan 13 20:34:59.367916 containerd[1614]: time="2025-01-13T20:34:59.367880734Z" level=info msg="TearDown network for sandbox \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\" successfully" Jan 13 20:34:59.367916 containerd[1614]: time="2025-01-13T20:34:59.367897854Z" level=info msg="StopPodSandbox for \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\" returns successfully" Jan 13 20:34:59.369358 containerd[1614]: time="2025-01-13T20:34:59.369316733Z" level=info msg="StopPodSandbox for \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\"" Jan 13 20:34:59.369627 containerd[1614]: time="2025-01-13T20:34:59.369554493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-j85nm,Uid:a12675bf-0fbc-45d7-9f8f-c29f2b87c216,Namespace:calico-apiserver,Attempt:3,}" Jan 13 20:34:59.369845 containerd[1614]: time="2025-01-13T20:34:59.369779652Z" level=info msg="Ensure that sandbox 
d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f in task-service has been cleanup successfully" Jan 13 20:34:59.373069 systemd[1]: run-netns-cni\x2d4267ecb6\x2dd252\x2dc112\x2d037f\x2d59d9574c0281.mount: Deactivated successfully. Jan 13 20:34:59.375156 containerd[1614]: time="2025-01-13T20:34:59.374665608Z" level=info msg="TearDown network for sandbox \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\" successfully" Jan 13 20:34:59.375156 containerd[1614]: time="2025-01-13T20:34:59.374694048Z" level=info msg="StopPodSandbox for \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\" returns successfully" Jan 13 20:34:59.377620 containerd[1614]: time="2025-01-13T20:34:59.377046005Z" level=info msg="StopPodSandbox for \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\"" Jan 13 20:34:59.377620 containerd[1614]: time="2025-01-13T20:34:59.377162405Z" level=info msg="TearDown network for sandbox \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\" successfully" Jan 13 20:34:59.377620 containerd[1614]: time="2025-01-13T20:34:59.377173125Z" level=info msg="StopPodSandbox for \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\" returns successfully" Jan 13 20:34:59.378772 containerd[1614]: time="2025-01-13T20:34:59.378662644Z" level=info msg="StopPodSandbox for \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\"" Jan 13 20:34:59.379043 containerd[1614]: time="2025-01-13T20:34:59.378964444Z" level=info msg="TearDown network for sandbox \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\" successfully" Jan 13 20:34:59.379043 containerd[1614]: time="2025-01-13T20:34:59.378983844Z" level=info msg="StopPodSandbox for \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\" returns successfully" Jan 13 20:34:59.379446 kubelet[3072]: I0113 20:34:59.379411 3072 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764" Jan 13 20:34:59.381790 containerd[1614]: time="2025-01-13T20:34:59.381728041Z" level=info msg="StopPodSandbox for \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\"" Jan 13 20:34:59.384948 containerd[1614]: time="2025-01-13T20:34:59.384880878Z" level=info msg="Ensure that sandbox 0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764 in task-service has been cleanup successfully" Jan 13 20:34:59.385653 containerd[1614]: time="2025-01-13T20:34:59.385498237Z" level=info msg="TearDown network for sandbox \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\" successfully" Jan 13 20:34:59.385653 containerd[1614]: time="2025-01-13T20:34:59.385578437Z" level=info msg="StopPodSandbox for \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\" returns successfully" Jan 13 20:34:59.389279 containerd[1614]: time="2025-01-13T20:34:59.381861881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-gbjkq,Uid:5418ad94-7e2e-4821-8b2b-1361c7326bfb,Namespace:calico-apiserver,Attempt:3,}" Jan 13 20:34:59.389643 systemd[1]: run-netns-cni\x2dda257043\x2d7415\x2db961\x2d6712\x2d6b25dddccc3d.mount: Deactivated successfully. 
Jan 13 20:34:59.394613 containerd[1614]: time="2025-01-13T20:34:59.394569108Z" level=info msg="StopPodSandbox for \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\"" Jan 13 20:34:59.395427 containerd[1614]: time="2025-01-13T20:34:59.395043508Z" level=info msg="TearDown network for sandbox \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\" successfully" Jan 13 20:34:59.395893 kubelet[3072]: I0113 20:34:59.395860 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8" Jan 13 20:34:59.396231 containerd[1614]: time="2025-01-13T20:34:59.395112268Z" level=info msg="StopPodSandbox for \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\" returns successfully" Jan 13 20:34:59.398730 containerd[1614]: time="2025-01-13T20:34:59.398386265Z" level=info msg="StopPodSandbox for \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\"" Jan 13 20:34:59.398730 containerd[1614]: time="2025-01-13T20:34:59.398501385Z" level=info msg="TearDown network for sandbox \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\" successfully" Jan 13 20:34:59.398730 containerd[1614]: time="2025-01-13T20:34:59.398577625Z" level=info msg="StopPodSandbox for \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\" returns successfully" Jan 13 20:34:59.398730 containerd[1614]: time="2025-01-13T20:34:59.398724464Z" level=info msg="StopPodSandbox for \"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\"" Jan 13 20:34:59.399076 containerd[1614]: time="2025-01-13T20:34:59.399040944Z" level=info msg="Ensure that sandbox 90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8 in task-service has been cleanup successfully" Jan 13 20:34:59.399377 containerd[1614]: time="2025-01-13T20:34:59.399275064Z" level=info msg="TearDown network for sandbox 
\"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\" successfully" Jan 13 20:34:59.399377 containerd[1614]: time="2025-01-13T20:34:59.399312344Z" level=info msg="StopPodSandbox for \"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\" returns successfully" Jan 13 20:34:59.401131 containerd[1614]: time="2025-01-13T20:34:59.400959502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d54ws,Uid:a019a359-d21d-4317-8d1c-bd6d76806eac,Namespace:kube-system,Attempt:3,}" Jan 13 20:34:59.401804 containerd[1614]: time="2025-01-13T20:34:59.401759421Z" level=info msg="StopPodSandbox for \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\"" Jan 13 20:34:59.401992 containerd[1614]: time="2025-01-13T20:34:59.401901981Z" level=info msg="TearDown network for sandbox \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\" successfully" Jan 13 20:34:59.401992 containerd[1614]: time="2025-01-13T20:34:59.401915941Z" level=info msg="StopPodSandbox for \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\" returns successfully" Jan 13 20:34:59.404394 containerd[1614]: time="2025-01-13T20:34:59.404354099Z" level=info msg="StopPodSandbox for \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\"" Jan 13 20:34:59.404883 containerd[1614]: time="2025-01-13T20:34:59.404592699Z" level=info msg="TearDown network for sandbox \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\" successfully" Jan 13 20:34:59.404883 containerd[1614]: time="2025-01-13T20:34:59.404627099Z" level=info msg="StopPodSandbox for \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\" returns successfully" Jan 13 20:34:59.406434 containerd[1614]: time="2025-01-13T20:34:59.406399137Z" level=info msg="StopPodSandbox for \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\"" Jan 13 20:34:59.406885 containerd[1614]: time="2025-01-13T20:34:59.406865857Z" level=info 
msg="TearDown network for sandbox \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\" successfully" Jan 13 20:34:59.406982 containerd[1614]: time="2025-01-13T20:34:59.406965216Z" level=info msg="StopPodSandbox for \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\" returns successfully" Jan 13 20:34:59.408098 containerd[1614]: time="2025-01-13T20:34:59.408053735Z" level=info msg="StopPodSandbox for \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\"" Jan 13 20:34:59.408190 containerd[1614]: time="2025-01-13T20:34:59.408170775Z" level=info msg="TearDown network for sandbox \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\" successfully" Jan 13 20:34:59.408190 containerd[1614]: time="2025-01-13T20:34:59.408187055Z" level=info msg="StopPodSandbox for \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\" returns successfully" Jan 13 20:34:59.408609 kubelet[3072]: I0113 20:34:59.408580 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa" Jan 13 20:34:59.413720 containerd[1614]: time="2025-01-13T20:34:59.413616890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689464bd4f-rlk45,Uid:34cc4b95-fb25-4302-8175-d6695afbf832,Namespace:calico-system,Attempt:5,}" Jan 13 20:34:59.414509 containerd[1614]: time="2025-01-13T20:34:59.414149209Z" level=info msg="StopPodSandbox for \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\"" Jan 13 20:34:59.415493 containerd[1614]: time="2025-01-13T20:34:59.415446888Z" level=info msg="Ensure that sandbox e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa in task-service has been cleanup successfully" Jan 13 20:34:59.416787 containerd[1614]: time="2025-01-13T20:34:59.416757767Z" level=info msg="TearDown network for sandbox \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\" 
successfully" Jan 13 20:34:59.416882 containerd[1614]: time="2025-01-13T20:34:59.416867887Z" level=info msg="StopPodSandbox for \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\" returns successfully" Jan 13 20:34:59.418643 containerd[1614]: time="2025-01-13T20:34:59.418609965Z" level=info msg="StopPodSandbox for \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\"" Jan 13 20:34:59.419875 containerd[1614]: time="2025-01-13T20:34:59.419086805Z" level=info msg="TearDown network for sandbox \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\" successfully" Jan 13 20:34:59.419875 containerd[1614]: time="2025-01-13T20:34:59.419106045Z" level=info msg="StopPodSandbox for \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\" returns successfully" Jan 13 20:34:59.421200 containerd[1614]: time="2025-01-13T20:34:59.420797763Z" level=info msg="StopPodSandbox for \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\"" Jan 13 20:34:59.422968 kubelet[3072]: I0113 20:34:59.422943 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d" Jan 13 20:34:59.423739 containerd[1614]: time="2025-01-13T20:34:59.423677320Z" level=info msg="TearDown network for sandbox \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\" successfully" Jan 13 20:34:59.423739 containerd[1614]: time="2025-01-13T20:34:59.423706280Z" level=info msg="StopPodSandbox for \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\" returns successfully" Jan 13 20:34:59.424004 containerd[1614]: time="2025-01-13T20:34:59.423851600Z" level=info msg="StopPodSandbox for \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\"" Jan 13 20:34:59.426397 containerd[1614]: time="2025-01-13T20:34:59.426322838Z" level=info msg="Ensure that sandbox 3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d in 
task-service has been cleanup successfully" Jan 13 20:34:59.427270 containerd[1614]: time="2025-01-13T20:34:59.426577277Z" level=info msg="TearDown network for sandbox \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\" successfully" Jan 13 20:34:59.427270 containerd[1614]: time="2025-01-13T20:34:59.426604117Z" level=info msg="StopPodSandbox for \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\" returns successfully" Jan 13 20:34:59.434023 containerd[1614]: time="2025-01-13T20:34:59.433755110Z" level=info msg="StopPodSandbox for \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\"" Jan 13 20:34:59.434023 containerd[1614]: time="2025-01-13T20:34:59.433860350Z" level=info msg="TearDown network for sandbox \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\" successfully" Jan 13 20:34:59.434023 containerd[1614]: time="2025-01-13T20:34:59.433872950Z" level=info msg="StopPodSandbox for \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\" returns successfully" Jan 13 20:34:59.434023 containerd[1614]: time="2025-01-13T20:34:59.433979070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tdc7v,Uid:70116092-cc29-4334-811e-6f8b5c36f3c0,Namespace:kube-system,Attempt:3,}" Jan 13 20:34:59.437788 containerd[1614]: time="2025-01-13T20:34:59.437752467Z" level=info msg="StopPodSandbox for \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\"" Jan 13 20:34:59.441074 containerd[1614]: time="2025-01-13T20:34:59.441030623Z" level=info msg="TearDown network for sandbox \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\" successfully" Jan 13 20:34:59.441074 containerd[1614]: time="2025-01-13T20:34:59.441061223Z" level=info msg="StopPodSandbox for \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\" returns successfully" Jan 13 20:34:59.442444 containerd[1614]: time="2025-01-13T20:34:59.442411542Z" level=info msg="StopPodSandbox 
for \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\"" Jan 13 20:34:59.443282 containerd[1614]: time="2025-01-13T20:34:59.442818342Z" level=info msg="TearDown network for sandbox \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\" successfully" Jan 13 20:34:59.443282 containerd[1614]: time="2025-01-13T20:34:59.442839222Z" level=info msg="StopPodSandbox for \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\" returns successfully" Jan 13 20:34:59.444667 containerd[1614]: time="2025-01-13T20:34:59.444518020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5hc2j,Uid:d79efc93-b14e-4d5a-8c70-0155fb5a684a,Namespace:calico-system,Attempt:4,}" Jan 13 20:34:59.633552 containerd[1614]: time="2025-01-13T20:34:59.633421557Z" level=error msg="Failed to destroy network for sandbox \"782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.634867 containerd[1614]: time="2025-01-13T20:34:59.634529796Z" level=error msg="encountered an error cleaning up failed sandbox \"782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.634867 containerd[1614]: time="2025-01-13T20:34:59.634602076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-j85nm,Uid:a12675bf-0fbc-45d7-9f8f-c29f2b87c216,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.635102 kubelet[3072]: E0113 20:34:59.634834 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.635102 kubelet[3072]: E0113 20:34:59.634919 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-j85nm" Jan 13 20:34:59.635102 kubelet[3072]: E0113 20:34:59.634942 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-j85nm" Jan 13 20:34:59.635864 kubelet[3072]: E0113 20:34:59.635008 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ccf4fbb57-j85nm_calico-apiserver(a12675bf-0fbc-45d7-9f8f-c29f2b87c216)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-6ccf4fbb57-j85nm_calico-apiserver(a12675bf-0fbc-45d7-9f8f-c29f2b87c216)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-j85nm" podUID="a12675bf-0fbc-45d7-9f8f-c29f2b87c216" Jan 13 20:34:59.693216 systemd[1]: run-netns-cni\x2dd1f24585\x2d61b5\x2d0de6\x2daeff\x2dd46f145677bb.mount: Deactivated successfully. Jan 13 20:34:59.694000 systemd[1]: run-netns-cni\x2df58d2577\x2d45dd\x2d5c18\x2d4f11\x2d9488f7ee38e3.mount: Deactivated successfully. Jan 13 20:34:59.694107 systemd[1]: run-netns-cni\x2df466e0f7\x2d6e6c\x2da746\x2d8fcd\x2dbfa1e892b0d5.mount: Deactivated successfully. Jan 13 20:34:59.716318 containerd[1614]: time="2025-01-13T20:34:59.716216037Z" level=error msg="Failed to destroy network for sandbox \"482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.718883 containerd[1614]: time="2025-01-13T20:34:59.718506395Z" level=error msg="encountered an error cleaning up failed sandbox \"482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.720183 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b-shm.mount: Deactivated successfully. 
Jan 13 20:34:59.721102 containerd[1614]: time="2025-01-13T20:34:59.719462074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d54ws,Uid:a019a359-d21d-4317-8d1c-bd6d76806eac,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.722121 kubelet[3072]: E0113 20:34:59.722082 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.722256 kubelet[3072]: E0113 20:34:59.722141 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-d54ws" Jan 13 20:34:59.722256 kubelet[3072]: E0113 20:34:59.722163 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-d54ws" Jan 13 
20:34:59.724616 kubelet[3072]: E0113 20:34:59.722450 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-d54ws_kube-system(a019a359-d21d-4317-8d1c-bd6d76806eac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-d54ws_kube-system(a019a359-d21d-4317-8d1c-bd6d76806eac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-d54ws" podUID="a019a359-d21d-4317-8d1c-bd6d76806eac" Jan 13 20:34:59.739056 containerd[1614]: time="2025-01-13T20:34:59.738999895Z" level=error msg="Failed to destroy network for sandbox \"0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.742608 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d-shm.mount: Deactivated successfully. 
Jan 13 20:34:59.744587 containerd[1614]: time="2025-01-13T20:34:59.744324610Z" level=error msg="encountered an error cleaning up failed sandbox \"0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.744587 containerd[1614]: time="2025-01-13T20:34:59.744422289Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-gbjkq,Uid:5418ad94-7e2e-4821-8b2b-1361c7326bfb,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.746972 kubelet[3072]: E0113 20:34:59.744689 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.746972 kubelet[3072]: E0113 20:34:59.744743 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-gbjkq" Jan 13 20:34:59.746972 kubelet[3072]: 
E0113 20:34:59.744770 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-gbjkq" Jan 13 20:34:59.747076 kubelet[3072]: E0113 20:34:59.744836 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ccf4fbb57-gbjkq_calico-apiserver(5418ad94-7e2e-4821-8b2b-1361c7326bfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ccf4fbb57-gbjkq_calico-apiserver(5418ad94-7e2e-4821-8b2b-1361c7326bfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-gbjkq" podUID="5418ad94-7e2e-4821-8b2b-1361c7326bfb" Jan 13 20:34:59.755236 containerd[1614]: time="2025-01-13T20:34:59.755177559Z" level=error msg="Failed to destroy network for sandbox \"8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.757866 containerd[1614]: time="2025-01-13T20:34:59.757821436Z" level=error msg="encountered an error cleaning up failed sandbox \"8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.758178 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c-shm.mount: Deactivated successfully. Jan 13 20:34:59.759380 containerd[1614]: time="2025-01-13T20:34:59.759339915Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tdc7v,Uid:70116092-cc29-4334-811e-6f8b5c36f3c0,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.760496 kubelet[3072]: E0113 20:34:59.760468 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.760782 kubelet[3072]: E0113 20:34:59.760649 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tdc7v" Jan 13 20:34:59.760782 kubelet[3072]: E0113 20:34:59.760689 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tdc7v" Jan 13 20:34:59.760782 kubelet[3072]: E0113 20:34:59.760748 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-tdc7v_kube-system(70116092-cc29-4334-811e-6f8b5c36f3c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-tdc7v_kube-system(70116092-cc29-4334-811e-6f8b5c36f3c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-tdc7v" podUID="70116092-cc29-4334-811e-6f8b5c36f3c0" Jan 13 20:34:59.766807 containerd[1614]: time="2025-01-13T20:34:59.766731508Z" level=error msg="Failed to destroy network for sandbox \"c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.769553 containerd[1614]: time="2025-01-13T20:34:59.769495425Z" level=error msg="encountered an error cleaning up failed sandbox \"c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.769734 containerd[1614]: 
time="2025-01-13T20:34:59.769575705Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5hc2j,Uid:d79efc93-b14e-4d5a-8c70-0155fb5a684a,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.769841 kubelet[3072]: E0113 20:34:59.769815 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.769925 kubelet[3072]: E0113 20:34:59.769875 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5hc2j" Jan 13 20:34:59.769925 kubelet[3072]: E0113 20:34:59.769899 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5hc2j" Jan 13 20:34:59.769990 kubelet[3072]: E0113 20:34:59.769952 
3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5hc2j_calico-system(d79efc93-b14e-4d5a-8c70-0155fb5a684a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5hc2j_calico-system(d79efc93-b14e-4d5a-8c70-0155fb5a684a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5hc2j" podUID="d79efc93-b14e-4d5a-8c70-0155fb5a684a" Jan 13 20:34:59.770551 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9-shm.mount: Deactivated successfully. Jan 13 20:34:59.773581 containerd[1614]: time="2025-01-13T20:34:59.773386821Z" level=error msg="Failed to destroy network for sandbox \"181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.774035 containerd[1614]: time="2025-01-13T20:34:59.773882901Z" level=error msg="encountered an error cleaning up failed sandbox \"181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.774035 containerd[1614]: time="2025-01-13T20:34:59.773945061Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-689464bd4f-rlk45,Uid:34cc4b95-fb25-4302-8175-d6695afbf832,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.774865 kubelet[3072]: E0113 20:34:59.774508 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:34:59.774865 kubelet[3072]: E0113 20:34:59.774561 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689464bd4f-rlk45" Jan 13 20:34:59.774865 kubelet[3072]: E0113 20:34:59.774583 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689464bd4f-rlk45" Jan 13 20:34:59.775012 kubelet[3072]: E0113 20:34:59.774646 3072 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-689464bd4f-rlk45_calico-system(34cc4b95-fb25-4302-8175-d6695afbf832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-689464bd4f-rlk45_calico-system(34cc4b95-fb25-4302-8175-d6695afbf832)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-689464bd4f-rlk45" podUID="34cc4b95-fb25-4302-8175-d6695afbf832" Jan 13 20:35:00.432924 kubelet[3072]: I0113 20:35:00.432875 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2" Jan 13 20:35:00.433993 containerd[1614]: time="2025-01-13T20:35:00.433952025Z" level=info msg="StopPodSandbox for \"181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2\"" Jan 13 20:35:00.434413 containerd[1614]: time="2025-01-13T20:35:00.434146985Z" level=info msg="Ensure that sandbox 181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2 in task-service has been cleanup successfully" Jan 13 20:35:00.435760 containerd[1614]: time="2025-01-13T20:35:00.435709064Z" level=info msg="TearDown network for sandbox \"181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2\" successfully" Jan 13 20:35:00.435978 containerd[1614]: time="2025-01-13T20:35:00.435812423Z" level=info msg="StopPodSandbox for \"181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2\" returns successfully" Jan 13 20:35:00.436654 containerd[1614]: time="2025-01-13T20:35:00.436491263Z" level=info msg="StopPodSandbox for 
\"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\"" Jan 13 20:35:00.436654 containerd[1614]: time="2025-01-13T20:35:00.436581663Z" level=info msg="TearDown network for sandbox \"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\" successfully" Jan 13 20:35:00.436654 containerd[1614]: time="2025-01-13T20:35:00.436592863Z" level=info msg="StopPodSandbox for \"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\" returns successfully" Jan 13 20:35:00.437433 containerd[1614]: time="2025-01-13T20:35:00.437407342Z" level=info msg="StopPodSandbox for \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\"" Jan 13 20:35:00.437742 containerd[1614]: time="2025-01-13T20:35:00.437700782Z" level=info msg="TearDown network for sandbox \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\" successfully" Jan 13 20:35:00.437742 containerd[1614]: time="2025-01-13T20:35:00.437721062Z" level=info msg="StopPodSandbox for \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\" returns successfully" Jan 13 20:35:00.439090 containerd[1614]: time="2025-01-13T20:35:00.438710141Z" level=info msg="StopPodSandbox for \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\"" Jan 13 20:35:00.439147 containerd[1614]: time="2025-01-13T20:35:00.439100660Z" level=info msg="TearDown network for sandbox \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\" successfully" Jan 13 20:35:00.439147 containerd[1614]: time="2025-01-13T20:35:00.439115820Z" level=info msg="StopPodSandbox for \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\" returns successfully" Jan 13 20:35:00.440083 containerd[1614]: time="2025-01-13T20:35:00.440051779Z" level=info msg="StopPodSandbox for \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\"" Jan 13 20:35:00.440176 containerd[1614]: time="2025-01-13T20:35:00.440158859Z" level=info msg="TearDown network for sandbox 
\"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\" successfully" Jan 13 20:35:00.440397 containerd[1614]: time="2025-01-13T20:35:00.440175699Z" level=info msg="StopPodSandbox for \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\" returns successfully" Jan 13 20:35:00.441439 containerd[1614]: time="2025-01-13T20:35:00.441403418Z" level=info msg="StopPodSandbox for \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\"" Jan 13 20:35:00.441520 containerd[1614]: time="2025-01-13T20:35:00.441504298Z" level=info msg="TearDown network for sandbox \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\" successfully" Jan 13 20:35:00.441520 containerd[1614]: time="2025-01-13T20:35:00.441514458Z" level=info msg="StopPodSandbox for \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\" returns successfully" Jan 13 20:35:00.442243 containerd[1614]: time="2025-01-13T20:35:00.441955458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689464bd4f-rlk45,Uid:34cc4b95-fb25-4302-8175-d6695afbf832,Namespace:calico-system,Attempt:6,}" Jan 13 20:35:00.442706 kubelet[3072]: I0113 20:35:00.442485 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c" Jan 13 20:35:00.443643 containerd[1614]: time="2025-01-13T20:35:00.443562296Z" level=info msg="StopPodSandbox for \"8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c\"" Jan 13 20:35:00.443952 containerd[1614]: time="2025-01-13T20:35:00.443826216Z" level=info msg="Ensure that sandbox 8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c in task-service has been cleanup successfully" Jan 13 20:35:00.444057 containerd[1614]: time="2025-01-13T20:35:00.444037976Z" level=info msg="TearDown network for sandbox \"8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c\" successfully" Jan 13 20:35:00.444135 
containerd[1614]: time="2025-01-13T20:35:00.444121695Z" level=info msg="StopPodSandbox for \"8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c\" returns successfully" Jan 13 20:35:00.444991 containerd[1614]: time="2025-01-13T20:35:00.444872655Z" level=info msg="StopPodSandbox for \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\"" Jan 13 20:35:00.444991 containerd[1614]: time="2025-01-13T20:35:00.444948135Z" level=info msg="TearDown network for sandbox \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\" successfully" Jan 13 20:35:00.444991 containerd[1614]: time="2025-01-13T20:35:00.444958055Z" level=info msg="StopPodSandbox for \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\" returns successfully" Jan 13 20:35:00.447915 containerd[1614]: time="2025-01-13T20:35:00.447763572Z" level=info msg="StopPodSandbox for \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\"" Jan 13 20:35:00.448368 containerd[1614]: time="2025-01-13T20:35:00.447893372Z" level=info msg="TearDown network for sandbox \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\" successfully" Jan 13 20:35:00.448368 containerd[1614]: time="2025-01-13T20:35:00.448258452Z" level=info msg="StopPodSandbox for \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\" returns successfully" Jan 13 20:35:00.450582 kubelet[3072]: I0113 20:35:00.449606 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147" Jan 13 20:35:00.450740 containerd[1614]: time="2025-01-13T20:35:00.449977690Z" level=info msg="StopPodSandbox for \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\"" Jan 13 20:35:00.450740 containerd[1614]: time="2025-01-13T20:35:00.450066890Z" level=info msg="TearDown network for sandbox \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\" successfully" Jan 13 
20:35:00.450740 containerd[1614]: time="2025-01-13T20:35:00.450075810Z" level=info msg="StopPodSandbox for \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\" returns successfully" Jan 13 20:35:00.451070 containerd[1614]: time="2025-01-13T20:35:00.451043569Z" level=info msg="StopPodSandbox for \"782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147\"" Jan 13 20:35:00.452111 containerd[1614]: time="2025-01-13T20:35:00.452084448Z" level=info msg="Ensure that sandbox 782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147 in task-service has been cleanup successfully" Jan 13 20:35:00.452737 containerd[1614]: time="2025-01-13T20:35:00.452712007Z" level=info msg="TearDown network for sandbox \"782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147\" successfully" Jan 13 20:35:00.452837 containerd[1614]: time="2025-01-13T20:35:00.452822687Z" level=info msg="StopPodSandbox for \"782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147\" returns successfully" Jan 13 20:35:00.452973 containerd[1614]: time="2025-01-13T20:35:00.451580088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tdc7v,Uid:70116092-cc29-4334-811e-6f8b5c36f3c0,Namespace:kube-system,Attempt:4,}" Jan 13 20:35:00.454891 containerd[1614]: time="2025-01-13T20:35:00.454731765Z" level=info msg="StopPodSandbox for \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\"" Jan 13 20:35:00.454891 containerd[1614]: time="2025-01-13T20:35:00.454854685Z" level=info msg="TearDown network for sandbox \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\" successfully" Jan 13 20:35:00.454891 containerd[1614]: time="2025-01-13T20:35:00.454865325Z" level=info msg="StopPodSandbox for \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\" returns successfully" Jan 13 20:35:00.457038 containerd[1614]: time="2025-01-13T20:35:00.456997203Z" level=info msg="StopPodSandbox for 
\"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\"" Jan 13 20:35:00.457169 containerd[1614]: time="2025-01-13T20:35:00.457150203Z" level=info msg="TearDown network for sandbox \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\" successfully" Jan 13 20:35:00.457200 containerd[1614]: time="2025-01-13T20:35:00.457167323Z" level=info msg="StopPodSandbox for \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\" returns successfully" Jan 13 20:35:00.459805 containerd[1614]: time="2025-01-13T20:35:00.459767840Z" level=info msg="StopPodSandbox for \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\"" Jan 13 20:35:00.460123 containerd[1614]: time="2025-01-13T20:35:00.459926200Z" level=info msg="TearDown network for sandbox \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\" successfully" Jan 13 20:35:00.460123 containerd[1614]: time="2025-01-13T20:35:00.459938080Z" level=info msg="StopPodSandbox for \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\" returns successfully" Jan 13 20:35:00.461203 kubelet[3072]: I0113 20:35:00.460690 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d" Jan 13 20:35:00.461769 containerd[1614]: time="2025-01-13T20:35:00.461735439Z" level=info msg="StopPodSandbox for \"0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d\"" Jan 13 20:35:00.463246 containerd[1614]: time="2025-01-13T20:35:00.462028158Z" level=info msg="Ensure that sandbox 0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d in task-service has been cleanup successfully" Jan 13 20:35:00.463246 containerd[1614]: time="2025-01-13T20:35:00.462536998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-j85nm,Uid:a12675bf-0fbc-45d7-9f8f-c29f2b87c216,Namespace:calico-apiserver,Attempt:4,}" Jan 13 20:35:00.465242 containerd[1614]: 
time="2025-01-13T20:35:00.463777397Z" level=info msg="TearDown network for sandbox \"0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d\" successfully" Jan 13 20:35:00.465242 containerd[1614]: time="2025-01-13T20:35:00.463802157Z" level=info msg="StopPodSandbox for \"0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d\" returns successfully" Jan 13 20:35:00.466249 containerd[1614]: time="2025-01-13T20:35:00.466010234Z" level=info msg="StopPodSandbox for \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\"" Jan 13 20:35:00.466249 containerd[1614]: time="2025-01-13T20:35:00.466103394Z" level=info msg="TearDown network for sandbox \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\" successfully" Jan 13 20:35:00.466249 containerd[1614]: time="2025-01-13T20:35:00.466112914Z" level=info msg="StopPodSandbox for \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\" returns successfully" Jan 13 20:35:00.467529 containerd[1614]: time="2025-01-13T20:35:00.467296153Z" level=info msg="StopPodSandbox for \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\"" Jan 13 20:35:00.467529 containerd[1614]: time="2025-01-13T20:35:00.467447833Z" level=info msg="TearDown network for sandbox \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\" successfully" Jan 13 20:35:00.467529 containerd[1614]: time="2025-01-13T20:35:00.467459393Z" level=info msg="StopPodSandbox for \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\" returns successfully" Jan 13 20:35:00.470832 containerd[1614]: time="2025-01-13T20:35:00.470790310Z" level=info msg="StopPodSandbox for \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\"" Jan 13 20:35:00.473543 containerd[1614]: time="2025-01-13T20:35:00.472588548Z" level=info msg="TearDown network for sandbox \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\" successfully" Jan 13 20:35:00.473671 kubelet[3072]: I0113 
20:35:00.473487 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b" Jan 13 20:35:00.473884 containerd[1614]: time="2025-01-13T20:35:00.473764547Z" level=info msg="StopPodSandbox for \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\" returns successfully" Jan 13 20:35:00.477456 containerd[1614]: time="2025-01-13T20:35:00.476646264Z" level=info msg="StopPodSandbox for \"482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b\"" Jan 13 20:35:00.479885 containerd[1614]: time="2025-01-13T20:35:00.479685541Z" level=info msg="Ensure that sandbox 482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b in task-service has been cleanup successfully" Jan 13 20:35:00.480200 containerd[1614]: time="2025-01-13T20:35:00.477800823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-gbjkq,Uid:5418ad94-7e2e-4821-8b2b-1361c7326bfb,Namespace:calico-apiserver,Attempt:4,}" Jan 13 20:35:00.480637 containerd[1614]: time="2025-01-13T20:35:00.480613060Z" level=info msg="TearDown network for sandbox \"482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b\" successfully" Jan 13 20:35:00.480910 containerd[1614]: time="2025-01-13T20:35:00.480700860Z" level=info msg="StopPodSandbox for \"482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b\" returns successfully" Jan 13 20:35:00.481718 containerd[1614]: time="2025-01-13T20:35:00.481693459Z" level=info msg="StopPodSandbox for \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\"" Jan 13 20:35:00.481959 containerd[1614]: time="2025-01-13T20:35:00.481937339Z" level=info msg="TearDown network for sandbox \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\" successfully" Jan 13 20:35:00.482017 containerd[1614]: time="2025-01-13T20:35:00.482004539Z" level=info msg="StopPodSandbox for 
\"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\" returns successfully" Jan 13 20:35:00.485370 containerd[1614]: time="2025-01-13T20:35:00.485012136Z" level=info msg="StopPodSandbox for \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\"" Jan 13 20:35:00.485370 containerd[1614]: time="2025-01-13T20:35:00.485123896Z" level=info msg="TearDown network for sandbox \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\" successfully" Jan 13 20:35:00.485370 containerd[1614]: time="2025-01-13T20:35:00.485134896Z" level=info msg="StopPodSandbox for \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\" returns successfully" Jan 13 20:35:00.488550 containerd[1614]: time="2025-01-13T20:35:00.486552095Z" level=info msg="StopPodSandbox for \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\"" Jan 13 20:35:00.488550 containerd[1614]: time="2025-01-13T20:35:00.486668735Z" level=info msg="TearDown network for sandbox \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\" successfully" Jan 13 20:35:00.488550 containerd[1614]: time="2025-01-13T20:35:00.486679215Z" level=info msg="StopPodSandbox for \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\" returns successfully" Jan 13 20:35:00.488550 containerd[1614]: time="2025-01-13T20:35:00.486747295Z" level=info msg="StopPodSandbox for \"c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9\"" Jan 13 20:35:00.488550 containerd[1614]: time="2025-01-13T20:35:00.486881654Z" level=info msg="Ensure that sandbox c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9 in task-service has been cleanup successfully" Jan 13 20:35:00.488550 containerd[1614]: time="2025-01-13T20:35:00.487069134Z" level=info msg="TearDown network for sandbox \"c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9\" successfully" Jan 13 20:35:00.488550 containerd[1614]: time="2025-01-13T20:35:00.487083454Z" level=info 
msg="StopPodSandbox for \"c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9\" returns successfully" Jan 13 20:35:00.488550 containerd[1614]: time="2025-01-13T20:35:00.487580174Z" level=info msg="StopPodSandbox for \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\"" Jan 13 20:35:00.488550 containerd[1614]: time="2025-01-13T20:35:00.487689014Z" level=info msg="TearDown network for sandbox \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\" successfully" Jan 13 20:35:00.488550 containerd[1614]: time="2025-01-13T20:35:00.487718694Z" level=info msg="StopPodSandbox for \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\" returns successfully" Jan 13 20:35:00.488550 containerd[1614]: time="2025-01-13T20:35:00.487692214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d54ws,Uid:a019a359-d21d-4317-8d1c-bd6d76806eac,Namespace:kube-system,Attempt:4,}" Jan 13 20:35:00.490784 kubelet[3072]: I0113 20:35:00.485603 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9" Jan 13 20:35:00.490833 containerd[1614]: time="2025-01-13T20:35:00.489996891Z" level=info msg="StopPodSandbox for \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\"" Jan 13 20:35:00.490833 containerd[1614]: time="2025-01-13T20:35:00.490092051Z" level=info msg="TearDown network for sandbox \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\" successfully" Jan 13 20:35:00.490833 containerd[1614]: time="2025-01-13T20:35:00.490100731Z" level=info msg="StopPodSandbox for \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\" returns successfully" Jan 13 20:35:00.494495 containerd[1614]: time="2025-01-13T20:35:00.493941848Z" level=info msg="StopPodSandbox for \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\"" Jan 13 20:35:00.494833 containerd[1614]: 
time="2025-01-13T20:35:00.494527527Z" level=info msg="TearDown network for sandbox \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\" successfully" Jan 13 20:35:00.494833 containerd[1614]: time="2025-01-13T20:35:00.494540007Z" level=info msg="StopPodSandbox for \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\" returns successfully" Jan 13 20:35:00.495147 containerd[1614]: time="2025-01-13T20:35:00.495125287Z" level=info msg="StopPodSandbox for \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\"" Jan 13 20:35:00.495236 containerd[1614]: time="2025-01-13T20:35:00.495221166Z" level=info msg="TearDown network for sandbox \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\" successfully" Jan 13 20:35:00.495274 containerd[1614]: time="2025-01-13T20:35:00.495235446Z" level=info msg="StopPodSandbox for \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\" returns successfully" Jan 13 20:35:00.496301 containerd[1614]: time="2025-01-13T20:35:00.495909046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5hc2j,Uid:d79efc93-b14e-4d5a-8c70-0155fb5a684a,Namespace:calico-system,Attempt:5,}" Jan 13 20:35:00.585410 containerd[1614]: time="2025-01-13T20:35:00.585319640Z" level=error msg="Failed to destroy network for sandbox \"132a114f613e133482aae87ccd3e8afbc9646152092060b91730b1507c536328\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.585997 containerd[1614]: time="2025-01-13T20:35:00.585827919Z" level=error msg="encountered an error cleaning up failed sandbox \"132a114f613e133482aae87ccd3e8afbc9646152092060b91730b1507c536328\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 13 20:35:00.585997 containerd[1614]: time="2025-01-13T20:35:00.585891719Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689464bd4f-rlk45,Uid:34cc4b95-fb25-4302-8175-d6695afbf832,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"132a114f613e133482aae87ccd3e8afbc9646152092060b91730b1507c536328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.588071 kubelet[3072]: E0113 20:35:00.587518 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"132a114f613e133482aae87ccd3e8afbc9646152092060b91730b1507c536328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.588071 kubelet[3072]: E0113 20:35:00.587594 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"132a114f613e133482aae87ccd3e8afbc9646152092060b91730b1507c536328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-689464bd4f-rlk45" Jan 13 20:35:00.588071 kubelet[3072]: E0113 20:35:00.587623 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"132a114f613e133482aae87ccd3e8afbc9646152092060b91730b1507c536328\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-689464bd4f-rlk45" Jan 13 20:35:00.588202 kubelet[3072]: E0113 20:35:00.587703 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-689464bd4f-rlk45_calico-system(34cc4b95-fb25-4302-8175-d6695afbf832)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-689464bd4f-rlk45_calico-system(34cc4b95-fb25-4302-8175-d6695afbf832)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"132a114f613e133482aae87ccd3e8afbc9646152092060b91730b1507c536328\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-689464bd4f-rlk45" podUID="34cc4b95-fb25-4302-8175-d6695afbf832" Jan 13 20:35:00.626114 containerd[1614]: time="2025-01-13T20:35:00.626066841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:00.637296 containerd[1614]: time="2025-01-13T20:35:00.637231150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 13 20:35:00.654112 containerd[1614]: time="2025-01-13T20:35:00.654050214Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:00.660422 containerd[1614]: time="2025-01-13T20:35:00.660374088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:00.666676 containerd[1614]: time="2025-01-13T20:35:00.665873043Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 5.448000238s" Jan 13 20:35:00.666676 containerd[1614]: time="2025-01-13T20:35:00.665936843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 13 20:35:00.682573 containerd[1614]: time="2025-01-13T20:35:00.681917467Z" level=info msg="CreateContainer within sandbox \"fbbb61895d5d7d64866f53b2cc6590278426d1d7b39db309de3c5b509d5b0c57\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 20:35:00.697043 systemd[1]: run-netns-cni\x2d72e3cf82\x2da505\x2d96aa\x2ddd2e\x2d9d27101688c0.mount: Deactivated successfully. Jan 13 20:35:00.697645 systemd[1]: run-netns-cni\x2dc20cc815\x2d265a\x2dff37\x2d42de\x2d65ceb23dcecd.mount: Deactivated successfully. Jan 13 20:35:00.697724 systemd[1]: run-netns-cni\x2ded17d61d\x2de44d\x2d5d9d\x2dfbfc\x2d36255b11ad46.mount: Deactivated successfully. Jan 13 20:35:00.697861 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2-shm.mount: Deactivated successfully. Jan 13 20:35:00.697952 systemd[1]: run-netns-cni\x2d7f8beb49\x2deceb\x2db417\x2d2483\x2d45922350e69f.mount: Deactivated successfully. Jan 13 20:35:00.698025 systemd[1]: run-netns-cni\x2db8f1c57f\x2d901d\x2d9a83\x2d2551\x2d28bfd8d57e64.mount: Deactivated successfully. Jan 13 20:35:00.698093 systemd[1]: run-netns-cni\x2dee5ed043\x2dd63a\x2d664d\x2d683d\x2df9de9f00f837.mount: Deactivated successfully. Jan 13 20:35:00.698164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount499629212.mount: Deactivated successfully. 
Jan 13 20:35:00.733266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3470186493.mount: Deactivated successfully. Jan 13 20:35:00.745623 containerd[1614]: time="2025-01-13T20:35:00.745125847Z" level=info msg="CreateContainer within sandbox \"fbbb61895d5d7d64866f53b2cc6590278426d1d7b39db309de3c5b509d5b0c57\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4a89064e5c237afda131e396d6f2a366efc2fefbe986a74d131e26ba89d5ac4b\"" Jan 13 20:35:00.748127 containerd[1614]: time="2025-01-13T20:35:00.747999444Z" level=info msg="StartContainer for \"4a89064e5c237afda131e396d6f2a366efc2fefbe986a74d131e26ba89d5ac4b\"" Jan 13 20:35:00.879064 containerd[1614]: time="2025-01-13T20:35:00.879009198Z" level=error msg="Failed to destroy network for sandbox \"99b0f8991ef50b1af914aa14f14906f340c607c65420667ad816954caba0bb35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.879607 containerd[1614]: time="2025-01-13T20:35:00.879571558Z" level=error msg="encountered an error cleaning up failed sandbox \"99b0f8991ef50b1af914aa14f14906f340c607c65420667ad816954caba0bb35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.879752 containerd[1614]: time="2025-01-13T20:35:00.879729077Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tdc7v,Uid:70116092-cc29-4334-811e-6f8b5c36f3c0,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"99b0f8991ef50b1af914aa14f14906f340c607c65420667ad816954caba0bb35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jan 13 20:35:00.880297 kubelet[3072]: E0113 20:35:00.880276 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99b0f8991ef50b1af914aa14f14906f340c607c65420667ad816954caba0bb35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.880585 kubelet[3072]: E0113 20:35:00.880571 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99b0f8991ef50b1af914aa14f14906f340c607c65420667ad816954caba0bb35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tdc7v" Jan 13 20:35:00.880672 kubelet[3072]: E0113 20:35:00.880663 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99b0f8991ef50b1af914aa14f14906f340c607c65420667ad816954caba0bb35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-tdc7v" Jan 13 20:35:00.880968 kubelet[3072]: E0113 20:35:00.880877 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-tdc7v_kube-system(70116092-cc29-4334-811e-6f8b5c36f3c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-tdc7v_kube-system(70116092-cc29-4334-811e-6f8b5c36f3c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99b0f8991ef50b1af914aa14f14906f340c607c65420667ad816954caba0bb35\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-tdc7v" podUID="70116092-cc29-4334-811e-6f8b5c36f3c0" Jan 13 20:35:00.907259 containerd[1614]: time="2025-01-13T20:35:00.907104451Z" level=error msg="Failed to destroy network for sandbox \"b7fb397a8fde7e7a7708d282f4d3b2e59715d554f62fdd77ff147adc89c3326a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.907698 containerd[1614]: time="2025-01-13T20:35:00.907667171Z" level=error msg="encountered an error cleaning up failed sandbox \"b7fb397a8fde7e7a7708d282f4d3b2e59715d554f62fdd77ff147adc89c3326a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.907822 containerd[1614]: time="2025-01-13T20:35:00.907800970Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-gbjkq,Uid:5418ad94-7e2e-4821-8b2b-1361c7326bfb,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"b7fb397a8fde7e7a7708d282f4d3b2e59715d554f62fdd77ff147adc89c3326a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.908952 kubelet[3072]: E0113 20:35:00.908594 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7fb397a8fde7e7a7708d282f4d3b2e59715d554f62fdd77ff147adc89c3326a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.908952 kubelet[3072]: E0113 20:35:00.908656 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7fb397a8fde7e7a7708d282f4d3b2e59715d554f62fdd77ff147adc89c3326a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-gbjkq" Jan 13 20:35:00.908952 kubelet[3072]: E0113 20:35:00.908681 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7fb397a8fde7e7a7708d282f4d3b2e59715d554f62fdd77ff147adc89c3326a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-gbjkq" Jan 13 20:35:00.909085 kubelet[3072]: E0113 20:35:00.908730 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ccf4fbb57-gbjkq_calico-apiserver(5418ad94-7e2e-4821-8b2b-1361c7326bfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ccf4fbb57-gbjkq_calico-apiserver(5418ad94-7e2e-4821-8b2b-1361c7326bfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7fb397a8fde7e7a7708d282f4d3b2e59715d554f62fdd77ff147adc89c3326a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-gbjkq" podUID="5418ad94-7e2e-4821-8b2b-1361c7326bfb" Jan 13 
20:35:00.923192 containerd[1614]: time="2025-01-13T20:35:00.923107516Z" level=error msg="Failed to destroy network for sandbox \"bff1260b6224c845be0e9cbcdc67a76faf4d63ae60d2b2ca39f94abca4270367\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.929354 containerd[1614]: time="2025-01-13T20:35:00.928954830Z" level=error msg="encountered an error cleaning up failed sandbox \"bff1260b6224c845be0e9cbcdc67a76faf4d63ae60d2b2ca39f94abca4270367\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.929597 containerd[1614]: time="2025-01-13T20:35:00.929500670Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5hc2j,Uid:d79efc93-b14e-4d5a-8c70-0155fb5a684a,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"bff1260b6224c845be0e9cbcdc67a76faf4d63ae60d2b2ca39f94abca4270367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.929952 containerd[1614]: time="2025-01-13T20:35:00.929029750Z" level=info msg="StartContainer for \"4a89064e5c237afda131e396d6f2a366efc2fefbe986a74d131e26ba89d5ac4b\" returns successfully" Jan 13 20:35:00.930542 kubelet[3072]: E0113 20:35:00.930306 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bff1260b6224c845be0e9cbcdc67a76faf4d63ae60d2b2ca39f94abca4270367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jan 13 20:35:00.930542 kubelet[3072]: E0113 20:35:00.930409 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bff1260b6224c845be0e9cbcdc67a76faf4d63ae60d2b2ca39f94abca4270367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5hc2j" Jan 13 20:35:00.930542 kubelet[3072]: E0113 20:35:00.930432 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bff1260b6224c845be0e9cbcdc67a76faf4d63ae60d2b2ca39f94abca4270367\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5hc2j" Jan 13 20:35:00.930712 kubelet[3072]: E0113 20:35:00.930492 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5hc2j_calico-system(d79efc93-b14e-4d5a-8c70-0155fb5a684a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5hc2j_calico-system(d79efc93-b14e-4d5a-8c70-0155fb5a684a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bff1260b6224c845be0e9cbcdc67a76faf4d63ae60d2b2ca39f94abca4270367\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5hc2j" podUID="d79efc93-b14e-4d5a-8c70-0155fb5a684a" Jan 13 20:35:00.963021 containerd[1614]: time="2025-01-13T20:35:00.962816438Z" level=error msg="Failed to destroy network for sandbox 
\"d56c7d0d71eb73c7866002e95acff1e1b317fb33407caf682d8356b9655b8d32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.963671 containerd[1614]: time="2025-01-13T20:35:00.963500117Z" level=error msg="encountered an error cleaning up failed sandbox \"d56c7d0d71eb73c7866002e95acff1e1b317fb33407caf682d8356b9655b8d32\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.963671 containerd[1614]: time="2025-01-13T20:35:00.963569837Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d54ws,Uid:a019a359-d21d-4317-8d1c-bd6d76806eac,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d56c7d0d71eb73c7866002e95acff1e1b317fb33407caf682d8356b9655b8d32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.966695 kubelet[3072]: E0113 20:35:00.965444 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d56c7d0d71eb73c7866002e95acff1e1b317fb33407caf682d8356b9655b8d32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.966695 kubelet[3072]: E0113 20:35:00.965505 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d56c7d0d71eb73c7866002e95acff1e1b317fb33407caf682d8356b9655b8d32\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-d54ws" Jan 13 20:35:00.966695 kubelet[3072]: E0113 20:35:00.965525 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d56c7d0d71eb73c7866002e95acff1e1b317fb33407caf682d8356b9655b8d32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-d54ws" Jan 13 20:35:00.966876 kubelet[3072]: E0113 20:35:00.965581 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-d54ws_kube-system(a019a359-d21d-4317-8d1c-bd6d76806eac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-d54ws_kube-system(a019a359-d21d-4317-8d1c-bd6d76806eac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d56c7d0d71eb73c7866002e95acff1e1b317fb33407caf682d8356b9655b8d32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-d54ws" podUID="a019a359-d21d-4317-8d1c-bd6d76806eac" Jan 13 20:35:00.967226 containerd[1614]: time="2025-01-13T20:35:00.967059514Z" level=error msg="Failed to destroy network for sandbox \"bbb1b8a23c5cec305f7ca62a9f9b0f48d199a40b1827c74ec626a2d84e11f8f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.969687 containerd[1614]: time="2025-01-13T20:35:00.969630671Z" level=error msg="encountered an error cleaning up 
failed sandbox \"bbb1b8a23c5cec305f7ca62a9f9b0f48d199a40b1827c74ec626a2d84e11f8f9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.969941 containerd[1614]: time="2025-01-13T20:35:00.969831111Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-j85nm,Uid:a12675bf-0fbc-45d7-9f8f-c29f2b87c216,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"bbb1b8a23c5cec305f7ca62a9f9b0f48d199a40b1827c74ec626a2d84e11f8f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.970373 kubelet[3072]: E0113 20:35:00.970181 3072 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbb1b8a23c5cec305f7ca62a9f9b0f48d199a40b1827c74ec626a2d84e11f8f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 20:35:00.970623 kubelet[3072]: E0113 20:35:00.970592 3072 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbb1b8a23c5cec305f7ca62a9f9b0f48d199a40b1827c74ec626a2d84e11f8f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-j85nm" Jan 13 20:35:00.970769 kubelet[3072]: E0113 20:35:00.970686 3072 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"bbb1b8a23c5cec305f7ca62a9f9b0f48d199a40b1827c74ec626a2d84e11f8f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-j85nm" Jan 13 20:35:00.970871 kubelet[3072]: E0113 20:35:00.970843 3072 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6ccf4fbb57-j85nm_calico-apiserver(a12675bf-0fbc-45d7-9f8f-c29f2b87c216)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6ccf4fbb57-j85nm_calico-apiserver(a12675bf-0fbc-45d7-9f8f-c29f2b87c216)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbb1b8a23c5cec305f7ca62a9f9b0f48d199a40b1827c74ec626a2d84e11f8f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-j85nm" podUID="a12675bf-0fbc-45d7-9f8f-c29f2b87c216" Jan 13 20:35:01.038754 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 20:35:01.039246 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 13 20:35:01.492540 kubelet[3072]: I0113 20:35:01.491785 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99b0f8991ef50b1af914aa14f14906f340c607c65420667ad816954caba0bb35" Jan 13 20:35:01.492930 containerd[1614]: time="2025-01-13T20:35:01.492408334Z" level=info msg="StopPodSandbox for \"99b0f8991ef50b1af914aa14f14906f340c607c65420667ad816954caba0bb35\"" Jan 13 20:35:01.494247 containerd[1614]: time="2025-01-13T20:35:01.493984532Z" level=info msg="Ensure that sandbox 99b0f8991ef50b1af914aa14f14906f340c607c65420667ad816954caba0bb35 in task-service has been cleanup successfully" Jan 13 20:35:01.494412 containerd[1614]: time="2025-01-13T20:35:01.494372172Z" level=info msg="TearDown network for sandbox \"99b0f8991ef50b1af914aa14f14906f340c607c65420667ad816954caba0bb35\" successfully" Jan 13 20:35:01.494412 containerd[1614]: time="2025-01-13T20:35:01.494392692Z" level=info msg="StopPodSandbox for \"99b0f8991ef50b1af914aa14f14906f340c607c65420667ad816954caba0bb35\" returns successfully" Jan 13 20:35:01.496347 containerd[1614]: time="2025-01-13T20:35:01.494925371Z" level=info msg="StopPodSandbox for \"8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c\"" Jan 13 20:35:01.496347 containerd[1614]: time="2025-01-13T20:35:01.495048571Z" level=info msg="TearDown network for sandbox \"8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c\" successfully" Jan 13 20:35:01.496347 containerd[1614]: time="2025-01-13T20:35:01.495059731Z" level=info msg="StopPodSandbox for \"8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c\" returns successfully" Jan 13 20:35:01.496514 containerd[1614]: time="2025-01-13T20:35:01.496484490Z" level=info msg="StopPodSandbox for \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\"" Jan 13 20:35:01.497897 containerd[1614]: time="2025-01-13T20:35:01.496582090Z" level=info msg="TearDown network for sandbox 
\"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\" successfully" Jan 13 20:35:01.497897 containerd[1614]: time="2025-01-13T20:35:01.497687569Z" level=info msg="StopPodSandbox for \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\" returns successfully" Jan 13 20:35:01.498110 containerd[1614]: time="2025-01-13T20:35:01.498011368Z" level=info msg="StopPodSandbox for \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\"" Jan 13 20:35:01.498110 containerd[1614]: time="2025-01-13T20:35:01.498088968Z" level=info msg="TearDown network for sandbox \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\" successfully" Jan 13 20:35:01.498110 containerd[1614]: time="2025-01-13T20:35:01.498098848Z" level=info msg="StopPodSandbox for \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\" returns successfully" Jan 13 20:35:01.499995 containerd[1614]: time="2025-01-13T20:35:01.499906407Z" level=info msg="StopPodSandbox for \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\"" Jan 13 20:35:01.500482 containerd[1614]: time="2025-01-13T20:35:01.500415526Z" level=info msg="TearDown network for sandbox \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\" successfully" Jan 13 20:35:01.500482 containerd[1614]: time="2025-01-13T20:35:01.500436806Z" level=info msg="StopPodSandbox for \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\" returns successfully" Jan 13 20:35:01.501452 kubelet[3072]: I0113 20:35:01.501387 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bff1260b6224c845be0e9cbcdc67a76faf4d63ae60d2b2ca39f94abca4270367" Jan 13 20:35:01.502993 containerd[1614]: time="2025-01-13T20:35:01.502706244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tdc7v,Uid:70116092-cc29-4334-811e-6f8b5c36f3c0,Namespace:kube-system,Attempt:5,}" Jan 13 20:35:01.505133 containerd[1614]: 
time="2025-01-13T20:35:01.505092922Z" level=info msg="StopPodSandbox for \"bff1260b6224c845be0e9cbcdc67a76faf4d63ae60d2b2ca39f94abca4270367\"" Jan 13 20:35:01.505763 containerd[1614]: time="2025-01-13T20:35:01.505736441Z" level=info msg="Ensure that sandbox bff1260b6224c845be0e9cbcdc67a76faf4d63ae60d2b2ca39f94abca4270367 in task-service has been cleanup successfully" Jan 13 20:35:01.508922 containerd[1614]: time="2025-01-13T20:35:01.508747158Z" level=info msg="TearDown network for sandbox \"bff1260b6224c845be0e9cbcdc67a76faf4d63ae60d2b2ca39f94abca4270367\" successfully" Jan 13 20:35:01.508922 containerd[1614]: time="2025-01-13T20:35:01.508786078Z" level=info msg="StopPodSandbox for \"bff1260b6224c845be0e9cbcdc67a76faf4d63ae60d2b2ca39f94abca4270367\" returns successfully" Jan 13 20:35:01.509411 containerd[1614]: time="2025-01-13T20:35:01.509301558Z" level=info msg="StopPodSandbox for \"c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9\"" Jan 13 20:35:01.509460 containerd[1614]: time="2025-01-13T20:35:01.509443517Z" level=info msg="TearDown network for sandbox \"c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9\" successfully" Jan 13 20:35:01.509605 containerd[1614]: time="2025-01-13T20:35:01.509460157Z" level=info msg="StopPodSandbox for \"c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9\" returns successfully" Jan 13 20:35:01.513079 containerd[1614]: time="2025-01-13T20:35:01.512890194Z" level=info msg="StopPodSandbox for \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\"" Jan 13 20:35:01.513079 containerd[1614]: time="2025-01-13T20:35:01.513068594Z" level=info msg="TearDown network for sandbox \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\" successfully" Jan 13 20:35:01.513254 containerd[1614]: time="2025-01-13T20:35:01.513097794Z" level=info msg="StopPodSandbox for \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\" returns successfully" Jan 13 20:35:01.514386 
kubelet[3072]: I0113 20:35:01.514081 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbb1b8a23c5cec305f7ca62a9f9b0f48d199a40b1827c74ec626a2d84e11f8f9" Jan 13 20:35:01.517305 containerd[1614]: time="2025-01-13T20:35:01.517240710Z" level=info msg="StopPodSandbox for \"bbb1b8a23c5cec305f7ca62a9f9b0f48d199a40b1827c74ec626a2d84e11f8f9\"" Jan 13 20:35:01.521430 containerd[1614]: time="2025-01-13T20:35:01.521380026Z" level=info msg="StopPodSandbox for \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\"" Jan 13 20:35:01.521820 containerd[1614]: time="2025-01-13T20:35:01.521738786Z" level=info msg="TearDown network for sandbox \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\" successfully" Jan 13 20:35:01.521820 containerd[1614]: time="2025-01-13T20:35:01.521761106Z" level=info msg="StopPodSandbox for \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\" returns successfully" Jan 13 20:35:01.523796 containerd[1614]: time="2025-01-13T20:35:01.523741664Z" level=info msg="StopPodSandbox for \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\"" Jan 13 20:35:01.523898 containerd[1614]: time="2025-01-13T20:35:01.523854904Z" level=info msg="TearDown network for sandbox \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\" successfully" Jan 13 20:35:01.523898 containerd[1614]: time="2025-01-13T20:35:01.523865624Z" level=info msg="StopPodSandbox for \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\" returns successfully" Jan 13 20:35:01.530952 containerd[1614]: time="2025-01-13T20:35:01.530786937Z" level=info msg="Ensure that sandbox bbb1b8a23c5cec305f7ca62a9f9b0f48d199a40b1827c74ec626a2d84e11f8f9 in task-service has been cleanup successfully" Jan 13 20:35:01.531832 containerd[1614]: time="2025-01-13T20:35:01.531803336Z" level=info msg="TearDown network for sandbox \"bbb1b8a23c5cec305f7ca62a9f9b0f48d199a40b1827c74ec626a2d84e11f8f9\" 
successfully" Jan 13 20:35:01.532026 containerd[1614]: time="2025-01-13T20:35:01.531939296Z" level=info msg="StopPodSandbox for \"bbb1b8a23c5cec305f7ca62a9f9b0f48d199a40b1827c74ec626a2d84e11f8f9\" returns successfully" Jan 13 20:35:01.532650 containerd[1614]: time="2025-01-13T20:35:01.530811537Z" level=info msg="StopPodSandbox for \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\"" Jan 13 20:35:01.532650 containerd[1614]: time="2025-01-13T20:35:01.532341696Z" level=info msg="TearDown network for sandbox \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\" successfully" Jan 13 20:35:01.532650 containerd[1614]: time="2025-01-13T20:35:01.532355736Z" level=info msg="StopPodSandbox for \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\" returns successfully" Jan 13 20:35:01.533199 containerd[1614]: time="2025-01-13T20:35:01.533135615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5hc2j,Uid:d79efc93-b14e-4d5a-8c70-0155fb5a684a,Namespace:calico-system,Attempt:6,}" Jan 13 20:35:01.533637 containerd[1614]: time="2025-01-13T20:35:01.533603454Z" level=info msg="StopPodSandbox for \"782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147\"" Jan 13 20:35:01.534127 containerd[1614]: time="2025-01-13T20:35:01.533787014Z" level=info msg="TearDown network for sandbox \"782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147\" successfully" Jan 13 20:35:01.534127 containerd[1614]: time="2025-01-13T20:35:01.533806374Z" level=info msg="StopPodSandbox for \"782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147\" returns successfully" Jan 13 20:35:01.534640 containerd[1614]: time="2025-01-13T20:35:01.534610134Z" level=info msg="StopPodSandbox for \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\"" Jan 13 20:35:01.535468 containerd[1614]: time="2025-01-13T20:35:01.534804773Z" level=info msg="TearDown network for sandbox 
\"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\" successfully" Jan 13 20:35:01.535468 containerd[1614]: time="2025-01-13T20:35:01.534824053Z" level=info msg="StopPodSandbox for \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\" returns successfully" Jan 13 20:35:01.535585 kubelet[3072]: I0113 20:35:01.535104 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7fb397a8fde7e7a7708d282f4d3b2e59715d554f62fdd77ff147adc89c3326a" Jan 13 20:35:01.537846 containerd[1614]: time="2025-01-13T20:35:01.537649051Z" level=info msg="StopPodSandbox for \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\"" Jan 13 20:35:01.538803 containerd[1614]: time="2025-01-13T20:35:01.538340770Z" level=info msg="TearDown network for sandbox \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\" successfully" Jan 13 20:35:01.539245 containerd[1614]: time="2025-01-13T20:35:01.539013409Z" level=info msg="StopPodSandbox for \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\" returns successfully" Jan 13 20:35:01.540845 containerd[1614]: time="2025-01-13T20:35:01.540813288Z" level=info msg="StopPodSandbox for \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\"" Jan 13 20:35:01.541366 containerd[1614]: time="2025-01-13T20:35:01.540999887Z" level=info msg="TearDown network for sandbox \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\" successfully" Jan 13 20:35:01.541366 containerd[1614]: time="2025-01-13T20:35:01.541014767Z" level=info msg="StopPodSandbox for \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\" returns successfully" Jan 13 20:35:01.541366 containerd[1614]: time="2025-01-13T20:35:01.541094687Z" level=info msg="StopPodSandbox for \"b7fb397a8fde7e7a7708d282f4d3b2e59715d554f62fdd77ff147adc89c3326a\"" Jan 13 20:35:01.541608 containerd[1614]: time="2025-01-13T20:35:01.541582367Z" level=info msg="Ensure that sandbox 
b7fb397a8fde7e7a7708d282f4d3b2e59715d554f62fdd77ff147adc89c3326a in task-service has been cleanup successfully" Jan 13 20:35:01.541856 containerd[1614]: time="2025-01-13T20:35:01.541838527Z" level=info msg="TearDown network for sandbox \"b7fb397a8fde7e7a7708d282f4d3b2e59715d554f62fdd77ff147adc89c3326a\" successfully" Jan 13 20:35:01.542555 containerd[1614]: time="2025-01-13T20:35:01.541915047Z" level=info msg="StopPodSandbox for \"b7fb397a8fde7e7a7708d282f4d3b2e59715d554f62fdd77ff147adc89c3326a\" returns successfully" Jan 13 20:35:01.544754 containerd[1614]: time="2025-01-13T20:35:01.544388724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-j85nm,Uid:a12675bf-0fbc-45d7-9f8f-c29f2b87c216,Namespace:calico-apiserver,Attempt:5,}" Jan 13 20:35:01.544754 containerd[1614]: time="2025-01-13T20:35:01.544719164Z" level=info msg="StopPodSandbox for \"0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d\"" Jan 13 20:35:01.544886 containerd[1614]: time="2025-01-13T20:35:01.544842324Z" level=info msg="TearDown network for sandbox \"0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d\" successfully" Jan 13 20:35:01.544886 containerd[1614]: time="2025-01-13T20:35:01.544871044Z" level=info msg="StopPodSandbox for \"0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d\" returns successfully" Jan 13 20:35:01.556815 containerd[1614]: time="2025-01-13T20:35:01.556759992Z" level=info msg="StopPodSandbox for \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\"" Jan 13 20:35:01.556927 containerd[1614]: time="2025-01-13T20:35:01.556866232Z" level=info msg="TearDown network for sandbox \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\" successfully" Jan 13 20:35:01.556927 containerd[1614]: time="2025-01-13T20:35:01.556876912Z" level=info msg="StopPodSandbox for \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\" returns successfully" Jan 13 20:35:01.557506 
containerd[1614]: time="2025-01-13T20:35:01.557479232Z" level=info msg="StopPodSandbox for \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\"" Jan 13 20:35:01.557688 containerd[1614]: time="2025-01-13T20:35:01.557556672Z" level=info msg="TearDown network for sandbox \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\" successfully" Jan 13 20:35:01.557688 containerd[1614]: time="2025-01-13T20:35:01.557569392Z" level=info msg="StopPodSandbox for \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\" returns successfully" Jan 13 20:35:01.558093 containerd[1614]: time="2025-01-13T20:35:01.558060431Z" level=info msg="StopPodSandbox for \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\"" Jan 13 20:35:01.558228 containerd[1614]: time="2025-01-13T20:35:01.558155311Z" level=info msg="TearDown network for sandbox \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\" successfully" Jan 13 20:35:01.558228 containerd[1614]: time="2025-01-13T20:35:01.558170911Z" level=info msg="StopPodSandbox for \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\" returns successfully" Jan 13 20:35:01.562220 containerd[1614]: time="2025-01-13T20:35:01.562135347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-gbjkq,Uid:5418ad94-7e2e-4821-8b2b-1361c7326bfb,Namespace:calico-apiserver,Attempt:5,}" Jan 13 20:35:01.565164 kubelet[3072]: I0113 20:35:01.565055 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d56c7d0d71eb73c7866002e95acff1e1b317fb33407caf682d8356b9655b8d32" Jan 13 20:35:01.566396 containerd[1614]: time="2025-01-13T20:35:01.566299903Z" level=info msg="StopPodSandbox for \"d56c7d0d71eb73c7866002e95acff1e1b317fb33407caf682d8356b9655b8d32\"" Jan 13 20:35:01.566775 containerd[1614]: time="2025-01-13T20:35:01.566716263Z" level=info msg="Ensure that sandbox 
d56c7d0d71eb73c7866002e95acff1e1b317fb33407caf682d8356b9655b8d32 in task-service has been cleanup successfully" Jan 13 20:35:01.568393 containerd[1614]: time="2025-01-13T20:35:01.568310941Z" level=info msg="TearDown network for sandbox \"d56c7d0d71eb73c7866002e95acff1e1b317fb33407caf682d8356b9655b8d32\" successfully" Jan 13 20:35:01.568393 containerd[1614]: time="2025-01-13T20:35:01.568385621Z" level=info msg="StopPodSandbox for \"d56c7d0d71eb73c7866002e95acff1e1b317fb33407caf682d8356b9655b8d32\" returns successfully" Jan 13 20:35:01.571800 containerd[1614]: time="2025-01-13T20:35:01.571736938Z" level=info msg="StopPodSandbox for \"482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b\"" Jan 13 20:35:01.571996 containerd[1614]: time="2025-01-13T20:35:01.571868138Z" level=info msg="TearDown network for sandbox \"482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b\" successfully" Jan 13 20:35:01.571996 containerd[1614]: time="2025-01-13T20:35:01.571880858Z" level=info msg="StopPodSandbox for \"482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b\" returns successfully" Jan 13 20:35:01.573246 containerd[1614]: time="2025-01-13T20:35:01.573150857Z" level=info msg="StopPodSandbox for \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\"" Jan 13 20:35:01.573738 containerd[1614]: time="2025-01-13T20:35:01.573669576Z" level=info msg="TearDown network for sandbox \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\" successfully" Jan 13 20:35:01.573738 containerd[1614]: time="2025-01-13T20:35:01.573692776Z" level=info msg="StopPodSandbox for \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\" returns successfully" Jan 13 20:35:01.574959 containerd[1614]: time="2025-01-13T20:35:01.574795255Z" level=info msg="StopPodSandbox for \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\"" Jan 13 20:35:01.574959 containerd[1614]: time="2025-01-13T20:35:01.574898015Z" level=info 
msg="TearDown network for sandbox \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\" successfully" Jan 13 20:35:01.574959 containerd[1614]: time="2025-01-13T20:35:01.574908095Z" level=info msg="StopPodSandbox for \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\" returns successfully" Jan 13 20:35:01.576760 containerd[1614]: time="2025-01-13T20:35:01.576009374Z" level=info msg="StopPodSandbox for \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\"" Jan 13 20:35:01.577518 containerd[1614]: time="2025-01-13T20:35:01.577482773Z" level=info msg="TearDown network for sandbox \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\" successfully" Jan 13 20:35:01.577701 containerd[1614]: time="2025-01-13T20:35:01.577680653Z" level=info msg="StopPodSandbox for \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\" returns successfully" Jan 13 20:35:01.580236 containerd[1614]: time="2025-01-13T20:35:01.580183090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d54ws,Uid:a019a359-d21d-4317-8d1c-bd6d76806eac,Namespace:kube-system,Attempt:5,}" Jan 13 20:35:01.603305 kubelet[3072]: I0113 20:35:01.603274 3072 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="132a114f613e133482aae87ccd3e8afbc9646152092060b91730b1507c536328" Jan 13 20:35:01.605195 containerd[1614]: time="2025-01-13T20:35:01.605094106Z" level=info msg="StopPodSandbox for \"132a114f613e133482aae87ccd3e8afbc9646152092060b91730b1507c536328\"" Jan 13 20:35:01.606841 containerd[1614]: time="2025-01-13T20:35:01.606596225Z" level=info msg="Ensure that sandbox 132a114f613e133482aae87ccd3e8afbc9646152092060b91730b1507c536328 in task-service has been cleanup successfully" Jan 13 20:35:01.608997 kubelet[3072]: I0113 20:35:01.608952 3072 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-mq9wk" podStartSLOduration=2.411390507 
podStartE2EDuration="18.608910743s" podCreationTimestamp="2025-01-13 20:34:43 +0000 UTC" firstStartedPulling="2025-01-13 20:34:44.469401166 +0000 UTC m=+22.564443531" lastFinishedPulling="2025-01-13 20:35:00.666921402 +0000 UTC m=+38.761963767" observedRunningTime="2025-01-13 20:35:01.608546023 +0000 UTC m=+39.703588388" watchObservedRunningTime="2025-01-13 20:35:01.608910743 +0000 UTC m=+39.703953108" Jan 13 20:35:01.615382 containerd[1614]: time="2025-01-13T20:35:01.614923337Z" level=info msg="TearDown network for sandbox \"132a114f613e133482aae87ccd3e8afbc9646152092060b91730b1507c536328\" successfully" Jan 13 20:35:01.615382 containerd[1614]: time="2025-01-13T20:35:01.615240177Z" level=info msg="StopPodSandbox for \"132a114f613e133482aae87ccd3e8afbc9646152092060b91730b1507c536328\" returns successfully" Jan 13 20:35:01.618441 containerd[1614]: time="2025-01-13T20:35:01.618398494Z" level=info msg="StopPodSandbox for \"181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2\"" Jan 13 20:35:01.620750 containerd[1614]: time="2025-01-13T20:35:01.620621252Z" level=info msg="TearDown network for sandbox \"181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2\" successfully" Jan 13 20:35:01.620750 containerd[1614]: time="2025-01-13T20:35:01.620648692Z" level=info msg="StopPodSandbox for \"181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2\" returns successfully" Jan 13 20:35:01.622026 containerd[1614]: time="2025-01-13T20:35:01.621794251Z" level=info msg="StopPodSandbox for \"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\"" Jan 13 20:35:01.622026 containerd[1614]: time="2025-01-13T20:35:01.621941770Z" level=info msg="TearDown network for sandbox \"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\" successfully" Jan 13 20:35:01.622026 containerd[1614]: time="2025-01-13T20:35:01.621954250Z" level=info msg="StopPodSandbox for \"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\" returns 
successfully" Jan 13 20:35:01.622810 containerd[1614]: time="2025-01-13T20:35:01.622636730Z" level=info msg="StopPodSandbox for \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\"" Jan 13 20:35:01.623076 containerd[1614]: time="2025-01-13T20:35:01.622916050Z" level=info msg="TearDown network for sandbox \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\" successfully" Jan 13 20:35:01.623076 containerd[1614]: time="2025-01-13T20:35:01.622935890Z" level=info msg="StopPodSandbox for \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\" returns successfully" Jan 13 20:35:01.623492 containerd[1614]: time="2025-01-13T20:35:01.623463449Z" level=info msg="StopPodSandbox for \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\"" Jan 13 20:35:01.623492 containerd[1614]: time="2025-01-13T20:35:01.623548089Z" level=info msg="TearDown network for sandbox \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\" successfully" Jan 13 20:35:01.623492 containerd[1614]: time="2025-01-13T20:35:01.623559689Z" level=info msg="StopPodSandbox for \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\" returns successfully" Jan 13 20:35:01.624614 containerd[1614]: time="2025-01-13T20:35:01.624304648Z" level=info msg="StopPodSandbox for \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\"" Jan 13 20:35:01.625694 containerd[1614]: time="2025-01-13T20:35:01.625660167Z" level=info msg="TearDown network for sandbox \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\" successfully" Jan 13 20:35:01.625694 containerd[1614]: time="2025-01-13T20:35:01.625684367Z" level=info msg="StopPodSandbox for \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\" returns successfully" Jan 13 20:35:01.630790 containerd[1614]: time="2025-01-13T20:35:01.630746642Z" level=info msg="StopPodSandbox for \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\"" Jan 13 
20:35:01.631337 containerd[1614]: time="2025-01-13T20:35:01.630854402Z" level=info msg="TearDown network for sandbox \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\" successfully" Jan 13 20:35:01.631337 containerd[1614]: time="2025-01-13T20:35:01.630864802Z" level=info msg="StopPodSandbox for \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\" returns successfully" Jan 13 20:35:01.632068 containerd[1614]: time="2025-01-13T20:35:01.631944081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689464bd4f-rlk45,Uid:34cc4b95-fb25-4302-8175-d6695afbf832,Namespace:calico-system,Attempt:7,}" Jan 13 20:35:01.693705 systemd[1]: run-netns-cni\x2de508cd38\x2d7672\x2d1e8a\x2d958c\x2da9b786ae56b3.mount: Deactivated successfully. Jan 13 20:35:01.693857 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bff1260b6224c845be0e9cbcdc67a76faf4d63ae60d2b2ca39f94abca4270367-shm.mount: Deactivated successfully. Jan 13 20:35:01.693936 systemd[1]: run-netns-cni\x2ddd6f8857\x2df32c\x2d2542\x2d7447\x2d65268bf05378.mount: Deactivated successfully. Jan 13 20:35:01.694004 systemd[1]: run-netns-cni\x2d0a5452bf\x2d31a5\x2d2703\x2d9391\x2d836982c464d5.mount: Deactivated successfully. Jan 13 20:35:01.694071 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d56c7d0d71eb73c7866002e95acff1e1b317fb33407caf682d8356b9655b8d32-shm.mount: Deactivated successfully. Jan 13 20:35:01.694153 systemd[1]: run-netns-cni\x2dad2bf6c2\x2d8ad3\x2d266d\x2d6901\x2da987fda29bef.mount: Deactivated successfully. Jan 13 20:35:01.694248 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bbb1b8a23c5cec305f7ca62a9f9b0f48d199a40b1827c74ec626a2d84e11f8f9-shm.mount: Deactivated successfully. Jan 13 20:35:01.694789 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b7fb397a8fde7e7a7708d282f4d3b2e59715d554f62fdd77ff147adc89c3326a-shm.mount: Deactivated successfully. 
Jan 13 20:35:01.694912 systemd[1]: run-netns-cni\x2d1176494c\x2d326c\x2d2a94\x2d3b77\x2d4542a4ac1cc9.mount: Deactivated successfully. Jan 13 20:35:01.694989 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99b0f8991ef50b1af914aa14f14906f340c607c65420667ad816954caba0bb35-shm.mount: Deactivated successfully. Jan 13 20:35:01.695064 systemd[1]: run-netns-cni\x2d378f2636\x2d12d1\x2db35e\x2dd39e\x2dffbf8d6e8a74.mount: Deactivated successfully. Jan 13 20:35:02.041591 systemd-networkd[1238]: cali3698d8b0290: Link UP Jan 13 20:35:02.042587 systemd-networkd[1238]: cali3698d8b0290: Gained carrier Jan 13 20:35:02.098535 containerd[1614]: 2025-01-13 20:35:01.575 [INFO][4892] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:35:02.098535 containerd[1614]: 2025-01-13 20:35:01.637 [INFO][4892] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--tdc7v-eth0 coredns-76f75df574- kube-system 70116092-cc29-4334-811e-6f8b5c36f3c0 683 0 2025-01-13 20:34:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4152-2-0-6-5d4da4afb6 coredns-76f75df574-tdc7v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3698d8b0290 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c" Namespace="kube-system" Pod="coredns-76f75df574-tdc7v" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--tdc7v-" Jan 13 20:35:02.098535 containerd[1614]: 2025-01-13 20:35:01.637 [INFO][4892] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c" Namespace="kube-system" Pod="coredns-76f75df574-tdc7v" 
WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--tdc7v-eth0" Jan 13 20:35:02.098535 containerd[1614]: 2025-01-13 20:35:01.898 [INFO][4954] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c" HandleID="k8s-pod-network.649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c" Workload="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--tdc7v-eth0" Jan 13 20:35:02.098535 containerd[1614]: 2025-01-13 20:35:01.927 [INFO][4954] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c" HandleID="k8s-pod-network.649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c" Workload="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--tdc7v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003019a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4152-2-0-6-5d4da4afb6", "pod":"coredns-76f75df574-tdc7v", "timestamp":"2025-01-13 20:35:01.898599187 +0000 UTC"}, Hostname:"ci-4152-2-0-6-5d4da4afb6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:35:02.098535 containerd[1614]: 2025-01-13 20:35:01.927 [INFO][4954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:35:02.098535 containerd[1614]: 2025-01-13 20:35:01.927 [INFO][4954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:35:02.098535 containerd[1614]: 2025-01-13 20:35:01.927 [INFO][4954] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152-2-0-6-5d4da4afb6' Jan 13 20:35:02.098535 containerd[1614]: 2025-01-13 20:35:01.931 [INFO][4954] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.098535 containerd[1614]: 2025-01-13 20:35:01.942 [INFO][4954] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.098535 containerd[1614]: 2025-01-13 20:35:01.957 [INFO][4954] ipam/ipam.go 489: Trying affinity for 192.168.92.128/26 host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.098535 containerd[1614]: 2025-01-13 20:35:01.967 [INFO][4954] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.128/26 host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.098535 containerd[1614]: 2025-01-13 20:35:01.977 [INFO][4954] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.098535 containerd[1614]: 2025-01-13 20:35:01.977 [INFO][4954] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.098535 containerd[1614]: 2025-01-13 20:35:01.988 [INFO][4954] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c Jan 13 20:35:02.098535 containerd[1614]: 2025-01-13 20:35:02.004 [INFO][4954] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.098535 containerd[1614]: 2025-01-13 20:35:02.022 [INFO][4954] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.92.129/26] block=192.168.92.128/26 handle="k8s-pod-network.649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.098535 containerd[1614]: 2025-01-13 20:35:02.022 [INFO][4954] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.129/26] handle="k8s-pod-network.649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.098535 containerd[1614]: 2025-01-13 20:35:02.022 [INFO][4954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:35:02.098535 containerd[1614]: 2025-01-13 20:35:02.022 [INFO][4954] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.129/26] IPv6=[] ContainerID="649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c" HandleID="k8s-pod-network.649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c" Workload="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--tdc7v-eth0" Jan 13 20:35:02.099853 containerd[1614]: 2025-01-13 20:35:02.030 [INFO][4892] cni-plugin/k8s.go 386: Populated endpoint ContainerID="649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c" Namespace="kube-system" Pod="coredns-76f75df574-tdc7v" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--tdc7v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--tdc7v-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"70116092-cc29-4334-811e-6f8b5c36f3c0", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-0-6-5d4da4afb6", ContainerID:"", Pod:"coredns-76f75df574-tdc7v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3698d8b0290", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:02.099853 containerd[1614]: 2025-01-13 20:35:02.031 [INFO][4892] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.129/32] ContainerID="649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c" Namespace="kube-system" Pod="coredns-76f75df574-tdc7v" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--tdc7v-eth0" Jan 13 20:35:02.099853 containerd[1614]: 2025-01-13 20:35:02.031 [INFO][4892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3698d8b0290 ContainerID="649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c" Namespace="kube-system" Pod="coredns-76f75df574-tdc7v" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--tdc7v-eth0" Jan 13 20:35:02.099853 containerd[1614]: 2025-01-13 20:35:02.044 [INFO][4892] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c" Namespace="kube-system" Pod="coredns-76f75df574-tdc7v" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--tdc7v-eth0" Jan 13 20:35:02.099853 containerd[1614]: 2025-01-13 20:35:02.054 [INFO][4892] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c" Namespace="kube-system" Pod="coredns-76f75df574-tdc7v" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--tdc7v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--tdc7v-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"70116092-cc29-4334-811e-6f8b5c36f3c0", ResourceVersion:"683", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-0-6-5d4da4afb6", ContainerID:"649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c", Pod:"coredns-76f75df574-tdc7v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3698d8b0290", MAC:"d2:90:1e:2b:4e:b2", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:02.099853 containerd[1614]: 2025-01-13 20:35:02.090 [INFO][4892] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c" Namespace="kube-system" Pod="coredns-76f75df574-tdc7v" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--tdc7v-eth0" Jan 13 20:35:02.137990 systemd-networkd[1238]: cali32919f45c30: Link UP Jan 13 20:35:02.138693 systemd-networkd[1238]: cali32919f45c30: Gained carrier Jan 13 20:35:02.162869 containerd[1614]: 2025-01-13 20:35:01.669 [INFO][4910] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:35:02.162869 containerd[1614]: 2025-01-13 20:35:01.750 [INFO][4910] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152--2--0--6--5d4da4afb6-k8s-csi--node--driver--5hc2j-eth0 csi-node-driver- calico-system d79efc93-b14e-4d5a-8c70-0155fb5a684a 559 0 2025-01-13 20:34:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4152-2-0-6-5d4da4afb6 csi-node-driver-5hc2j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali32919f45c30 [] []}} 
ContainerID="baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d" Namespace="calico-system" Pod="csi-node-driver-5hc2j" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-csi--node--driver--5hc2j-" Jan 13 20:35:02.162869 containerd[1614]: 2025-01-13 20:35:01.750 [INFO][4910] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d" Namespace="calico-system" Pod="csi-node-driver-5hc2j" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-csi--node--driver--5hc2j-eth0" Jan 13 20:35:02.162869 containerd[1614]: 2025-01-13 20:35:01.955 [INFO][4977] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d" HandleID="k8s-pod-network.baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d" Workload="ci--4152--2--0--6--5d4da4afb6-k8s-csi--node--driver--5hc2j-eth0" Jan 13 20:35:02.162869 containerd[1614]: 2025-01-13 20:35:01.980 [INFO][4977] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d" HandleID="k8s-pod-network.baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d" Workload="ci--4152--2--0--6--5d4da4afb6-k8s-csi--node--driver--5hc2j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000294630), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4152-2-0-6-5d4da4afb6", "pod":"csi-node-driver-5hc2j", "timestamp":"2025-01-13 20:35:01.955627853 +0000 UTC"}, Hostname:"ci-4152-2-0-6-5d4da4afb6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:35:02.162869 containerd[1614]: 2025-01-13 20:35:01.981 [INFO][4977] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 13 20:35:02.162869 containerd[1614]: 2025-01-13 20:35:02.023 [INFO][4977] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:35:02.162869 containerd[1614]: 2025-01-13 20:35:02.023 [INFO][4977] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152-2-0-6-5d4da4afb6' Jan 13 20:35:02.162869 containerd[1614]: 2025-01-13 20:35:02.027 [INFO][4977] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.162869 containerd[1614]: 2025-01-13 20:35:02.054 [INFO][4977] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.162869 containerd[1614]: 2025-01-13 20:35:02.076 [INFO][4977] ipam/ipam.go 489: Trying affinity for 192.168.92.128/26 host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.162869 containerd[1614]: 2025-01-13 20:35:02.082 [INFO][4977] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.128/26 host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.162869 containerd[1614]: 2025-01-13 20:35:02.093 [INFO][4977] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.162869 containerd[1614]: 2025-01-13 20:35:02.094 [INFO][4977] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.162869 containerd[1614]: 2025-01-13 20:35:02.098 [INFO][4977] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d Jan 13 20:35:02.162869 containerd[1614]: 2025-01-13 20:35:02.109 [INFO][4977] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d" 
host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.162869 containerd[1614]: 2025-01-13 20:35:02.127 [INFO][4977] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.130/26] block=192.168.92.128/26 handle="k8s-pod-network.baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.162869 containerd[1614]: 2025-01-13 20:35:02.127 [INFO][4977] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.130/26] handle="k8s-pod-network.baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.162869 containerd[1614]: 2025-01-13 20:35:02.127 [INFO][4977] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:35:02.162869 containerd[1614]: 2025-01-13 20:35:02.127 [INFO][4977] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.130/26] IPv6=[] ContainerID="baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d" HandleID="k8s-pod-network.baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d" Workload="ci--4152--2--0--6--5d4da4afb6-k8s-csi--node--driver--5hc2j-eth0" Jan 13 20:35:02.163698 containerd[1614]: 2025-01-13 20:35:02.131 [INFO][4910] cni-plugin/k8s.go 386: Populated endpoint ContainerID="baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d" Namespace="calico-system" Pod="csi-node-driver-5hc2j" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-csi--node--driver--5hc2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--0--6--5d4da4afb6-k8s-csi--node--driver--5hc2j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d79efc93-b14e-4d5a-8c70-0155fb5a684a", ResourceVersion:"559", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-0-6-5d4da4afb6", ContainerID:"", Pod:"csi-node-driver-5hc2j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali32919f45c30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:02.163698 containerd[1614]: 2025-01-13 20:35:02.132 [INFO][4910] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.130/32] ContainerID="baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d" Namespace="calico-system" Pod="csi-node-driver-5hc2j" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-csi--node--driver--5hc2j-eth0" Jan 13 20:35:02.163698 containerd[1614]: 2025-01-13 20:35:02.132 [INFO][4910] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32919f45c30 ContainerID="baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d" Namespace="calico-system" Pod="csi-node-driver-5hc2j" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-csi--node--driver--5hc2j-eth0" Jan 13 20:35:02.163698 containerd[1614]: 2025-01-13 20:35:02.136 [INFO][4910] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d" Namespace="calico-system" Pod="csi-node-driver-5hc2j" 
WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-csi--node--driver--5hc2j-eth0" Jan 13 20:35:02.163698 containerd[1614]: 2025-01-13 20:35:02.138 [INFO][4910] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d" Namespace="calico-system" Pod="csi-node-driver-5hc2j" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-csi--node--driver--5hc2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--0--6--5d4da4afb6-k8s-csi--node--driver--5hc2j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d79efc93-b14e-4d5a-8c70-0155fb5a684a", ResourceVersion:"559", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-0-6-5d4da4afb6", ContainerID:"baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d", Pod:"csi-node-driver-5hc2j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali32919f45c30", MAC:"de:10:dc:12:3b:88", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:02.163698 containerd[1614]: 2025-01-13 20:35:02.158 [INFO][4910] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d" Namespace="calico-system" Pod="csi-node-driver-5hc2j" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-csi--node--driver--5hc2j-eth0" Jan 13 20:35:02.167721 containerd[1614]: time="2025-01-13T20:35:02.166755934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:35:02.167721 containerd[1614]: time="2025-01-13T20:35:02.166831814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:35:02.167721 containerd[1614]: time="2025-01-13T20:35:02.166847414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:02.167721 containerd[1614]: time="2025-01-13T20:35:02.166956414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:02.230124 containerd[1614]: time="2025-01-13T20:35:02.229520995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:35:02.230554 containerd[1614]: time="2025-01-13T20:35:02.230349514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:35:02.230554 containerd[1614]: time="2025-01-13T20:35:02.230374194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:02.230554 containerd[1614]: time="2025-01-13T20:35:02.230493314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:02.244542 systemd-networkd[1238]: caliba9fa90dd07: Link UP Jan 13 20:35:02.244786 systemd-networkd[1238]: caliba9fa90dd07: Gained carrier Jan 13 20:35:02.273076 containerd[1614]: 2025-01-13 20:35:01.780 [INFO][4960] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:35:02.273076 containerd[1614]: 2025-01-13 20:35:01.820 [INFO][4960] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152--2--0--6--5d4da4afb6-k8s-calico--kube--controllers--689464bd4f--rlk45-eth0 calico-kube-controllers-689464bd4f- calico-system 34cc4b95-fb25-4302-8175-d6695afbf832 679 0 2025-01-13 20:34:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:689464bd4f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4152-2-0-6-5d4da4afb6 calico-kube-controllers-689464bd4f-rlk45 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliba9fa90dd07 [] []}} ContainerID="69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce" Namespace="calico-system" Pod="calico-kube-controllers-689464bd4f-rlk45" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--kube--controllers--689464bd4f--rlk45-" Jan 13 20:35:02.273076 containerd[1614]: 2025-01-13 20:35:01.821 [INFO][4960] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce" Namespace="calico-system" Pod="calico-kube-controllers-689464bd4f-rlk45" 
WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--kube--controllers--689464bd4f--rlk45-eth0" Jan 13 20:35:02.273076 containerd[1614]: 2025-01-13 20:35:01.989 [INFO][4992] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce" HandleID="k8s-pod-network.69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce" Workload="ci--4152--2--0--6--5d4da4afb6-k8s-calico--kube--controllers--689464bd4f--rlk45-eth0" Jan 13 20:35:02.273076 containerd[1614]: 2025-01-13 20:35:02.013 [INFO][4992] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce" HandleID="k8s-pod-network.69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce" Workload="ci--4152--2--0--6--5d4da4afb6-k8s-calico--kube--controllers--689464bd4f--rlk45-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d63c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4152-2-0-6-5d4da4afb6", "pod":"calico-kube-controllers-689464bd4f-rlk45", "timestamp":"2025-01-13 20:35:01.989473181 +0000 UTC"}, Hostname:"ci-4152-2-0-6-5d4da4afb6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:35:02.273076 containerd[1614]: 2025-01-13 20:35:02.013 [INFO][4992] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:35:02.273076 containerd[1614]: 2025-01-13 20:35:02.127 [INFO][4992] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:35:02.273076 containerd[1614]: 2025-01-13 20:35:02.128 [INFO][4992] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152-2-0-6-5d4da4afb6' Jan 13 20:35:02.273076 containerd[1614]: 2025-01-13 20:35:02.135 [INFO][4992] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.273076 containerd[1614]: 2025-01-13 20:35:02.149 [INFO][4992] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.273076 containerd[1614]: 2025-01-13 20:35:02.164 [INFO][4992] ipam/ipam.go 489: Trying affinity for 192.168.92.128/26 host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.273076 containerd[1614]: 2025-01-13 20:35:02.169 [INFO][4992] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.128/26 host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.273076 containerd[1614]: 2025-01-13 20:35:02.179 [INFO][4992] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.273076 containerd[1614]: 2025-01-13 20:35:02.179 [INFO][4992] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.273076 containerd[1614]: 2025-01-13 20:35:02.184 [INFO][4992] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce Jan 13 20:35:02.273076 containerd[1614]: 2025-01-13 20:35:02.195 [INFO][4992] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.273076 containerd[1614]: 2025-01-13 20:35:02.208 [INFO][4992] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.92.131/26] block=192.168.92.128/26 handle="k8s-pod-network.69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.273076 containerd[1614]: 2025-01-13 20:35:02.208 [INFO][4992] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.131/26] handle="k8s-pod-network.69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.273076 containerd[1614]: 2025-01-13 20:35:02.211 [INFO][4992] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:35:02.273076 containerd[1614]: 2025-01-13 20:35:02.212 [INFO][4992] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.131/26] IPv6=[] ContainerID="69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce" HandleID="k8s-pod-network.69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce" Workload="ci--4152--2--0--6--5d4da4afb6-k8s-calico--kube--controllers--689464bd4f--rlk45-eth0" Jan 13 20:35:02.275463 containerd[1614]: 2025-01-13 20:35:02.228 [INFO][4960] cni-plugin/k8s.go 386: Populated endpoint ContainerID="69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce" Namespace="calico-system" Pod="calico-kube-controllers-689464bd4f-rlk45" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--kube--controllers--689464bd4f--rlk45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--0--6--5d4da4afb6-k8s-calico--kube--controllers--689464bd4f--rlk45-eth0", GenerateName:"calico-kube-controllers-689464bd4f-", Namespace:"calico-system", SelfLink:"", UID:"34cc4b95-fb25-4302-8175-d6695afbf832", ResourceVersion:"679", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"689464bd4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-0-6-5d4da4afb6", ContainerID:"", Pod:"calico-kube-controllers-689464bd4f-rlk45", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliba9fa90dd07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:02.275463 containerd[1614]: 2025-01-13 20:35:02.229 [INFO][4960] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.131/32] ContainerID="69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce" Namespace="calico-system" Pod="calico-kube-controllers-689464bd4f-rlk45" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--kube--controllers--689464bd4f--rlk45-eth0" Jan 13 20:35:02.275463 containerd[1614]: 2025-01-13 20:35:02.229 [INFO][4960] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba9fa90dd07 ContainerID="69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce" Namespace="calico-system" Pod="calico-kube-controllers-689464bd4f-rlk45" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--kube--controllers--689464bd4f--rlk45-eth0" Jan 13 20:35:02.275463 containerd[1614]: 2025-01-13 20:35:02.243 [INFO][4960] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce" Namespace="calico-system" Pod="calico-kube-controllers-689464bd4f-rlk45" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--kube--controllers--689464bd4f--rlk45-eth0" Jan 13 20:35:02.275463 containerd[1614]: 2025-01-13 20:35:02.244 [INFO][4960] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce" Namespace="calico-system" Pod="calico-kube-controllers-689464bd4f-rlk45" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--kube--controllers--689464bd4f--rlk45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--0--6--5d4da4afb6-k8s-calico--kube--controllers--689464bd4f--rlk45-eth0", GenerateName:"calico-kube-controllers-689464bd4f-", Namespace:"calico-system", SelfLink:"", UID:"34cc4b95-fb25-4302-8175-d6695afbf832", ResourceVersion:"679", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"689464bd4f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-0-6-5d4da4afb6", ContainerID:"69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce", Pod:"calico-kube-controllers-689464bd4f-rlk45", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliba9fa90dd07", MAC:"c2:40:12:60:42:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:02.275463 containerd[1614]: 2025-01-13 20:35:02.258 [INFO][4960] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce" Namespace="calico-system" Pod="calico-kube-controllers-689464bd4f-rlk45" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--kube--controllers--689464bd4f--rlk45-eth0" Jan 13 20:35:02.284822 containerd[1614]: time="2025-01-13T20:35:02.284647063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tdc7v,Uid:70116092-cc29-4334-811e-6f8b5c36f3c0,Namespace:kube-system,Attempt:5,} returns sandbox id \"649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c\"" Jan 13 20:35:02.292310 containerd[1614]: time="2025-01-13T20:35:02.291997136Z" level=info msg="CreateContainer within sandbox \"649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:35:02.344715 systemd-networkd[1238]: cali29d0984b9ec: Link UP Jan 13 20:35:02.345453 systemd-networkd[1238]: cali29d0984b9ec: Gained carrier Jan 13 20:35:02.390580 containerd[1614]: time="2025-01-13T20:35:02.376258576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:35:02.390580 containerd[1614]: time="2025-01-13T20:35:02.376407896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:35:02.390580 containerd[1614]: time="2025-01-13T20:35:02.376425256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:02.390580 containerd[1614]: time="2025-01-13T20:35:02.377460655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:02.396978 containerd[1614]: time="2025-01-13T20:35:02.396929357Z" level=info msg="CreateContainer within sandbox \"649945ffc519cb563aa25cdf4b63c0f645da9c2caa02a3a37765ba2475a8887c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"27eaeeeb24f84329c24dd4e758125c722786003de7ecf5e5b44bce040b5a16ef\"" Jan 13 20:35:02.398078 containerd[1614]: 2025-01-13 20:35:01.788 [INFO][4924] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:35:02.398078 containerd[1614]: 2025-01-13 20:35:01.835 [INFO][4924] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--j85nm-eth0 calico-apiserver-6ccf4fbb57- calico-apiserver a12675bf-0fbc-45d7-9f8f-c29f2b87c216 681 0 2025-01-13 20:34:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6ccf4fbb57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4152-2-0-6-5d4da4afb6 calico-apiserver-6ccf4fbb57-j85nm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali29d0984b9ec [] []}} ContainerID="b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf4fbb57-j85nm" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--j85nm-" Jan 13 20:35:02.398078 containerd[1614]: 2025-01-13 20:35:01.839 [INFO][4924] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf4fbb57-j85nm" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--j85nm-eth0" Jan 13 20:35:02.398078 containerd[1614]: 2025-01-13 20:35:02.016 [INFO][4996] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc" HandleID="k8s-pod-network.b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc" Workload="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--j85nm-eth0" Jan 13 20:35:02.398078 containerd[1614]: 2025-01-13 20:35:02.078 [INFO][4996] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc" HandleID="k8s-pod-network.b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc" Workload="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--j85nm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000408130), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4152-2-0-6-5d4da4afb6", "pod":"calico-apiserver-6ccf4fbb57-j85nm", "timestamp":"2025-01-13 20:35:02.016452835 +0000 UTC"}, Hostname:"ci-4152-2-0-6-5d4da4afb6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:35:02.398078 containerd[1614]: 2025-01-13 20:35:02.079 [INFO][4996] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:35:02.398078 containerd[1614]: 2025-01-13 20:35:02.211 [INFO][4996] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:35:02.398078 containerd[1614]: 2025-01-13 20:35:02.212 [INFO][4996] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152-2-0-6-5d4da4afb6' Jan 13 20:35:02.398078 containerd[1614]: 2025-01-13 20:35:02.216 [INFO][4996] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.398078 containerd[1614]: 2025-01-13 20:35:02.233 [INFO][4996] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.398078 containerd[1614]: 2025-01-13 20:35:02.271 [INFO][4996] ipam/ipam.go 489: Trying affinity for 192.168.92.128/26 host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.398078 containerd[1614]: 2025-01-13 20:35:02.276 [INFO][4996] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.128/26 host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.398078 containerd[1614]: 2025-01-13 20:35:02.282 [INFO][4996] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.398078 containerd[1614]: 2025-01-13 20:35:02.283 [INFO][4996] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.398078 containerd[1614]: 2025-01-13 20:35:02.287 [INFO][4996] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc Jan 13 20:35:02.398078 containerd[1614]: 2025-01-13 20:35:02.299 [INFO][4996] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.398078 containerd[1614]: 2025-01-13 20:35:02.315 [INFO][4996] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.92.132/26] block=192.168.92.128/26 handle="k8s-pod-network.b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.398078 containerd[1614]: 2025-01-13 20:35:02.316 [INFO][4996] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.132/26] handle="k8s-pod-network.b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.398078 containerd[1614]: 2025-01-13 20:35:02.317 [INFO][4996] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:35:02.398078 containerd[1614]: 2025-01-13 20:35:02.318 [INFO][4996] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.132/26] IPv6=[] ContainerID="b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc" HandleID="k8s-pod-network.b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc" Workload="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--j85nm-eth0" Jan 13 20:35:02.399105 containerd[1614]: 2025-01-13 20:35:02.337 [INFO][4924] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf4fbb57-j85nm" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--j85nm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--j85nm-eth0", GenerateName:"calico-apiserver-6ccf4fbb57-", Namespace:"calico-apiserver", SelfLink:"", UID:"a12675bf-0fbc-45d7-9f8f-c29f2b87c216", ResourceVersion:"681", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ccf4fbb57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-0-6-5d4da4afb6", ContainerID:"", Pod:"calico-apiserver-6ccf4fbb57-j85nm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29d0984b9ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:02.399105 containerd[1614]: 2025-01-13 20:35:02.342 [INFO][4924] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.132/32] ContainerID="b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf4fbb57-j85nm" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--j85nm-eth0" Jan 13 20:35:02.399105 containerd[1614]: 2025-01-13 20:35:02.342 [INFO][4924] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29d0984b9ec ContainerID="b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf4fbb57-j85nm" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--j85nm-eth0" Jan 13 20:35:02.399105 containerd[1614]: 2025-01-13 20:35:02.345 [INFO][4924] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf4fbb57-j85nm" 
WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--j85nm-eth0" Jan 13 20:35:02.399105 containerd[1614]: 2025-01-13 20:35:02.349 [INFO][4924] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf4fbb57-j85nm" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--j85nm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--j85nm-eth0", GenerateName:"calico-apiserver-6ccf4fbb57-", Namespace:"calico-apiserver", SelfLink:"", UID:"a12675bf-0fbc-45d7-9f8f-c29f2b87c216", ResourceVersion:"681", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ccf4fbb57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-0-6-5d4da4afb6", ContainerID:"b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc", Pod:"calico-apiserver-6ccf4fbb57-j85nm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29d0984b9ec", MAC:"7a:6f:87:03:b4:e7", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:02.399105 containerd[1614]: 2025-01-13 20:35:02.383 [INFO][4924] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf4fbb57-j85nm" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--j85nm-eth0" Jan 13 20:35:02.403938 containerd[1614]: time="2025-01-13T20:35:02.403682870Z" level=info msg="StartContainer for \"27eaeeeb24f84329c24dd4e758125c722786003de7ecf5e5b44bce040b5a16ef\"" Jan 13 20:35:02.427158 containerd[1614]: time="2025-01-13T20:35:02.426560209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5hc2j,Uid:d79efc93-b14e-4d5a-8c70-0155fb5a684a,Namespace:calico-system,Attempt:6,} returns sandbox id \"baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d\"" Jan 13 20:35:02.438525 containerd[1614]: time="2025-01-13T20:35:02.437009159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 13 20:35:02.457566 systemd-networkd[1238]: calia4ff4847904: Link UP Jan 13 20:35:02.464912 systemd-networkd[1238]: calia4ff4847904: Gained carrier Jan 13 20:35:02.515192 containerd[1614]: 2025-01-13 20:35:01.846 [INFO][4926] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:35:02.515192 containerd[1614]: 2025-01-13 20:35:01.894 [INFO][4926] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--gbjkq-eth0 calico-apiserver-6ccf4fbb57- calico-apiserver 5418ad94-7e2e-4821-8b2b-1361c7326bfb 680 0 2025-01-13 20:34:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6ccf4fbb57 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4152-2-0-6-5d4da4afb6 calico-apiserver-6ccf4fbb57-gbjkq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia4ff4847904 [] []}} ContainerID="0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf4fbb57-gbjkq" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--gbjkq-" Jan 13 20:35:02.515192 containerd[1614]: 2025-01-13 20:35:01.895 [INFO][4926] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf4fbb57-gbjkq" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--gbjkq-eth0" Jan 13 20:35:02.515192 containerd[1614]: 2025-01-13 20:35:02.076 [INFO][5007] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc" HandleID="k8s-pod-network.0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc" Workload="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--gbjkq-eth0" Jan 13 20:35:02.515192 containerd[1614]: 2025-01-13 20:35:02.103 [INFO][5007] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc" HandleID="k8s-pod-network.0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc" Workload="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--gbjkq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a2430), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4152-2-0-6-5d4da4afb6", "pod":"calico-apiserver-6ccf4fbb57-gbjkq", "timestamp":"2025-01-13 20:35:02.07578462 +0000 UTC"}, Hostname:"ci-4152-2-0-6-5d4da4afb6", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:35:02.515192 containerd[1614]: 2025-01-13 20:35:02.104 [INFO][5007] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:35:02.515192 containerd[1614]: 2025-01-13 20:35:02.317 [INFO][5007] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 13 20:35:02.515192 containerd[1614]: 2025-01-13 20:35:02.317 [INFO][5007] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152-2-0-6-5d4da4afb6' Jan 13 20:35:02.515192 containerd[1614]: 2025-01-13 20:35:02.321 [INFO][5007] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.515192 containerd[1614]: 2025-01-13 20:35:02.348 [INFO][5007] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.515192 containerd[1614]: 2025-01-13 20:35:02.386 [INFO][5007] ipam/ipam.go 489: Trying affinity for 192.168.92.128/26 host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.515192 containerd[1614]: 2025-01-13 20:35:02.396 [INFO][5007] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.128/26 host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.515192 containerd[1614]: 2025-01-13 20:35:02.405 [INFO][5007] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.515192 containerd[1614]: 2025-01-13 20:35:02.405 [INFO][5007] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.515192 containerd[1614]: 2025-01-13 20:35:02.410 [INFO][5007] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc Jan 13 20:35:02.515192 containerd[1614]: 2025-01-13 20:35:02.421 [INFO][5007] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.515192 containerd[1614]: 2025-01-13 20:35:02.440 [INFO][5007] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.133/26] block=192.168.92.128/26 handle="k8s-pod-network.0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.515192 containerd[1614]: 2025-01-13 20:35:02.441 [INFO][5007] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.133/26] handle="k8s-pod-network.0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.515192 containerd[1614]: 2025-01-13 20:35:02.441 [INFO][5007] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 13 20:35:02.515192 containerd[1614]: 2025-01-13 20:35:02.441 [INFO][5007] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.133/26] IPv6=[] ContainerID="0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc" HandleID="k8s-pod-network.0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc" Workload="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--gbjkq-eth0" Jan 13 20:35:02.518752 containerd[1614]: 2025-01-13 20:35:02.448 [INFO][4926] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf4fbb57-gbjkq" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--gbjkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--gbjkq-eth0", GenerateName:"calico-apiserver-6ccf4fbb57-", Namespace:"calico-apiserver", SelfLink:"", UID:"5418ad94-7e2e-4821-8b2b-1361c7326bfb", ResourceVersion:"680", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ccf4fbb57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-0-6-5d4da4afb6", ContainerID:"", Pod:"calico-apiserver-6ccf4fbb57-gbjkq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.92.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia4ff4847904", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:02.518752 containerd[1614]: 2025-01-13 20:35:02.450 [INFO][4926] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.133/32] ContainerID="0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf4fbb57-gbjkq" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--gbjkq-eth0" Jan 13 20:35:02.518752 containerd[1614]: 2025-01-13 20:35:02.450 [INFO][4926] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4ff4847904 ContainerID="0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf4fbb57-gbjkq" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--gbjkq-eth0" Jan 13 20:35:02.518752 containerd[1614]: 2025-01-13 20:35:02.465 [INFO][4926] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf4fbb57-gbjkq" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--gbjkq-eth0" Jan 13 20:35:02.518752 containerd[1614]: 2025-01-13 20:35:02.474 [INFO][4926] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf4fbb57-gbjkq" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--gbjkq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--gbjkq-eth0", GenerateName:"calico-apiserver-6ccf4fbb57-", Namespace:"calico-apiserver", SelfLink:"", UID:"5418ad94-7e2e-4821-8b2b-1361c7326bfb", ResourceVersion:"680", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6ccf4fbb57", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-0-6-5d4da4afb6", ContainerID:"0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc", Pod:"calico-apiserver-6ccf4fbb57-gbjkq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia4ff4847904", MAC:"46:d0:4f:8c:38:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:02.518752 containerd[1614]: 2025-01-13 20:35:02.507 [INFO][4926] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc" Namespace="calico-apiserver" Pod="calico-apiserver-6ccf4fbb57-gbjkq" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-calico--apiserver--6ccf4fbb57--gbjkq-eth0" Jan 13 20:35:02.582569 systemd-networkd[1238]: cali0fd0e94991e: Link UP Jan 13 20:35:02.583771 systemd-networkd[1238]: cali0fd0e94991e: Gained carrier Jan 13 
20:35:02.627945 containerd[1614]: time="2025-01-13T20:35:02.626555860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:35:02.627945 containerd[1614]: time="2025-01-13T20:35:02.626619980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:35:02.627945 containerd[1614]: time="2025-01-13T20:35:02.626631820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:02.632762 containerd[1614]: time="2025-01-13T20:35:02.631885695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:02.657365 containerd[1614]: 2025-01-13 20:35:01.915 [INFO][4979] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 20:35:02.657365 containerd[1614]: 2025-01-13 20:35:01.961 [INFO][4979] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--d54ws-eth0 coredns-76f75df574- kube-system a019a359-d21d-4317-8d1c-bd6d76806eac 684 0 2025-01-13 20:34:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4152-2-0-6-5d4da4afb6 coredns-76f75df574-d54ws eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0fd0e94991e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6" Namespace="kube-system" Pod="coredns-76f75df574-d54ws" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--d54ws-" Jan 13 20:35:02.657365 containerd[1614]: 2025-01-13 20:35:01.961 [INFO][4979] 
cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6" Namespace="kube-system" Pod="coredns-76f75df574-d54ws" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--d54ws-eth0" Jan 13 20:35:02.657365 containerd[1614]: 2025-01-13 20:35:02.108 [INFO][5014] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6" HandleID="k8s-pod-network.01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6" Workload="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--d54ws-eth0" Jan 13 20:35:02.657365 containerd[1614]: 2025-01-13 20:35:02.132 [INFO][5014] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6" HandleID="k8s-pod-network.01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6" Workload="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--d54ws-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000422120), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4152-2-0-6-5d4da4afb6", "pod":"coredns-76f75df574-d54ws", "timestamp":"2025-01-13 20:35:02.108001949 +0000 UTC"}, Hostname:"ci-4152-2-0-6-5d4da4afb6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 20:35:02.657365 containerd[1614]: 2025-01-13 20:35:02.132 [INFO][5014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 13 20:35:02.657365 containerd[1614]: 2025-01-13 20:35:02.441 [INFO][5014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 13 20:35:02.657365 containerd[1614]: 2025-01-13 20:35:02.442 [INFO][5014] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4152-2-0-6-5d4da4afb6' Jan 13 20:35:02.657365 containerd[1614]: 2025-01-13 20:35:02.447 [INFO][5014] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.657365 containerd[1614]: 2025-01-13 20:35:02.466 [INFO][5014] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.657365 containerd[1614]: 2025-01-13 20:35:02.498 [INFO][5014] ipam/ipam.go 489: Trying affinity for 192.168.92.128/26 host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.657365 containerd[1614]: 2025-01-13 20:35:02.503 [INFO][5014] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.128/26 host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.657365 containerd[1614]: 2025-01-13 20:35:02.512 [INFO][5014] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.657365 containerd[1614]: 2025-01-13 20:35:02.521 [INFO][5014] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.657365 containerd[1614]: 2025-01-13 20:35:02.526 [INFO][5014] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6 Jan 13 20:35:02.657365 containerd[1614]: 2025-01-13 20:35:02.536 [INFO][5014] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.657365 containerd[1614]: 2025-01-13 20:35:02.548 [INFO][5014] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.92.134/26] block=192.168.92.128/26 handle="k8s-pod-network.01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.657365 containerd[1614]: 2025-01-13 20:35:02.548 [INFO][5014] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.134/26] handle="k8s-pod-network.01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6" host="ci-4152-2-0-6-5d4da4afb6" Jan 13 20:35:02.657365 containerd[1614]: 2025-01-13 20:35:02.548 [INFO][5014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 13 20:35:02.657365 containerd[1614]: 2025-01-13 20:35:02.550 [INFO][5014] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.134/26] IPv6=[] ContainerID="01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6" HandleID="k8s-pod-network.01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6" Workload="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--d54ws-eth0" Jan 13 20:35:02.659104 containerd[1614]: 2025-01-13 20:35:02.564 [INFO][4979] cni-plugin/k8s.go 386: Populated endpoint ContainerID="01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6" Namespace="kube-system" Pod="coredns-76f75df574-d54ws" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--d54ws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--d54ws-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a019a359-d21d-4317-8d1c-bd6d76806eac", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-0-6-5d4da4afb6", ContainerID:"", Pod:"coredns-76f75df574-d54ws", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0fd0e94991e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:02.659104 containerd[1614]: 2025-01-13 20:35:02.565 [INFO][4979] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.134/32] ContainerID="01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6" Namespace="kube-system" Pod="coredns-76f75df574-d54ws" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--d54ws-eth0" Jan 13 20:35:02.659104 containerd[1614]: 2025-01-13 20:35:02.565 [INFO][4979] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0fd0e94991e ContainerID="01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6" Namespace="kube-system" Pod="coredns-76f75df574-d54ws" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--d54ws-eth0" Jan 13 20:35:02.659104 containerd[1614]: 2025-01-13 20:35:02.585 [INFO][4979] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6" Namespace="kube-system" Pod="coredns-76f75df574-d54ws" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--d54ws-eth0" Jan 13 20:35:02.659104 containerd[1614]: 2025-01-13 20:35:02.586 [INFO][4979] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6" Namespace="kube-system" Pod="coredns-76f75df574-d54ws" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--d54ws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--d54ws-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a019a359-d21d-4317-8d1c-bd6d76806eac", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 34, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4152-2-0-6-5d4da4afb6", ContainerID:"01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6", Pod:"coredns-76f75df574-d54ws", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0fd0e94991e", MAC:"d6:58:af:46:ad:8e", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 13 20:35:02.659104 containerd[1614]: 2025-01-13 20:35:02.606 [INFO][4979] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6" Namespace="kube-system" Pod="coredns-76f75df574-d54ws" WorkloadEndpoint="ci--4152--2--0--6--5d4da4afb6-k8s-coredns--76f75df574--d54ws-eth0" Jan 13 20:35:02.902228 containerd[1614]: time="2025-01-13T20:35:02.888426254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:35:02.902228 containerd[1614]: time="2025-01-13T20:35:02.888857733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:35:02.902228 containerd[1614]: time="2025-01-13T20:35:02.890791851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:02.902228 containerd[1614]: time="2025-01-13T20:35:02.897372005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:02.963452 containerd[1614]: time="2025-01-13T20:35:02.963395943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-689464bd4f-rlk45,Uid:34cc4b95-fb25-4302-8175-d6695afbf832,Namespace:calico-system,Attempt:7,} returns sandbox id \"69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce\"" Jan 13 20:35:03.012522 containerd[1614]: time="2025-01-13T20:35:03.010592699Z" level=info msg="StartContainer for \"27eaeeeb24f84329c24dd4e758125c722786003de7ecf5e5b44bce040b5a16ef\" returns successfully" Jan 13 20:35:03.060418 containerd[1614]: time="2025-01-13T20:35:03.059692213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-j85nm,Uid:a12675bf-0fbc-45d7-9f8f-c29f2b87c216,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc\"" Jan 13 20:35:03.144421 containerd[1614]: time="2025-01-13T20:35:03.143796254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:35:03.144421 containerd[1614]: time="2025-01-13T20:35:03.143983214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:35:03.144421 containerd[1614]: time="2025-01-13T20:35:03.143999574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:03.144421 containerd[1614]: time="2025-01-13T20:35:03.144107894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:35:03.155748 containerd[1614]: time="2025-01-13T20:35:03.155101924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6ccf4fbb57-gbjkq,Uid:5418ad94-7e2e-4821-8b2b-1361c7326bfb,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc\"" Jan 13 20:35:03.220856 containerd[1614]: time="2025-01-13T20:35:03.220807582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d54ws,Uid:a019a359-d21d-4317-8d1c-bd6d76806eac,Namespace:kube-system,Attempt:5,} returns sandbox id \"01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6\"" Jan 13 20:35:03.230614 containerd[1614]: time="2025-01-13T20:35:03.230577133Z" level=info msg="CreateContainer within sandbox \"01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 20:35:03.251743 containerd[1614]: time="2025-01-13T20:35:03.251592513Z" level=info msg="CreateContainer within sandbox \"01d3616484fa08598045142678e316c8290e9e8c0508a4dc08655c5932463bb6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"111fc7c881c1c58e264bfb35f4119bfb6a4522af86bab9b713a9249d78696a7b\"" Jan 13 20:35:03.252872 containerd[1614]: time="2025-01-13T20:35:03.252816152Z" level=info msg="StartContainer for \"111fc7c881c1c58e264bfb35f4119bfb6a4522af86bab9b713a9249d78696a7b\"" Jan 13 20:35:03.314971 containerd[1614]: time="2025-01-13T20:35:03.314932334Z" level=info msg="StartContainer for \"111fc7c881c1c58e264bfb35f4119bfb6a4522af86bab9b713a9249d78696a7b\" returns successfully" Jan 13 20:35:03.457635 systemd-networkd[1238]: cali3698d8b0290: Gained IPv6LL Jan 13 20:35:03.839575 kubelet[3072]: I0113 20:35:03.839416 3072 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-tdc7v" podStartSLOduration=27.839131965 
podStartE2EDuration="27.839131965s" podCreationTimestamp="2025-01-13 20:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:35:03.838254365 +0000 UTC m=+41.933296730" watchObservedRunningTime="2025-01-13 20:35:03.839131965 +0000 UTC m=+41.934174330" Jan 13 20:35:03.842054 systemd-networkd[1238]: cali0fd0e94991e: Gained IPv6LL Jan 13 20:35:03.844650 kubelet[3072]: I0113 20:35:03.842198 3072 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-d54ws" podStartSLOduration=27.841984282 podStartE2EDuration="27.841984282s" podCreationTimestamp="2025-01-13 20:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:35:03.81173163 +0000 UTC m=+41.906773995" watchObservedRunningTime="2025-01-13 20:35:03.841984282 +0000 UTC m=+41.937026647" Jan 13 20:35:04.097484 systemd-networkd[1238]: cali29d0984b9ec: Gained IPv6LL Jan 13 20:35:04.100723 systemd-networkd[1238]: cali32919f45c30: Gained IPv6LL Jan 13 20:35:04.162379 systemd-networkd[1238]: caliba9fa90dd07: Gained IPv6LL Jan 13 20:35:04.418430 systemd-networkd[1238]: calia4ff4847904: Gained IPv6LL Jan 13 20:35:04.437194 containerd[1614]: time="2025-01-13T20:35:04.436482090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:04.438834 containerd[1614]: time="2025-01-13T20:35:04.438786488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 13 20:35:04.439794 containerd[1614]: time="2025-01-13T20:35:04.439760807Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:04.442861 
containerd[1614]: time="2025-01-13T20:35:04.442827244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:04.444597 containerd[1614]: time="2025-01-13T20:35:04.444513283Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 2.007457044s" Jan 13 20:35:04.444597 containerd[1614]: time="2025-01-13T20:35:04.444561403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 13 20:35:04.445861 containerd[1614]: time="2025-01-13T20:35:04.445397082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 13 20:35:04.447875 containerd[1614]: time="2025-01-13T20:35:04.447841320Z" level=info msg="CreateContainer within sandbox \"baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 13 20:35:04.466376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3015600748.mount: Deactivated successfully. 
Jan 13 20:35:04.472287 containerd[1614]: time="2025-01-13T20:35:04.472154497Z" level=info msg="CreateContainer within sandbox \"baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b184fec2502f9b624dc913cc961493c8d2d4a89a2e6a7960d453054f394c2ae3\"" Jan 13 20:35:04.473016 containerd[1614]: time="2025-01-13T20:35:04.472696297Z" level=info msg="StartContainer for \"b184fec2502f9b624dc913cc961493c8d2d4a89a2e6a7960d453054f394c2ae3\"" Jan 13 20:35:04.548525 containerd[1614]: time="2025-01-13T20:35:04.548477586Z" level=info msg="StartContainer for \"b184fec2502f9b624dc913cc961493c8d2d4a89a2e6a7960d453054f394c2ae3\" returns successfully" Jan 13 20:35:05.569592 systemd-resolved[1485]: Under memory pressure, flushing caches. Jan 13 20:35:05.569695 systemd-resolved[1485]: Flushed all caches. Jan 13 20:35:05.571769 systemd-journald[1158]: Under memory pressure, flushing caches. Jan 13 20:35:06.908514 containerd[1614]: time="2025-01-13T20:35:06.908428502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:06.911369 containerd[1614]: time="2025-01-13T20:35:06.911154100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 13 20:35:06.913390 containerd[1614]: time="2025-01-13T20:35:06.913344898Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:06.916703 containerd[1614]: time="2025-01-13T20:35:06.916447735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:06.917512 containerd[1614]: 
time="2025-01-13T20:35:06.917470574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.472035332s" Jan 13 20:35:06.917512 containerd[1614]: time="2025-01-13T20:35:06.917503854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 13 20:35:06.920736 containerd[1614]: time="2025-01-13T20:35:06.918587253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 20:35:06.945022 containerd[1614]: time="2025-01-13T20:35:06.944958229Z" level=info msg="CreateContainer within sandbox \"69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 13 20:35:06.966464 containerd[1614]: time="2025-01-13T20:35:06.966413730Z" level=info msg="CreateContainer within sandbox \"69fd8e5af406ac9621ce2bd93f9621c3657a0ac0157e624c47bedca347db55ce\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a2a7b0f23f29b437c94167180eb77add033d87abac3335db0b11dd3590f54691\"" Jan 13 20:35:06.969282 containerd[1614]: time="2025-01-13T20:35:06.968659888Z" level=info msg="StartContainer for \"a2a7b0f23f29b437c94167180eb77add033d87abac3335db0b11dd3590f54691\"" Jan 13 20:35:07.035950 containerd[1614]: time="2025-01-13T20:35:07.035895787Z" level=info msg="StartContainer for \"a2a7b0f23f29b437c94167180eb77add033d87abac3335db0b11dd3590f54691\" returns successfully" Jan 13 20:35:07.841759 kubelet[3072]: I0113 20:35:07.841642 3072 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="calico-system/calico-kube-controllers-689464bd4f-rlk45" podStartSLOduration=20.99221009 podStartE2EDuration="24.841569379s" podCreationTimestamp="2025-01-13 20:34:43 +0000 UTC" firstStartedPulling="2025-01-13 20:35:03.068450285 +0000 UTC m=+41.163492650" lastFinishedPulling="2025-01-13 20:35:06.917809574 +0000 UTC m=+45.012851939" observedRunningTime="2025-01-13 20:35:07.84023058 +0000 UTC m=+45.935272945" watchObservedRunningTime="2025-01-13 20:35:07.841569379 +0000 UTC m=+45.936611784" Jan 13 20:35:09.769285 containerd[1614]: time="2025-01-13T20:35:09.769141135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:09.770950 containerd[1614]: time="2025-01-13T20:35:09.770698694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 13 20:35:09.772441 containerd[1614]: time="2025-01-13T20:35:09.772394972Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:09.776630 containerd[1614]: time="2025-01-13T20:35:09.776555248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:09.778065 containerd[1614]: time="2025-01-13T20:35:09.777909647Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 2.859283354s" Jan 13 20:35:09.778065 containerd[1614]: time="2025-01-13T20:35:09.777950127Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 13 20:35:09.779757 containerd[1614]: time="2025-01-13T20:35:09.779515326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 13 20:35:09.781722 containerd[1614]: time="2025-01-13T20:35:09.781498124Z" level=info msg="CreateContainer within sandbox \"b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 20:35:09.799432 containerd[1614]: time="2025-01-13T20:35:09.799365668Z" level=info msg="CreateContainer within sandbox \"b05471207534757c23edae7a4d02d50ae872123a558f61dfc9264cf10bb1ecbc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d119c90196f5216827620aa187b259eef80392012482556c3a38cdadc5d4bffc\"" Jan 13 20:35:09.800333 containerd[1614]: time="2025-01-13T20:35:09.800152027Z" level=info msg="StartContainer for \"d119c90196f5216827620aa187b259eef80392012482556c3a38cdadc5d4bffc\"" Jan 13 20:35:09.839482 systemd[1]: run-containerd-runc-k8s.io-d119c90196f5216827620aa187b259eef80392012482556c3a38cdadc5d4bffc-runc.X5L8Gq.mount: Deactivated successfully. 
Jan 13 20:35:09.882945 containerd[1614]: time="2025-01-13T20:35:09.882885554Z" level=info msg="StartContainer for \"d119c90196f5216827620aa187b259eef80392012482556c3a38cdadc5d4bffc\" returns successfully" Jan 13 20:35:10.161033 containerd[1614]: time="2025-01-13T20:35:10.160910668Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:10.162165 containerd[1614]: time="2025-01-13T20:35:10.161672147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 13 20:35:10.164033 containerd[1614]: time="2025-01-13T20:35:10.163998705Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 384.443819ms" Jan 13 20:35:10.164187 containerd[1614]: time="2025-01-13T20:35:10.164039025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 13 20:35:10.166893 containerd[1614]: time="2025-01-13T20:35:10.166652742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 13 20:35:10.168477 containerd[1614]: time="2025-01-13T20:35:10.168428141Z" level=info msg="CreateContainer within sandbox \"0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 13 20:35:10.197788 containerd[1614]: time="2025-01-13T20:35:10.197586715Z" level=info msg="CreateContainer within sandbox \"0a1303250efb5e1360804a76230b0d8695a00b410531d3830773add1219c5bcc\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7fb73c12068ba6395c4358a1cd77c7c4f5ae9bcbb190bf600dc432b389a8767b\"" Jan 13 20:35:10.199303 containerd[1614]: time="2025-01-13T20:35:10.198332115Z" level=info msg="StartContainer for \"7fb73c12068ba6395c4358a1cd77c7c4f5ae9bcbb190bf600dc432b389a8767b\"" Jan 13 20:35:10.274396 containerd[1614]: time="2025-01-13T20:35:10.274343967Z" level=info msg="StartContainer for \"7fb73c12068ba6395c4358a1cd77c7c4f5ae9bcbb190bf600dc432b389a8767b\" returns successfully" Jan 13 20:35:10.867276 kubelet[3072]: I0113 20:35:10.867217 3072 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-j85nm" podStartSLOduration=20.198120683 podStartE2EDuration="26.867154644s" podCreationTimestamp="2025-01-13 20:34:44 +0000 UTC" firstStartedPulling="2025-01-13 20:35:03.109421886 +0000 UTC m=+41.204464251" lastFinishedPulling="2025-01-13 20:35:09.778455847 +0000 UTC m=+47.873498212" observedRunningTime="2025-01-13 20:35:10.865581125 +0000 UTC m=+48.960623530" watchObservedRunningTime="2025-01-13 20:35:10.867154644 +0000 UTC m=+48.962197009" Jan 13 20:35:11.230743 kubelet[3072]: I0113 20:35:11.229765 3072 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:35:11.259240 kubelet[3072]: I0113 20:35:11.258495 3072 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6ccf4fbb57-gbjkq" podStartSLOduration=20.253804155 podStartE2EDuration="27.25845034s" podCreationTimestamp="2025-01-13 20:34:44 +0000 UTC" firstStartedPulling="2025-01-13 20:35:03.159883519 +0000 UTC m=+41.254925844" lastFinishedPulling="2025-01-13 20:35:10.164529664 +0000 UTC m=+48.259572029" observedRunningTime="2025-01-13 20:35:10.890961503 +0000 UTC m=+48.986003868" watchObservedRunningTime="2025-01-13 20:35:11.25845034 +0000 UTC m=+49.353492705" Jan 13 20:35:11.867349 kubelet[3072]: I0113 20:35:11.865639 3072 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:35:12.112264 containerd[1614]: time="2025-01-13T20:35:12.111867673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:12.113647 containerd[1614]: time="2025-01-13T20:35:12.113597831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 13 20:35:12.114917 containerd[1614]: time="2025-01-13T20:35:12.114729750Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:12.121548 containerd[1614]: time="2025-01-13T20:35:12.119835066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:35:12.124752 containerd[1614]: time="2025-01-13T20:35:12.124113662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.95741884s" Jan 13 20:35:12.124752 containerd[1614]: time="2025-01-13T20:35:12.125300581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 13 20:35:12.129160 containerd[1614]: time="2025-01-13T20:35:12.129126378Z" level=info msg="CreateContainer within sandbox 
\"baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 13 20:35:12.156003 containerd[1614]: time="2025-01-13T20:35:12.155595675Z" level=info msg="CreateContainer within sandbox \"baff59f64594b9dfa9b8cc12f636384adb99b51f62555e65735dee8db6916b9d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"84376bb3f97cfe6cd8726a441738d43ae35c8e69acab4e455521ad0b16ee5d7b\"" Jan 13 20:35:12.157369 containerd[1614]: time="2025-01-13T20:35:12.156374554Z" level=info msg="StartContainer for \"84376bb3f97cfe6cd8726a441738d43ae35c8e69acab4e455521ad0b16ee5d7b\"" Jan 13 20:35:12.288344 containerd[1614]: time="2025-01-13T20:35:12.288150359Z" level=info msg="StartContainer for \"84376bb3f97cfe6cd8726a441738d43ae35c8e69acab4e455521ad0b16ee5d7b\" returns successfully" Jan 13 20:35:12.503421 kernel: bpftool[6000]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 13 20:35:12.707015 systemd-networkd[1238]: vxlan.calico: Link UP Jan 13 20:35:12.707026 systemd-networkd[1238]: vxlan.calico: Gained carrier Jan 13 20:35:12.899733 kubelet[3072]: I0113 20:35:12.899122 3072 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-5hc2j" podStartSLOduration=20.209350647 podStartE2EDuration="29.899074868s" podCreationTimestamp="2025-01-13 20:34:43 +0000 UTC" firstStartedPulling="2025-01-13 20:35:02.43595256 +0000 UTC m=+40.530994925" lastFinishedPulling="2025-01-13 20:35:12.125676781 +0000 UTC m=+50.220719146" observedRunningTime="2025-01-13 20:35:12.899071668 +0000 UTC m=+50.994114033" watchObservedRunningTime="2025-01-13 20:35:12.899074868 +0000 UTC m=+50.994117273" Jan 13 20:35:13.206850 kubelet[3072]: I0113 20:35:13.206569 3072 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 13 
20:35:13.212589 kubelet[3072]: I0113 20:35:13.212520 3072 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 13 20:35:14.273587 systemd-networkd[1238]: vxlan.calico: Gained IPv6LL Jan 13 20:35:22.002749 containerd[1614]: time="2025-01-13T20:35:22.002683236Z" level=info msg="StopPodSandbox for \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\"" Jan 13 20:35:22.003457 containerd[1614]: time="2025-01-13T20:35:22.003335115Z" level=info msg="TearDown network for sandbox \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\" successfully" Jan 13 20:35:22.003457 containerd[1614]: time="2025-01-13T20:35:22.003359235Z" level=info msg="StopPodSandbox for \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\" returns successfully" Jan 13 20:35:22.006488 containerd[1614]: time="2025-01-13T20:35:22.006447913Z" level=info msg="RemovePodSandbox for \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\"" Jan 13 20:35:22.006606 containerd[1614]: time="2025-01-13T20:35:22.006501673Z" level=info msg="Forcibly stopping sandbox \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\"" Jan 13 20:35:22.006606 containerd[1614]: time="2025-01-13T20:35:22.006587193Z" level=info msg="TearDown network for sandbox \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\" successfully" Jan 13 20:35:22.012259 containerd[1614]: time="2025-01-13T20:35:22.012167948Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.012514 containerd[1614]: time="2025-01-13T20:35:22.012288788Z" level=info msg="RemovePodSandbox \"86da9e91098dd5d78462224a360940ba1264c05ee200185669480bfd89b6f7b8\" returns successfully" Jan 13 20:35:22.012970 containerd[1614]: time="2025-01-13T20:35:22.012932987Z" level=info msg="StopPodSandbox for \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\"" Jan 13 20:35:22.013055 containerd[1614]: time="2025-01-13T20:35:22.013040187Z" level=info msg="TearDown network for sandbox \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\" successfully" Jan 13 20:35:22.013093 containerd[1614]: time="2025-01-13T20:35:22.013053947Z" level=info msg="StopPodSandbox for \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\" returns successfully" Jan 13 20:35:22.014821 containerd[1614]: time="2025-01-13T20:35:22.013410587Z" level=info msg="RemovePodSandbox for \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\"" Jan 13 20:35:22.014821 containerd[1614]: time="2025-01-13T20:35:22.013439827Z" level=info msg="Forcibly stopping sandbox \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\"" Jan 13 20:35:22.014821 containerd[1614]: time="2025-01-13T20:35:22.013515147Z" level=info msg="TearDown network for sandbox \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\" successfully" Jan 13 20:35:22.016795 containerd[1614]: time="2025-01-13T20:35:22.016752384Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.016974 containerd[1614]: time="2025-01-13T20:35:22.016956464Z" level=info msg="RemovePodSandbox \"76c706b273aaf46643ee415875c2898f441dd62acc24015680e1c964ff91ae1c\" returns successfully" Jan 13 20:35:22.017846 containerd[1614]: time="2025-01-13T20:35:22.017814463Z" level=info msg="StopPodSandbox for \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\"" Jan 13 20:35:22.018050 containerd[1614]: time="2025-01-13T20:35:22.018030383Z" level=info msg="TearDown network for sandbox \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\" successfully" Jan 13 20:35:22.018050 containerd[1614]: time="2025-01-13T20:35:22.018050623Z" level=info msg="StopPodSandbox for \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\" returns successfully" Jan 13 20:35:22.018436 containerd[1614]: time="2025-01-13T20:35:22.018413543Z" level=info msg="RemovePodSandbox for \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\"" Jan 13 20:35:22.018499 containerd[1614]: time="2025-01-13T20:35:22.018446823Z" level=info msg="Forcibly stopping sandbox \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\"" Jan 13 20:35:22.018527 containerd[1614]: time="2025-01-13T20:35:22.018514863Z" level=info msg="TearDown network for sandbox \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\" successfully" Jan 13 20:35:22.022440 containerd[1614]: time="2025-01-13T20:35:22.022369700Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.022499 containerd[1614]: time="2025-01-13T20:35:22.022471980Z" level=info msg="RemovePodSandbox \"654ed9bb9e2ddd50dacd8c9685d1d0083e6044a2be2df47be2224ab6087a82e7\" returns successfully" Jan 13 20:35:22.023592 containerd[1614]: time="2025-01-13T20:35:22.023341299Z" level=info msg="StopPodSandbox for \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\"" Jan 13 20:35:22.023592 containerd[1614]: time="2025-01-13T20:35:22.023473059Z" level=info msg="TearDown network for sandbox \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\" successfully" Jan 13 20:35:22.023592 containerd[1614]: time="2025-01-13T20:35:22.023484939Z" level=info msg="StopPodSandbox for \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\" returns successfully" Jan 13 20:35:22.024051 containerd[1614]: time="2025-01-13T20:35:22.023983458Z" level=info msg="RemovePodSandbox for \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\"" Jan 13 20:35:22.024051 containerd[1614]: time="2025-01-13T20:35:22.024013698Z" level=info msg="Forcibly stopping sandbox \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\"" Jan 13 20:35:22.024111 containerd[1614]: time="2025-01-13T20:35:22.024083098Z" level=info msg="TearDown network for sandbox \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\" successfully" Jan 13 20:35:22.029031 containerd[1614]: time="2025-01-13T20:35:22.028986374Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.029357 containerd[1614]: time="2025-01-13T20:35:22.029251134Z" level=info msg="RemovePodSandbox \"3a7656498c048c39166aa7b3d7c2c9cb3eb9495fc05bca8af7eeee4c6ce1154d\" returns successfully" Jan 13 20:35:22.029893 containerd[1614]: time="2025-01-13T20:35:22.029856134Z" level=info msg="StopPodSandbox for \"c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9\"" Jan 13 20:35:22.030062 containerd[1614]: time="2025-01-13T20:35:22.030037413Z" level=info msg="TearDown network for sandbox \"c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9\" successfully" Jan 13 20:35:22.030102 containerd[1614]: time="2025-01-13T20:35:22.030065573Z" level=info msg="StopPodSandbox for \"c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9\" returns successfully" Jan 13 20:35:22.031171 containerd[1614]: time="2025-01-13T20:35:22.031098492Z" level=info msg="RemovePodSandbox for \"c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9\"" Jan 13 20:35:22.031271 containerd[1614]: time="2025-01-13T20:35:22.031184212Z" level=info msg="Forcibly stopping sandbox \"c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9\"" Jan 13 20:35:22.031368 containerd[1614]: time="2025-01-13T20:35:22.031342932Z" level=info msg="TearDown network for sandbox \"c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9\" successfully" Jan 13 20:35:22.036114 containerd[1614]: time="2025-01-13T20:35:22.036068168Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.036272 containerd[1614]: time="2025-01-13T20:35:22.036142848Z" level=info msg="RemovePodSandbox \"c4a8891fa34c6b85e41d36ddfb6ee62b71cf8b6edf0667c50a853a1efb2663a9\" returns successfully" Jan 13 20:35:22.036642 containerd[1614]: time="2025-01-13T20:35:22.036603808Z" level=info msg="StopPodSandbox for \"bff1260b6224c845be0e9cbcdc67a76faf4d63ae60d2b2ca39f94abca4270367\"" Jan 13 20:35:22.036729 containerd[1614]: time="2025-01-13T20:35:22.036704528Z" level=info msg="TearDown network for sandbox \"bff1260b6224c845be0e9cbcdc67a76faf4d63ae60d2b2ca39f94abca4270367\" successfully" Jan 13 20:35:22.036729 containerd[1614]: time="2025-01-13T20:35:22.036718808Z" level=info msg="StopPodSandbox for \"bff1260b6224c845be0e9cbcdc67a76faf4d63ae60d2b2ca39f94abca4270367\" returns successfully" Jan 13 20:35:22.037111 containerd[1614]: time="2025-01-13T20:35:22.037080168Z" level=info msg="RemovePodSandbox for \"bff1260b6224c845be0e9cbcdc67a76faf4d63ae60d2b2ca39f94abca4270367\"" Jan 13 20:35:22.037111 containerd[1614]: time="2025-01-13T20:35:22.037110928Z" level=info msg="Forcibly stopping sandbox \"bff1260b6224c845be0e9cbcdc67a76faf4d63ae60d2b2ca39f94abca4270367\"" Jan 13 20:35:22.037370 containerd[1614]: time="2025-01-13T20:35:22.037170608Z" level=info msg="TearDown network for sandbox \"bff1260b6224c845be0e9cbcdc67a76faf4d63ae60d2b2ca39f94abca4270367\" successfully" Jan 13 20:35:22.041558 containerd[1614]: time="2025-01-13T20:35:22.041468564Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bff1260b6224c845be0e9cbcdc67a76faf4d63ae60d2b2ca39f94abca4270367\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.041674 containerd[1614]: time="2025-01-13T20:35:22.041610044Z" level=info msg="RemovePodSandbox \"bff1260b6224c845be0e9cbcdc67a76faf4d63ae60d2b2ca39f94abca4270367\" returns successfully" Jan 13 20:35:22.042115 containerd[1614]: time="2025-01-13T20:35:22.042090763Z" level=info msg="StopPodSandbox for \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\"" Jan 13 20:35:22.042249 containerd[1614]: time="2025-01-13T20:35:22.042230723Z" level=info msg="TearDown network for sandbox \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\" successfully" Jan 13 20:35:22.042249 containerd[1614]: time="2025-01-13T20:35:22.042248083Z" level=info msg="StopPodSandbox for \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\" returns successfully" Jan 13 20:35:22.042905 containerd[1614]: time="2025-01-13T20:35:22.042593523Z" level=info msg="RemovePodSandbox for \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\"" Jan 13 20:35:22.042905 containerd[1614]: time="2025-01-13T20:35:22.042622443Z" level=info msg="Forcibly stopping sandbox \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\"" Jan 13 20:35:22.042905 containerd[1614]: time="2025-01-13T20:35:22.042693003Z" level=info msg="TearDown network for sandbox \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\" successfully" Jan 13 20:35:22.046506 containerd[1614]: time="2025-01-13T20:35:22.046435080Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.046858 containerd[1614]: time="2025-01-13T20:35:22.046696440Z" level=info msg="RemovePodSandbox \"de8ba0bf347da31b505549a5b2668840075cb2a34573ed545b7aacee16d1ad91\" returns successfully" Jan 13 20:35:22.047577 containerd[1614]: time="2025-01-13T20:35:22.047495799Z" level=info msg="StopPodSandbox for \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\"" Jan 13 20:35:22.047716 containerd[1614]: time="2025-01-13T20:35:22.047678839Z" level=info msg="TearDown network for sandbox \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\" successfully" Jan 13 20:35:22.047807 containerd[1614]: time="2025-01-13T20:35:22.047711999Z" level=info msg="StopPodSandbox for \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\" returns successfully" Jan 13 20:35:22.048743 containerd[1614]: time="2025-01-13T20:35:22.048664958Z" level=info msg="RemovePodSandbox for \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\"" Jan 13 20:35:22.048861 containerd[1614]: time="2025-01-13T20:35:22.048744198Z" level=info msg="Forcibly stopping sandbox \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\"" Jan 13 20:35:22.049378 containerd[1614]: time="2025-01-13T20:35:22.048906558Z" level=info msg="TearDown network for sandbox \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\" successfully" Jan 13 20:35:22.053875 containerd[1614]: time="2025-01-13T20:35:22.053826154Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.054237 containerd[1614]: time="2025-01-13T20:35:22.053933114Z" level=info msg="RemovePodSandbox \"a4a02eddd0d219765438ef99443540caa5f54f4fbc6744b9bce10ef05361eeac\" returns successfully" Jan 13 20:35:22.054827 containerd[1614]: time="2025-01-13T20:35:22.054766713Z" level=info msg="StopPodSandbox for \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\"" Jan 13 20:35:22.054917 containerd[1614]: time="2025-01-13T20:35:22.054878393Z" level=info msg="TearDown network for sandbox \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\" successfully" Jan 13 20:35:22.054917 containerd[1614]: time="2025-01-13T20:35:22.054889753Z" level=info msg="StopPodSandbox for \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\" returns successfully" Jan 13 20:35:22.055597 containerd[1614]: time="2025-01-13T20:35:22.055502753Z" level=info msg="RemovePodSandbox for \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\"" Jan 13 20:35:22.055597 containerd[1614]: time="2025-01-13T20:35:22.055538593Z" level=info msg="Forcibly stopping sandbox \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\"" Jan 13 20:35:22.056074 containerd[1614]: time="2025-01-13T20:35:22.056046472Z" level=info msg="TearDown network for sandbox \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\" successfully" Jan 13 20:35:22.063326 containerd[1614]: time="2025-01-13T20:35:22.063265666Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.063447 containerd[1614]: time="2025-01-13T20:35:22.063376906Z" level=info msg="RemovePodSandbox \"e4bbf8f3df604bf2aac2edadf944f6e0dd511b1016d2af4309c35c7fdea6a2fa\" returns successfully" Jan 13 20:35:22.064096 containerd[1614]: time="2025-01-13T20:35:22.063864786Z" level=info msg="StopPodSandbox for \"8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c\"" Jan 13 20:35:22.064096 containerd[1614]: time="2025-01-13T20:35:22.063985066Z" level=info msg="TearDown network for sandbox \"8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c\" successfully" Jan 13 20:35:22.064096 containerd[1614]: time="2025-01-13T20:35:22.063996866Z" level=info msg="StopPodSandbox for \"8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c\" returns successfully" Jan 13 20:35:22.064587 containerd[1614]: time="2025-01-13T20:35:22.064356825Z" level=info msg="RemovePodSandbox for \"8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c\"" Jan 13 20:35:22.064587 containerd[1614]: time="2025-01-13T20:35:22.064399665Z" level=info msg="Forcibly stopping sandbox \"8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c\"" Jan 13 20:35:22.064587 containerd[1614]: time="2025-01-13T20:35:22.064488225Z" level=info msg="TearDown network for sandbox \"8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c\" successfully" Jan 13 20:35:22.068982 containerd[1614]: time="2025-01-13T20:35:22.068938342Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.069096 containerd[1614]: time="2025-01-13T20:35:22.069050781Z" level=info msg="RemovePodSandbox \"8ae2eb664557b5ffaca434a541b7e89629f94fc6fd6cb1ee493c35f578f1e87c\" returns successfully" Jan 13 20:35:22.069542 containerd[1614]: time="2025-01-13T20:35:22.069488061Z" level=info msg="StopPodSandbox for \"99b0f8991ef50b1af914aa14f14906f340c607c65420667ad816954caba0bb35\"" Jan 13 20:35:22.069624 containerd[1614]: time="2025-01-13T20:35:22.069605781Z" level=info msg="TearDown network for sandbox \"99b0f8991ef50b1af914aa14f14906f340c607c65420667ad816954caba0bb35\" successfully" Jan 13 20:35:22.069624 containerd[1614]: time="2025-01-13T20:35:22.069618301Z" level=info msg="StopPodSandbox for \"99b0f8991ef50b1af914aa14f14906f340c607c65420667ad816954caba0bb35\" returns successfully" Jan 13 20:35:22.069980 containerd[1614]: time="2025-01-13T20:35:22.069919581Z" level=info msg="RemovePodSandbox for \"99b0f8991ef50b1af914aa14f14906f340c607c65420667ad816954caba0bb35\"" Jan 13 20:35:22.071245 containerd[1614]: time="2025-01-13T20:35:22.070125741Z" level=info msg="Forcibly stopping sandbox \"99b0f8991ef50b1af914aa14f14906f340c607c65420667ad816954caba0bb35\"" Jan 13 20:35:22.071245 containerd[1614]: time="2025-01-13T20:35:22.070293820Z" level=info msg="TearDown network for sandbox \"99b0f8991ef50b1af914aa14f14906f340c607c65420667ad816954caba0bb35\" successfully" Jan 13 20:35:22.074469 containerd[1614]: time="2025-01-13T20:35:22.074430297Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"99b0f8991ef50b1af914aa14f14906f340c607c65420667ad816954caba0bb35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.074629 containerd[1614]: time="2025-01-13T20:35:22.074611697Z" level=info msg="RemovePodSandbox \"99b0f8991ef50b1af914aa14f14906f340c607c65420667ad816954caba0bb35\" returns successfully" Jan 13 20:35:22.075276 containerd[1614]: time="2025-01-13T20:35:22.075184056Z" level=info msg="StopPodSandbox for \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\"" Jan 13 20:35:22.075509 containerd[1614]: time="2025-01-13T20:35:22.075478736Z" level=info msg="TearDown network for sandbox \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\" successfully" Jan 13 20:35:22.075546 containerd[1614]: time="2025-01-13T20:35:22.075522896Z" level=info msg="StopPodSandbox for \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\" returns successfully" Jan 13 20:35:22.075874 containerd[1614]: time="2025-01-13T20:35:22.075852016Z" level=info msg="RemovePodSandbox for \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\"" Jan 13 20:35:22.076822 containerd[1614]: time="2025-01-13T20:35:22.075991096Z" level=info msg="Forcibly stopping sandbox \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\"" Jan 13 20:35:22.076822 containerd[1614]: time="2025-01-13T20:35:22.076070976Z" level=info msg="TearDown network for sandbox \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\" successfully" Jan 13 20:35:22.079618 containerd[1614]: time="2025-01-13T20:35:22.079581773Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.079752 containerd[1614]: time="2025-01-13T20:35:22.079737573Z" level=info msg="RemovePodSandbox \"3e4a2acc1b8d0e412479bb4abc631a1229b6da9aec58d4efcdda7d3c4ddbec18\" returns successfully" Jan 13 20:35:22.080172 containerd[1614]: time="2025-01-13T20:35:22.080140252Z" level=info msg="StopPodSandbox for \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\"" Jan 13 20:35:22.080402 containerd[1614]: time="2025-01-13T20:35:22.080376892Z" level=info msg="TearDown network for sandbox \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\" successfully" Jan 13 20:35:22.080442 containerd[1614]: time="2025-01-13T20:35:22.080401652Z" level=info msg="StopPodSandbox for \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\" returns successfully" Jan 13 20:35:22.081149 containerd[1614]: time="2025-01-13T20:35:22.080704132Z" level=info msg="RemovePodSandbox for \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\"" Jan 13 20:35:22.081149 containerd[1614]: time="2025-01-13T20:35:22.080729732Z" level=info msg="Forcibly stopping sandbox \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\"" Jan 13 20:35:22.081149 containerd[1614]: time="2025-01-13T20:35:22.080792012Z" level=info msg="TearDown network for sandbox \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\" successfully" Jan 13 20:35:22.084267 containerd[1614]: time="2025-01-13T20:35:22.084196409Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.084504 containerd[1614]: time="2025-01-13T20:35:22.084465929Z" level=info msg="RemovePodSandbox \"7cd48ce8fbb3086026fc309b751d983f8cea37d3ecb842459285d4aa2239d19d\" returns successfully" Jan 13 20:35:22.084948 containerd[1614]: time="2025-01-13T20:35:22.084914288Z" level=info msg="StopPodSandbox for \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\"" Jan 13 20:35:22.085046 containerd[1614]: time="2025-01-13T20:35:22.085030488Z" level=info msg="TearDown network for sandbox \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\" successfully" Jan 13 20:35:22.085082 containerd[1614]: time="2025-01-13T20:35:22.085044368Z" level=info msg="StopPodSandbox for \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\" returns successfully" Jan 13 20:35:22.086975 containerd[1614]: time="2025-01-13T20:35:22.085419048Z" level=info msg="RemovePodSandbox for \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\"" Jan 13 20:35:22.086975 containerd[1614]: time="2025-01-13T20:35:22.085478528Z" level=info msg="Forcibly stopping sandbox \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\"" Jan 13 20:35:22.086975 containerd[1614]: time="2025-01-13T20:35:22.085600368Z" level=info msg="TearDown network for sandbox \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\" successfully" Jan 13 20:35:22.088768 containerd[1614]: time="2025-01-13T20:35:22.088733845Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.088907 containerd[1614]: time="2025-01-13T20:35:22.088890765Z" level=info msg="RemovePodSandbox \"0a3b55ce23ec0f83e59de47da0dfd7984099ff9c97f5d4a37139125cc6900db9\" returns successfully" Jan 13 20:35:22.089516 containerd[1614]: time="2025-01-13T20:35:22.089472725Z" level=info msg="StopPodSandbox for \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\"" Jan 13 20:35:22.089897 containerd[1614]: time="2025-01-13T20:35:22.089871524Z" level=info msg="TearDown network for sandbox \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\" successfully" Jan 13 20:35:22.089943 containerd[1614]: time="2025-01-13T20:35:22.089901204Z" level=info msg="StopPodSandbox for \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\" returns successfully" Jan 13 20:35:22.090551 containerd[1614]: time="2025-01-13T20:35:22.090405004Z" level=info msg="RemovePodSandbox for \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\"" Jan 13 20:35:22.090551 containerd[1614]: time="2025-01-13T20:35:22.090520764Z" level=info msg="Forcibly stopping sandbox \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\"" Jan 13 20:35:22.090633 containerd[1614]: time="2025-01-13T20:35:22.090620084Z" level=info msg="TearDown network for sandbox \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\" successfully" Jan 13 20:35:22.094290 containerd[1614]: time="2025-01-13T20:35:22.094190241Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.094358 containerd[1614]: time="2025-01-13T20:35:22.094339041Z" level=info msg="RemovePodSandbox \"8d16f9b3737414af81908e0c4eefb026a2a4f40166eb8ced15ef240c1218e353\" returns successfully" Jan 13 20:35:22.094857 containerd[1614]: time="2025-01-13T20:35:22.094826240Z" level=info msg="StopPodSandbox for \"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\"" Jan 13 20:35:22.095158 containerd[1614]: time="2025-01-13T20:35:22.095060800Z" level=info msg="TearDown network for sandbox \"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\" successfully" Jan 13 20:35:22.095158 containerd[1614]: time="2025-01-13T20:35:22.095081200Z" level=info msg="StopPodSandbox for \"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\" returns successfully" Jan 13 20:35:22.095548 containerd[1614]: time="2025-01-13T20:35:22.095475520Z" level=info msg="RemovePodSandbox for \"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\"" Jan 13 20:35:22.095631 containerd[1614]: time="2025-01-13T20:35:22.095507160Z" level=info msg="Forcibly stopping sandbox \"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\"" Jan 13 20:35:22.095665 containerd[1614]: time="2025-01-13T20:35:22.095636680Z" level=info msg="TearDown network for sandbox \"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\" successfully" Jan 13 20:35:22.099024 containerd[1614]: time="2025-01-13T20:35:22.098976397Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.099446 containerd[1614]: time="2025-01-13T20:35:22.099040037Z" level=info msg="RemovePodSandbox \"90c05d437fa8a004cae9bc390499a4a381d5717fc8cf411ba51201bccff9a6b8\" returns successfully" Jan 13 20:35:22.099579 containerd[1614]: time="2025-01-13T20:35:22.099544357Z" level=info msg="StopPodSandbox for \"181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2\"" Jan 13 20:35:22.099714 containerd[1614]: time="2025-01-13T20:35:22.099653876Z" level=info msg="TearDown network for sandbox \"181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2\" successfully" Jan 13 20:35:22.099909 containerd[1614]: time="2025-01-13T20:35:22.099748556Z" level=info msg="StopPodSandbox for \"181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2\" returns successfully" Jan 13 20:35:22.101956 containerd[1614]: time="2025-01-13T20:35:22.100334876Z" level=info msg="RemovePodSandbox for \"181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2\"" Jan 13 20:35:22.101956 containerd[1614]: time="2025-01-13T20:35:22.100388196Z" level=info msg="Forcibly stopping sandbox \"181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2\"" Jan 13 20:35:22.101956 containerd[1614]: time="2025-01-13T20:35:22.100502156Z" level=info msg="TearDown network for sandbox \"181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2\" successfully" Jan 13 20:35:22.105327 containerd[1614]: time="2025-01-13T20:35:22.105159192Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.105692 containerd[1614]: time="2025-01-13T20:35:22.105650552Z" level=info msg="RemovePodSandbox \"181ecc6cb08f0a57d025f0299c29c99513bbf469a6718c66df86fbf5fb3bdcd2\" returns successfully" Jan 13 20:35:22.106469 containerd[1614]: time="2025-01-13T20:35:22.106424351Z" level=info msg="StopPodSandbox for \"132a114f613e133482aae87ccd3e8afbc9646152092060b91730b1507c536328\"" Jan 13 20:35:22.106608 containerd[1614]: time="2025-01-13T20:35:22.106582831Z" level=info msg="TearDown network for sandbox \"132a114f613e133482aae87ccd3e8afbc9646152092060b91730b1507c536328\" successfully" Jan 13 20:35:22.106661 containerd[1614]: time="2025-01-13T20:35:22.106609591Z" level=info msg="StopPodSandbox for \"132a114f613e133482aae87ccd3e8afbc9646152092060b91730b1507c536328\" returns successfully" Jan 13 20:35:22.107308 containerd[1614]: time="2025-01-13T20:35:22.107273350Z" level=info msg="RemovePodSandbox for \"132a114f613e133482aae87ccd3e8afbc9646152092060b91730b1507c536328\"" Jan 13 20:35:22.107366 containerd[1614]: time="2025-01-13T20:35:22.107316710Z" level=info msg="Forcibly stopping sandbox \"132a114f613e133482aae87ccd3e8afbc9646152092060b91730b1507c536328\"" Jan 13 20:35:22.107446 containerd[1614]: time="2025-01-13T20:35:22.107424590Z" level=info msg="TearDown network for sandbox \"132a114f613e133482aae87ccd3e8afbc9646152092060b91730b1507c536328\" successfully" Jan 13 20:35:22.112073 containerd[1614]: time="2025-01-13T20:35:22.111991666Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"132a114f613e133482aae87ccd3e8afbc9646152092060b91730b1507c536328\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.112073 containerd[1614]: time="2025-01-13T20:35:22.112053786Z" level=info msg="RemovePodSandbox \"132a114f613e133482aae87ccd3e8afbc9646152092060b91730b1507c536328\" returns successfully" Jan 13 20:35:22.112678 containerd[1614]: time="2025-01-13T20:35:22.112594906Z" level=info msg="StopPodSandbox for \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\"" Jan 13 20:35:22.113046 containerd[1614]: time="2025-01-13T20:35:22.112697586Z" level=info msg="TearDown network for sandbox \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\" successfully" Jan 13 20:35:22.113046 containerd[1614]: time="2025-01-13T20:35:22.112709266Z" level=info msg="StopPodSandbox for \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\" returns successfully" Jan 13 20:35:22.113270 containerd[1614]: time="2025-01-13T20:35:22.113132305Z" level=info msg="RemovePodSandbox for \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\"" Jan 13 20:35:22.113270 containerd[1614]: time="2025-01-13T20:35:22.113155105Z" level=info msg="Forcibly stopping sandbox \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\"" Jan 13 20:35:22.113410 containerd[1614]: time="2025-01-13T20:35:22.113261265Z" level=info msg="TearDown network for sandbox \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\" successfully" Jan 13 20:35:22.116818 containerd[1614]: time="2025-01-13T20:35:22.116751822Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.116818 containerd[1614]: time="2025-01-13T20:35:22.116820102Z" level=info msg="RemovePodSandbox \"b87bae7361bb1ba1799d9c1f44fd5fcd507de4bd59a1cc95909d9d646e248166\" returns successfully" Jan 13 20:35:22.117647 containerd[1614]: time="2025-01-13T20:35:22.117569942Z" level=info msg="StopPodSandbox for \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\"" Jan 13 20:35:22.117750 containerd[1614]: time="2025-01-13T20:35:22.117690142Z" level=info msg="TearDown network for sandbox \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\" successfully" Jan 13 20:35:22.117750 containerd[1614]: time="2025-01-13T20:35:22.117701942Z" level=info msg="StopPodSandbox for \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\" returns successfully" Jan 13 20:35:22.118149 containerd[1614]: time="2025-01-13T20:35:22.118102061Z" level=info msg="RemovePodSandbox for \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\"" Jan 13 20:35:22.118149 containerd[1614]: time="2025-01-13T20:35:22.118131381Z" level=info msg="Forcibly stopping sandbox \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\"" Jan 13 20:35:22.118352 containerd[1614]: time="2025-01-13T20:35:22.118219221Z" level=info msg="TearDown network for sandbox \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\" successfully" Jan 13 20:35:22.121444 containerd[1614]: time="2025-01-13T20:35:22.121376099Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.121444 containerd[1614]: time="2025-01-13T20:35:22.121443459Z" level=info msg="RemovePodSandbox \"496d8b41ff3fa544a7b889243ab774ce4cc8103be4b3e9c242a5c5798849f117\" returns successfully" Jan 13 20:35:22.122020 containerd[1614]: time="2025-01-13T20:35:22.121976458Z" level=info msg="StopPodSandbox for \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\"" Jan 13 20:35:22.122145 containerd[1614]: time="2025-01-13T20:35:22.122075658Z" level=info msg="TearDown network for sandbox \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\" successfully" Jan 13 20:35:22.122145 containerd[1614]: time="2025-01-13T20:35:22.122086738Z" level=info msg="StopPodSandbox for \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\" returns successfully" Jan 13 20:35:22.122764 containerd[1614]: time="2025-01-13T20:35:22.122718818Z" level=info msg="RemovePodSandbox for \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\"" Jan 13 20:35:22.122764 containerd[1614]: time="2025-01-13T20:35:22.122752338Z" level=info msg="Forcibly stopping sandbox \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\"" Jan 13 20:35:22.122890 containerd[1614]: time="2025-01-13T20:35:22.122831817Z" level=info msg="TearDown network for sandbox \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\" successfully" Jan 13 20:35:22.126043 containerd[1614]: time="2025-01-13T20:35:22.126008255Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.126154 containerd[1614]: time="2025-01-13T20:35:22.126065535Z" level=info msg="RemovePodSandbox \"8c4e61b316b59442045ad03b958111c2d05e2037de8f858ab817ad2313aee565\" returns successfully" Jan 13 20:35:22.126600 containerd[1614]: time="2025-01-13T20:35:22.126539054Z" level=info msg="StopPodSandbox for \"782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147\"" Jan 13 20:35:22.126670 containerd[1614]: time="2025-01-13T20:35:22.126635214Z" level=info msg="TearDown network for sandbox \"782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147\" successfully" Jan 13 20:35:22.126670 containerd[1614]: time="2025-01-13T20:35:22.126645614Z" level=info msg="StopPodSandbox for \"782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147\" returns successfully" Jan 13 20:35:22.127081 containerd[1614]: time="2025-01-13T20:35:22.127053574Z" level=info msg="RemovePodSandbox for \"782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147\"" Jan 13 20:35:22.127081 containerd[1614]: time="2025-01-13T20:35:22.127082214Z" level=info msg="Forcibly stopping sandbox \"782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147\"" Jan 13 20:35:22.127266 containerd[1614]: time="2025-01-13T20:35:22.127139574Z" level=info msg="TearDown network for sandbox \"782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147\" successfully" Jan 13 20:35:22.131334 containerd[1614]: time="2025-01-13T20:35:22.131260611Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.131465 containerd[1614]: time="2025-01-13T20:35:22.131380290Z" level=info msg="RemovePodSandbox \"782d79c8719ab36e1dc97e23b35064ebe4f00662302ef7058be4d1df85c61147\" returns successfully" Jan 13 20:35:22.132052 containerd[1614]: time="2025-01-13T20:35:22.131945370Z" level=info msg="StopPodSandbox for \"bbb1b8a23c5cec305f7ca62a9f9b0f48d199a40b1827c74ec626a2d84e11f8f9\"" Jan 13 20:35:22.132170 containerd[1614]: time="2025-01-13T20:35:22.132127250Z" level=info msg="TearDown network for sandbox \"bbb1b8a23c5cec305f7ca62a9f9b0f48d199a40b1827c74ec626a2d84e11f8f9\" successfully" Jan 13 20:35:22.132170 containerd[1614]: time="2025-01-13T20:35:22.132142250Z" level=info msg="StopPodSandbox for \"bbb1b8a23c5cec305f7ca62a9f9b0f48d199a40b1827c74ec626a2d84e11f8f9\" returns successfully" Jan 13 20:35:22.133288 containerd[1614]: time="2025-01-13T20:35:22.132652489Z" level=info msg="RemovePodSandbox for \"bbb1b8a23c5cec305f7ca62a9f9b0f48d199a40b1827c74ec626a2d84e11f8f9\"" Jan 13 20:35:22.133288 containerd[1614]: time="2025-01-13T20:35:22.132706249Z" level=info msg="Forcibly stopping sandbox \"bbb1b8a23c5cec305f7ca62a9f9b0f48d199a40b1827c74ec626a2d84e11f8f9\"" Jan 13 20:35:22.133288 containerd[1614]: time="2025-01-13T20:35:22.132842929Z" level=info msg="TearDown network for sandbox \"bbb1b8a23c5cec305f7ca62a9f9b0f48d199a40b1827c74ec626a2d84e11f8f9\" successfully" Jan 13 20:35:22.137352 containerd[1614]: time="2025-01-13T20:35:22.137298446Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bbb1b8a23c5cec305f7ca62a9f9b0f48d199a40b1827c74ec626a2d84e11f8f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.137467 containerd[1614]: time="2025-01-13T20:35:22.137379966Z" level=info msg="RemovePodSandbox \"bbb1b8a23c5cec305f7ca62a9f9b0f48d199a40b1827c74ec626a2d84e11f8f9\" returns successfully" Jan 13 20:35:22.138430 containerd[1614]: time="2025-01-13T20:35:22.138223925Z" level=info msg="StopPodSandbox for \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\"" Jan 13 20:35:22.138430 containerd[1614]: time="2025-01-13T20:35:22.138339125Z" level=info msg="TearDown network for sandbox \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\" successfully" Jan 13 20:35:22.138430 containerd[1614]: time="2025-01-13T20:35:22.138349725Z" level=info msg="StopPodSandbox for \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\" returns successfully" Jan 13 20:35:22.139274 containerd[1614]: time="2025-01-13T20:35:22.138959284Z" level=info msg="RemovePodSandbox for \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\"" Jan 13 20:35:22.139274 containerd[1614]: time="2025-01-13T20:35:22.138988964Z" level=info msg="Forcibly stopping sandbox \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\"" Jan 13 20:35:22.139274 containerd[1614]: time="2025-01-13T20:35:22.139055844Z" level=info msg="TearDown network for sandbox \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\" successfully" Jan 13 20:35:22.143313 containerd[1614]: time="2025-01-13T20:35:22.143190761Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.143313 containerd[1614]: time="2025-01-13T20:35:22.143312761Z" level=info msg="RemovePodSandbox \"b9c351f65001d831bb350d2f224a4b36df68a5dbc95b5ad8940e7f06819a88a2\" returns successfully" Jan 13 20:35:22.144002 containerd[1614]: time="2025-01-13T20:35:22.143928640Z" level=info msg="StopPodSandbox for \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\"" Jan 13 20:35:22.144104 containerd[1614]: time="2025-01-13T20:35:22.144034760Z" level=info msg="TearDown network for sandbox \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\" successfully" Jan 13 20:35:22.144104 containerd[1614]: time="2025-01-13T20:35:22.144045760Z" level=info msg="StopPodSandbox for \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\" returns successfully" Jan 13 20:35:22.144436 containerd[1614]: time="2025-01-13T20:35:22.144396520Z" level=info msg="RemovePodSandbox for \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\"" Jan 13 20:35:22.144436 containerd[1614]: time="2025-01-13T20:35:22.144434520Z" level=info msg="Forcibly stopping sandbox \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\"" Jan 13 20:35:22.144570 containerd[1614]: time="2025-01-13T20:35:22.144507360Z" level=info msg="TearDown network for sandbox \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\" successfully" Jan 13 20:35:22.148660 containerd[1614]: time="2025-01-13T20:35:22.148557836Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.148660 containerd[1614]: time="2025-01-13T20:35:22.148640716Z" level=info msg="RemovePodSandbox \"56e9858eef16c6c62313f06351cafad27aae4bcf377104b1f2fd70b270d31934\" returns successfully" Jan 13 20:35:22.149567 containerd[1614]: time="2025-01-13T20:35:22.149187916Z" level=info msg="StopPodSandbox for \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\"" Jan 13 20:35:22.149567 containerd[1614]: time="2025-01-13T20:35:22.149407076Z" level=info msg="TearDown network for sandbox \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\" successfully" Jan 13 20:35:22.149567 containerd[1614]: time="2025-01-13T20:35:22.149428076Z" level=info msg="StopPodSandbox for \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\" returns successfully" Jan 13 20:35:22.149894 containerd[1614]: time="2025-01-13T20:35:22.149757995Z" level=info msg="RemovePodSandbox for \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\"" Jan 13 20:35:22.149894 containerd[1614]: time="2025-01-13T20:35:22.149800035Z" level=info msg="Forcibly stopping sandbox \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\"" Jan 13 20:35:22.149894 containerd[1614]: time="2025-01-13T20:35:22.149869515Z" level=info msg="TearDown network for sandbox \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\" successfully" Jan 13 20:35:22.153350 containerd[1614]: time="2025-01-13T20:35:22.153289273Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.153350 containerd[1614]: time="2025-01-13T20:35:22.153362833Z" level=info msg="RemovePodSandbox \"0780e1da0653fd94f75ff241e4ed3babdb56c5a9b38ff5eeb473063dc7a56764\" returns successfully" Jan 13 20:35:22.154107 containerd[1614]: time="2025-01-13T20:35:22.153810232Z" level=info msg="StopPodSandbox for \"482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b\"" Jan 13 20:35:22.154107 containerd[1614]: time="2025-01-13T20:35:22.153972992Z" level=info msg="TearDown network for sandbox \"482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b\" successfully" Jan 13 20:35:22.154107 containerd[1614]: time="2025-01-13T20:35:22.153990112Z" level=info msg="StopPodSandbox for \"482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b\" returns successfully" Jan 13 20:35:22.154487 containerd[1614]: time="2025-01-13T20:35:22.154401352Z" level=info msg="RemovePodSandbox for \"482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b\"" Jan 13 20:35:22.154487 containerd[1614]: time="2025-01-13T20:35:22.154428552Z" level=info msg="Forcibly stopping sandbox \"482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b\"" Jan 13 20:35:22.154582 containerd[1614]: time="2025-01-13T20:35:22.154496232Z" level=info msg="TearDown network for sandbox \"482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b\" successfully" Jan 13 20:35:22.157987 containerd[1614]: time="2025-01-13T20:35:22.157766709Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.157987 containerd[1614]: time="2025-01-13T20:35:22.157849189Z" level=info msg="RemovePodSandbox \"482df31c31ede80f8219ba4229f5f097dacea07274c5e44193220ae27c4bcd3b\" returns successfully" Jan 13 20:35:22.158639 containerd[1614]: time="2025-01-13T20:35:22.158369988Z" level=info msg="StopPodSandbox for \"d56c7d0d71eb73c7866002e95acff1e1b317fb33407caf682d8356b9655b8d32\"" Jan 13 20:35:22.159081 containerd[1614]: time="2025-01-13T20:35:22.158983388Z" level=info msg="TearDown network for sandbox \"d56c7d0d71eb73c7866002e95acff1e1b317fb33407caf682d8356b9655b8d32\" successfully" Jan 13 20:35:22.159081 containerd[1614]: time="2025-01-13T20:35:22.159004388Z" level=info msg="StopPodSandbox for \"d56c7d0d71eb73c7866002e95acff1e1b317fb33407caf682d8356b9655b8d32\" returns successfully" Jan 13 20:35:22.159673 containerd[1614]: time="2025-01-13T20:35:22.159415028Z" level=info msg="RemovePodSandbox for \"d56c7d0d71eb73c7866002e95acff1e1b317fb33407caf682d8356b9655b8d32\"" Jan 13 20:35:22.159673 containerd[1614]: time="2025-01-13T20:35:22.159447308Z" level=info msg="Forcibly stopping sandbox \"d56c7d0d71eb73c7866002e95acff1e1b317fb33407caf682d8356b9655b8d32\"" Jan 13 20:35:22.159916 containerd[1614]: time="2025-01-13T20:35:22.159892867Z" level=info msg="TearDown network for sandbox \"d56c7d0d71eb73c7866002e95acff1e1b317fb33407caf682d8356b9655b8d32\" successfully" Jan 13 20:35:22.164168 containerd[1614]: time="2025-01-13T20:35:22.163973744Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d56c7d0d71eb73c7866002e95acff1e1b317fb33407caf682d8356b9655b8d32\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.164168 containerd[1614]: time="2025-01-13T20:35:22.164048424Z" level=info msg="RemovePodSandbox \"d56c7d0d71eb73c7866002e95acff1e1b317fb33407caf682d8356b9655b8d32\" returns successfully" Jan 13 20:35:22.164688 containerd[1614]: time="2025-01-13T20:35:22.164654903Z" level=info msg="StopPodSandbox for \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\"" Jan 13 20:35:22.164802 containerd[1614]: time="2025-01-13T20:35:22.164780703Z" level=info msg="TearDown network for sandbox \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\" successfully" Jan 13 20:35:22.164839 containerd[1614]: time="2025-01-13T20:35:22.164799903Z" level=info msg="StopPodSandbox for \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\" returns successfully" Jan 13 20:35:22.167024 containerd[1614]: time="2025-01-13T20:35:22.165532223Z" level=info msg="RemovePodSandbox for \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\"" Jan 13 20:35:22.167024 containerd[1614]: time="2025-01-13T20:35:22.165565103Z" level=info msg="Forcibly stopping sandbox \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\"" Jan 13 20:35:22.167024 containerd[1614]: time="2025-01-13T20:35:22.165650182Z" level=info msg="TearDown network for sandbox \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\" successfully" Jan 13 20:35:22.169923 containerd[1614]: time="2025-01-13T20:35:22.169879339Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.170101 containerd[1614]: time="2025-01-13T20:35:22.170080259Z" level=info msg="RemovePodSandbox \"3c174c3bfab3c110761b0df829da348d6980df78063e71c0a3a7dc0f2b08bb62\" returns successfully" Jan 13 20:35:22.170719 containerd[1614]: time="2025-01-13T20:35:22.170695858Z" level=info msg="StopPodSandbox for \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\"" Jan 13 20:35:22.170945 containerd[1614]: time="2025-01-13T20:35:22.170930018Z" level=info msg="TearDown network for sandbox \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\" successfully" Jan 13 20:35:22.171012 containerd[1614]: time="2025-01-13T20:35:22.170998978Z" level=info msg="StopPodSandbox for \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\" returns successfully" Jan 13 20:35:22.171427 containerd[1614]: time="2025-01-13T20:35:22.171385738Z" level=info msg="RemovePodSandbox for \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\"" Jan 13 20:35:22.171510 containerd[1614]: time="2025-01-13T20:35:22.171495338Z" level=info msg="Forcibly stopping sandbox \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\"" Jan 13 20:35:22.171625 containerd[1614]: time="2025-01-13T20:35:22.171611178Z" level=info msg="TearDown network for sandbox \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\" successfully" Jan 13 20:35:22.174582 containerd[1614]: time="2025-01-13T20:35:22.174541735Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.174807 containerd[1614]: time="2025-01-13T20:35:22.174787815Z" level=info msg="RemovePodSandbox \"06e0d08ec71b03ab041cbff6702f1a75939ca062efd14215e43a0ec51a02cd5c\" returns successfully" Jan 13 20:35:22.175496 containerd[1614]: time="2025-01-13T20:35:22.175464214Z" level=info msg="StopPodSandbox for \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\"" Jan 13 20:35:22.175609 containerd[1614]: time="2025-01-13T20:35:22.175588894Z" level=info msg="TearDown network for sandbox \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\" successfully" Jan 13 20:35:22.175646 containerd[1614]: time="2025-01-13T20:35:22.175611134Z" level=info msg="StopPodSandbox for \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\" returns successfully" Jan 13 20:35:22.176083 containerd[1614]: time="2025-01-13T20:35:22.176044854Z" level=info msg="RemovePodSandbox for \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\"" Jan 13 20:35:22.176083 containerd[1614]: time="2025-01-13T20:35:22.176065534Z" level=info msg="Forcibly stopping sandbox \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\"" Jan 13 20:35:22.176908 containerd[1614]: time="2025-01-13T20:35:22.176313774Z" level=info msg="TearDown network for sandbox \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\" successfully" Jan 13 20:35:22.179612 containerd[1614]: time="2025-01-13T20:35:22.179560211Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.179783 containerd[1614]: time="2025-01-13T20:35:22.179762371Z" level=info msg="RemovePodSandbox \"d1d44f51b46dcdf60d62090fe897f285546dc099524ac0f1b43a0a88eea0e04f\" returns successfully" Jan 13 20:35:22.180337 containerd[1614]: time="2025-01-13T20:35:22.180306410Z" level=info msg="StopPodSandbox for \"0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d\"" Jan 13 20:35:22.180514 containerd[1614]: time="2025-01-13T20:35:22.180497890Z" level=info msg="TearDown network for sandbox \"0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d\" successfully" Jan 13 20:35:22.180579 containerd[1614]: time="2025-01-13T20:35:22.180566250Z" level=info msg="StopPodSandbox for \"0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d\" returns successfully" Jan 13 20:35:22.181010 containerd[1614]: time="2025-01-13T20:35:22.180967370Z" level=info msg="RemovePodSandbox for \"0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d\"" Jan 13 20:35:22.181050 containerd[1614]: time="2025-01-13T20:35:22.181017130Z" level=info msg="Forcibly stopping sandbox \"0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d\"" Jan 13 20:35:22.181126 containerd[1614]: time="2025-01-13T20:35:22.181107650Z" level=info msg="TearDown network for sandbox \"0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d\" successfully" Jan 13 20:35:22.186886 containerd[1614]: time="2025-01-13T20:35:22.186816965Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.187051 containerd[1614]: time="2025-01-13T20:35:22.186907965Z" level=info msg="RemovePodSandbox \"0a3d2ff41c2160490adf26f69b9279c6564ee5c6072142ec5d1694b2b5b7d05d\" returns successfully" Jan 13 20:35:22.189245 containerd[1614]: time="2025-01-13T20:35:22.189117323Z" level=info msg="StopPodSandbox for \"b7fb397a8fde7e7a7708d282f4d3b2e59715d554f62fdd77ff147adc89c3326a\"" Jan 13 20:35:22.189345 containerd[1614]: time="2025-01-13T20:35:22.189323803Z" level=info msg="TearDown network for sandbox \"b7fb397a8fde7e7a7708d282f4d3b2e59715d554f62fdd77ff147adc89c3326a\" successfully" Jan 13 20:35:22.189449 containerd[1614]: time="2025-01-13T20:35:22.189346323Z" level=info msg="StopPodSandbox for \"b7fb397a8fde7e7a7708d282f4d3b2e59715d554f62fdd77ff147adc89c3326a\" returns successfully" Jan 13 20:35:22.189865 containerd[1614]: time="2025-01-13T20:35:22.189824083Z" level=info msg="RemovePodSandbox for \"b7fb397a8fde7e7a7708d282f4d3b2e59715d554f62fdd77ff147adc89c3326a\"" Jan 13 20:35:22.189923 containerd[1614]: time="2025-01-13T20:35:22.189881403Z" level=info msg="Forcibly stopping sandbox \"b7fb397a8fde7e7a7708d282f4d3b2e59715d554f62fdd77ff147adc89c3326a\"" Jan 13 20:35:22.190027 containerd[1614]: time="2025-01-13T20:35:22.190001843Z" level=info msg="TearDown network for sandbox \"b7fb397a8fde7e7a7708d282f4d3b2e59715d554f62fdd77ff147adc89c3326a\" successfully" Jan 13 20:35:22.194453 containerd[1614]: time="2025-01-13T20:35:22.194389839Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7fb397a8fde7e7a7708d282f4d3b2e59715d554f62fdd77ff147adc89c3326a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 20:35:22.194453 containerd[1614]: time="2025-01-13T20:35:22.194463519Z" level=info msg="RemovePodSandbox \"b7fb397a8fde7e7a7708d282f4d3b2e59715d554f62fdd77ff147adc89c3326a\" returns successfully" Jan 13 20:35:22.385665 kubelet[3072]: I0113 20:35:22.385011 3072 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:35:40.532601 update_engine[1593]: I20250113 20:35:40.531803 1593 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 13 20:35:40.532601 update_engine[1593]: I20250113 20:35:40.531862 1593 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 13 20:35:40.532601 update_engine[1593]: I20250113 20:35:40.532160 1593 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 13 20:35:40.533947 update_engine[1593]: I20250113 20:35:40.533684 1593 omaha_request_params.cc:62] Current group set to stable Jan 13 20:35:40.536879 update_engine[1593]: I20250113 20:35:40.535002 1593 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 13 20:35:40.536879 update_engine[1593]: I20250113 20:35:40.535032 1593 update_attempter.cc:643] Scheduling an action processor start. 
Jan 13 20:35:40.536879 update_engine[1593]: I20250113 20:35:40.535051 1593 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 13 20:35:40.536879 update_engine[1593]: I20250113 20:35:40.535133 1593 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 13 20:35:40.536879 update_engine[1593]: I20250113 20:35:40.535243 1593 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 13 20:35:40.536879 update_engine[1593]: I20250113 20:35:40.535254 1593 omaha_request_action.cc:272] Request: Jan 13 20:35:40.536879 update_engine[1593]: Jan 13 20:35:40.536879 update_engine[1593]: Jan 13 20:35:40.536879 update_engine[1593]: Jan 13 20:35:40.536879 update_engine[1593]: Jan 13 20:35:40.536879 update_engine[1593]: Jan 13 20:35:40.536879 update_engine[1593]: Jan 13 20:35:40.536879 update_engine[1593]: Jan 13 20:35:40.536879 update_engine[1593]: Jan 13 20:35:40.536879 update_engine[1593]: I20250113 20:35:40.535260 1593 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:35:40.538999 update_engine[1593]: I20250113 20:35:40.538962 1593 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:35:40.539542 update_engine[1593]: I20250113 20:35:40.539511 1593 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 13 20:35:40.542001 update_engine[1593]: E20250113 20:35:40.541956 1593 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:35:40.545275 locksmithd[1641]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 13 20:35:40.547831 update_engine[1593]: I20250113 20:35:40.547401 1593 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 13 20:35:50.464571 update_engine[1593]: I20250113 20:35:50.463788 1593 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:35:50.464571 update_engine[1593]: I20250113 20:35:50.464167 1593 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:35:50.464571 update_engine[1593]: I20250113 20:35:50.464560 1593 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 20:35:50.465124 update_engine[1593]: E20250113 20:35:50.465022 1593 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:35:50.465227 update_engine[1593]: I20250113 20:35:50.465121 1593 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 13 20:36:00.467973 update_engine[1593]: I20250113 20:36:00.467885 1593 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:36:00.468421 update_engine[1593]: I20250113 20:36:00.468153 1593 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:36:00.468493 update_engine[1593]: I20250113 20:36:00.468451 1593 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 13 20:36:00.469270 update_engine[1593]: E20250113 20:36:00.468832 1593 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:36:00.469270 update_engine[1593]: I20250113 20:36:00.468909 1593 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 13 20:36:10.469302 update_engine[1593]: I20250113 20:36:10.468849 1593 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:36:10.469302 update_engine[1593]: I20250113 20:36:10.469105 1593 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:36:10.470067 update_engine[1593]: I20250113 20:36:10.469383 1593 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 13 20:36:10.470067 update_engine[1593]: E20250113 20:36:10.469861 1593 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:36:10.470067 update_engine[1593]: I20250113 20:36:10.469945 1593 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 13 20:36:10.470067 update_engine[1593]: I20250113 20:36:10.469959 1593 omaha_request_action.cc:617] Omaha request response: Jan 13 20:36:10.470067 update_engine[1593]: E20250113 20:36:10.470042 1593 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 13 20:36:10.470067 update_engine[1593]: I20250113 20:36:10.470062 1593 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 13 20:36:10.470067 update_engine[1593]: I20250113 20:36:10.470069 1593 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 20:36:10.470067 update_engine[1593]: I20250113 20:36:10.470075 1593 update_attempter.cc:306] Processing Done. Jan 13 20:36:10.470912 update_engine[1593]: E20250113 20:36:10.470093 1593 update_attempter.cc:619] Update failed. 
Jan 13 20:36:10.470912 update_engine[1593]: I20250113 20:36:10.470100 1593 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 13 20:36:10.470912 update_engine[1593]: I20250113 20:36:10.470106 1593 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 13 20:36:10.470912 update_engine[1593]: I20250113 20:36:10.470113 1593 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 13 20:36:10.470912 update_engine[1593]: I20250113 20:36:10.470199 1593 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 13 20:36:10.470912 update_engine[1593]: I20250113 20:36:10.470255 1593 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 13 20:36:10.470912 update_engine[1593]: I20250113 20:36:10.470263 1593 omaha_request_action.cc:272] Request: Jan 13 20:36:10.470912 update_engine[1593]: Jan 13 20:36:10.470912 update_engine[1593]: Jan 13 20:36:10.470912 update_engine[1593]: Jan 13 20:36:10.470912 update_engine[1593]: Jan 13 20:36:10.470912 update_engine[1593]: Jan 13 20:36:10.470912 update_engine[1593]: Jan 13 20:36:10.470912 update_engine[1593]: I20250113 20:36:10.470269 1593 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 13 20:36:10.470912 update_engine[1593]: I20250113 20:36:10.470418 1593 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 13 20:36:10.470912 update_engine[1593]: I20250113 20:36:10.470635 1593 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 13 20:36:10.472438 update_engine[1593]: E20250113 20:36:10.471027 1593 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 13 20:36:10.472438 update_engine[1593]: I20250113 20:36:10.471099 1593 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 13 20:36:10.472438 update_engine[1593]: I20250113 20:36:10.471108 1593 omaha_request_action.cc:617] Omaha request response: Jan 13 20:36:10.472438 update_engine[1593]: I20250113 20:36:10.471117 1593 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 20:36:10.472438 update_engine[1593]: I20250113 20:36:10.471122 1593 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 13 20:36:10.472438 update_engine[1593]: I20250113 20:36:10.471128 1593 update_attempter.cc:306] Processing Done. Jan 13 20:36:10.472438 update_engine[1593]: I20250113 20:36:10.471135 1593 update_attempter.cc:310] Error event sent. Jan 13 20:36:10.472438 update_engine[1593]: I20250113 20:36:10.471145 1593 update_check_scheduler.cc:74] Next update check in 46m25s Jan 13 20:36:10.473037 locksmithd[1641]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 13 20:36:10.473531 locksmithd[1641]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 13 20:38:27.259833 systemd[1]: run-containerd-runc-k8s.io-4a89064e5c237afda131e396d6f2a366efc2fefbe986a74d131e26ba89d5ac4b-runc.pY6BFN.mount: Deactivated successfully. Jan 13 20:39:14.619779 systemd[1]: Started sshd@7-138.199.152.196:22-147.75.109.163:49254.service - OpenSSH per-connection server daemon (147.75.109.163:49254). 
Jan 13 20:39:15.620028 sshd[6610]: Accepted publickey for core from 147.75.109.163 port 49254 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:39:15.622296 sshd-session[6610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:39:15.628627 systemd-logind[1591]: New session 8 of user core.
Jan 13 20:39:15.640725 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 20:39:16.388304 sshd[6613]: Connection closed by 147.75.109.163 port 49254
Jan 13 20:39:16.389430 sshd-session[6610]: pam_unix(sshd:session): session closed for user core
Jan 13 20:39:16.395046 systemd[1]: sshd@7-138.199.152.196:22-147.75.109.163:49254.service: Deactivated successfully.
Jan 13 20:39:16.395151 systemd-logind[1591]: Session 8 logged out. Waiting for processes to exit.
Jan 13 20:39:16.399117 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 20:39:16.401802 systemd-logind[1591]: Removed session 8.
Jan 13 20:39:21.561712 systemd[1]: Started sshd@8-138.199.152.196:22-147.75.109.163:58108.service - OpenSSH per-connection server daemon (147.75.109.163:58108).
Jan 13 20:39:22.544112 sshd[6626]: Accepted publickey for core from 147.75.109.163 port 58108 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:39:22.546140 sshd-session[6626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:39:22.551884 systemd-logind[1591]: New session 9 of user core.
Jan 13 20:39:22.557895 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 20:39:23.299928 sshd[6631]: Connection closed by 147.75.109.163 port 58108
Jan 13 20:39:23.300814 sshd-session[6626]: pam_unix(sshd:session): session closed for user core
Jan 13 20:39:23.305501 systemd[1]: sshd@8-138.199.152.196:22-147.75.109.163:58108.service: Deactivated successfully.
Jan 13 20:39:23.310501 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 20:39:23.311439 systemd-logind[1591]: Session 9 logged out. Waiting for processes to exit.
Jan 13 20:39:23.312529 systemd-logind[1591]: Removed session 9.
Jan 13 20:39:28.471744 systemd[1]: Started sshd@9-138.199.152.196:22-147.75.109.163:49638.service - OpenSSH per-connection server daemon (147.75.109.163:49638).
Jan 13 20:39:29.460000 sshd[6685]: Accepted publickey for core from 147.75.109.163 port 49638 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:39:29.462020 sshd-session[6685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:39:29.468631 systemd-logind[1591]: New session 10 of user core.
Jan 13 20:39:29.475779 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 20:39:30.225524 sshd[6688]: Connection closed by 147.75.109.163 port 49638
Jan 13 20:39:30.225391 sshd-session[6685]: pam_unix(sshd:session): session closed for user core
Jan 13 20:39:30.232438 systemd[1]: sshd@9-138.199.152.196:22-147.75.109.163:49638.service: Deactivated successfully.
Jan 13 20:39:30.236826 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 20:39:30.238647 systemd-logind[1591]: Session 10 logged out. Waiting for processes to exit.
Jan 13 20:39:30.239986 systemd-logind[1591]: Removed session 10.
Jan 13 20:39:30.394483 systemd[1]: Started sshd@10-138.199.152.196:22-147.75.109.163:49640.service - OpenSSH per-connection server daemon (147.75.109.163:49640).
Jan 13 20:39:31.398248 sshd[6700]: Accepted publickey for core from 147.75.109.163 port 49640 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:39:31.399879 sshd-session[6700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:39:31.406505 systemd-logind[1591]: New session 11 of user core.
Jan 13 20:39:31.413682 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 20:39:32.202976 sshd[6703]: Connection closed by 147.75.109.163 port 49640
Jan 13 20:39:32.204441 sshd-session[6700]: pam_unix(sshd:session): session closed for user core
Jan 13 20:39:32.210944 systemd[1]: sshd@10-138.199.152.196:22-147.75.109.163:49640.service: Deactivated successfully.
Jan 13 20:39:32.217140 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 20:39:32.218347 systemd-logind[1591]: Session 11 logged out. Waiting for processes to exit.
Jan 13 20:39:32.220155 systemd-logind[1591]: Removed session 11.
Jan 13 20:39:32.369982 systemd[1]: Started sshd@11-138.199.152.196:22-147.75.109.163:49654.service - OpenSSH per-connection server daemon (147.75.109.163:49654).
Jan 13 20:39:33.347390 sshd[6715]: Accepted publickey for core from 147.75.109.163 port 49654 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:39:33.349505 sshd-session[6715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:39:33.355432 systemd-logind[1591]: New session 12 of user core.
Jan 13 20:39:33.360687 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 20:39:34.109595 sshd[6719]: Connection closed by 147.75.109.163 port 49654
Jan 13 20:39:34.110557 sshd-session[6715]: pam_unix(sshd:session): session closed for user core
Jan 13 20:39:34.116552 systemd[1]: sshd@11-138.199.152.196:22-147.75.109.163:49654.service: Deactivated successfully.
Jan 13 20:39:34.120195 systemd-logind[1591]: Session 12 logged out. Waiting for processes to exit.
Jan 13 20:39:34.120391 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 20:39:34.123124 systemd-logind[1591]: Removed session 12.
Jan 13 20:39:39.275616 systemd[1]: Started sshd@12-138.199.152.196:22-147.75.109.163:47066.service - OpenSSH per-connection server daemon (147.75.109.163:47066).
Jan 13 20:39:40.264917 sshd[6732]: Accepted publickey for core from 147.75.109.163 port 47066 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:39:40.266967 sshd-session[6732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:39:40.275089 systemd-logind[1591]: New session 13 of user core.
Jan 13 20:39:40.278811 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 20:39:41.035275 sshd[6735]: Connection closed by 147.75.109.163 port 47066
Jan 13 20:39:41.037514 sshd-session[6732]: pam_unix(sshd:session): session closed for user core
Jan 13 20:39:41.041577 systemd[1]: sshd@12-138.199.152.196:22-147.75.109.163:47066.service: Deactivated successfully.
Jan 13 20:39:41.058797 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 20:39:41.058880 systemd-logind[1591]: Session 13 logged out. Waiting for processes to exit.
Jan 13 20:39:41.065080 systemd-logind[1591]: Removed session 13.
Jan 13 20:39:41.205703 systemd[1]: Started sshd@13-138.199.152.196:22-147.75.109.163:47070.service - OpenSSH per-connection server daemon (147.75.109.163:47070).
Jan 13 20:39:42.201181 sshd[6746]: Accepted publickey for core from 147.75.109.163 port 47070 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:39:42.203302 sshd-session[6746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:39:42.209499 systemd-logind[1591]: New session 14 of user core.
Jan 13 20:39:42.218851 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 20:39:43.068498 sshd[6749]: Connection closed by 147.75.109.163 port 47070
Jan 13 20:39:43.069151 sshd-session[6746]: pam_unix(sshd:session): session closed for user core
Jan 13 20:39:43.075047 systemd[1]: sshd@13-138.199.152.196:22-147.75.109.163:47070.service: Deactivated successfully.
Jan 13 20:39:43.080228 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 20:39:43.082491 systemd-logind[1591]: Session 14 logged out. Waiting for processes to exit.
Jan 13 20:39:43.083982 systemd-logind[1591]: Removed session 14.
Jan 13 20:39:43.240854 systemd[1]: Started sshd@14-138.199.152.196:22-147.75.109.163:47086.service - OpenSSH per-connection server daemon (147.75.109.163:47086).
Jan 13 20:39:44.236006 sshd[6758]: Accepted publickey for core from 147.75.109.163 port 47086 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:39:44.238479 sshd-session[6758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:39:44.244789 systemd-logind[1591]: New session 15 of user core.
Jan 13 20:39:44.249982 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 20:39:46.795372 sshd[6761]: Connection closed by 147.75.109.163 port 47086
Jan 13 20:39:46.796499 sshd-session[6758]: pam_unix(sshd:session): session closed for user core
Jan 13 20:39:46.803379 systemd[1]: sshd@14-138.199.152.196:22-147.75.109.163:47086.service: Deactivated successfully.
Jan 13 20:39:46.807055 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 20:39:46.808684 systemd-logind[1591]: Session 15 logged out. Waiting for processes to exit.
Jan 13 20:39:46.810139 systemd-logind[1591]: Removed session 15.
Jan 13 20:39:46.960096 systemd[1]: Started sshd@15-138.199.152.196:22-147.75.109.163:47096.service - OpenSSH per-connection server daemon (147.75.109.163:47096).
Jan 13 20:39:47.946795 sshd[6797]: Accepted publickey for core from 147.75.109.163 port 47096 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:39:47.949149 sshd-session[6797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:39:47.957200 systemd-logind[1591]: New session 16 of user core.
Jan 13 20:39:47.959572 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 20:39:48.844263 sshd[6801]: Connection closed by 147.75.109.163 port 47096
Jan 13 20:39:48.845314 sshd-session[6797]: pam_unix(sshd:session): session closed for user core
Jan 13 20:39:48.850795 systemd[1]: sshd@15-138.199.152.196:22-147.75.109.163:47096.service: Deactivated successfully.
Jan 13 20:39:48.856979 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 20:39:48.860633 systemd-logind[1591]: Session 16 logged out. Waiting for processes to exit.
Jan 13 20:39:48.862257 systemd-logind[1591]: Removed session 16.
Jan 13 20:39:49.012567 systemd[1]: Started sshd@16-138.199.152.196:22-147.75.109.163:47930.service - OpenSSH per-connection server daemon (147.75.109.163:47930).
Jan 13 20:39:49.996107 sshd[6810]: Accepted publickey for core from 147.75.109.163 port 47930 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:39:49.998373 sshd-session[6810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:39:50.004117 systemd-logind[1591]: New session 17 of user core.
Jan 13 20:39:50.009814 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 20:39:50.760517 sshd[6813]: Connection closed by 147.75.109.163 port 47930
Jan 13 20:39:50.761483 sshd-session[6810]: pam_unix(sshd:session): session closed for user core
Jan 13 20:39:50.765965 systemd[1]: sshd@16-138.199.152.196:22-147.75.109.163:47930.service: Deactivated successfully.
Jan 13 20:39:50.771444 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 20:39:50.773156 systemd-logind[1591]: Session 17 logged out. Waiting for processes to exit.
Jan 13 20:39:50.774810 systemd-logind[1591]: Removed session 17.
Jan 13 20:39:55.928111 systemd[1]: Started sshd@17-138.199.152.196:22-147.75.109.163:47936.service - OpenSSH per-connection server daemon (147.75.109.163:47936).
Jan 13 20:39:56.914291 sshd[6846]: Accepted publickey for core from 147.75.109.163 port 47936 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:39:56.918345 sshd-session[6846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:39:56.923452 systemd-logind[1591]: New session 18 of user core.
Jan 13 20:39:56.930609 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 20:39:57.673244 sshd[6849]: Connection closed by 147.75.109.163 port 47936
Jan 13 20:39:57.676460 sshd-session[6846]: pam_unix(sshd:session): session closed for user core
Jan 13 20:39:57.683619 systemd[1]: sshd@17-138.199.152.196:22-147.75.109.163:47936.service: Deactivated successfully.
Jan 13 20:39:57.694034 systemd-logind[1591]: Session 18 logged out. Waiting for processes to exit.
Jan 13 20:39:57.695574 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 20:39:57.697266 systemd-logind[1591]: Removed session 18.
Jan 13 20:40:02.841794 systemd[1]: Started sshd@18-138.199.152.196:22-147.75.109.163:35252.service - OpenSSH per-connection server daemon (147.75.109.163:35252).
Jan 13 20:40:03.840061 sshd[6898]: Accepted publickey for core from 147.75.109.163 port 35252 ssh2: RSA SHA256:deYc8PRIzQGYRfQ4zGwGGzIklxWaqt3H8qZh9jD9rBo
Jan 13 20:40:03.841805 sshd-session[6898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:40:03.846702 systemd-logind[1591]: New session 19 of user core.
Jan 13 20:40:03.851749 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 20:40:04.609448 sshd[6901]: Connection closed by 147.75.109.163 port 35252
Jan 13 20:40:04.610731 sshd-session[6898]: pam_unix(sshd:session): session closed for user core
Jan 13 20:40:04.617284 systemd[1]: sshd@18-138.199.152.196:22-147.75.109.163:35252.service: Deactivated successfully.
Jan 13 20:40:04.619736 systemd-logind[1591]: Session 19 logged out. Waiting for processes to exit.
Jan 13 20:40:04.621188 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 20:40:04.623191 systemd-logind[1591]: Removed session 19.