Oct 8 19:45:57.918188 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 8 19:45:57.918214 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Oct 8 18:25:39 -00 2024
Oct 8 19:45:57.918225 kernel: KASLR enabled
Oct 8 19:45:57.918231 kernel: efi: EFI v2.7 by EDK II
Oct 8 19:45:57.918238 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x133d4d698 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x13232ed18
Oct 8 19:45:57.918244 kernel: random: crng init done
Oct 8 19:45:57.918252 kernel: ACPI: Early table checksum verification disabled
Oct 8 19:45:57.918259 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS )
Oct 8 19:45:57.918266 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Oct 8 19:45:57.918272 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:57.918281 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:57.918287 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:57.918294 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:57.918301 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:57.918309 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:57.918318 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:57.918325 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:57.918591 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:57.918602 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 8 19:45:57.918610 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Oct 8 19:45:57.918617 kernel: NUMA: Failed to initialise from firmware
Oct 8 19:45:57.918624 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Oct 8 19:45:57.918631 kernel: NUMA: NODE_DATA [mem 0x139821800-0x139826fff]
Oct 8 19:45:57.918638 kernel: Zone ranges:
Oct 8 19:45:57.918646 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Oct 8 19:45:57.918653 kernel: DMA32 empty
Oct 8 19:45:57.918664 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Oct 8 19:45:57.918671 kernel: Movable zone start for each node
Oct 8 19:45:57.918678 kernel: Early memory node ranges
Oct 8 19:45:57.918685 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff]
Oct 8 19:45:57.918692 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff]
Oct 8 19:45:57.918700 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff]
Oct 8 19:45:57.918707 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff]
Oct 8 19:45:57.918714 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff]
Oct 8 19:45:57.918721 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Oct 8 19:45:57.918728 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Oct 8 19:45:57.918735 kernel: psci: probing for conduit method from ACPI.
Oct 8 19:45:57.918744 kernel: psci: PSCIv1.1 detected in firmware.
Oct 8 19:45:57.918751 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 8 19:45:57.918758 kernel: psci: Trusted OS migration not required
Oct 8 19:45:57.918769 kernel: psci: SMC Calling Convention v1.1
Oct 8 19:45:57.918776 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 8 19:45:57.918784 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Oct 8 19:45:57.918793 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Oct 8 19:45:57.918801 kernel: pcpu-alloc: [0] 0 [0] 1
Oct 8 19:45:57.918808 kernel: Detected PIPT I-cache on CPU0
Oct 8 19:45:57.918816 kernel: CPU features: detected: GIC system register CPU interface
Oct 8 19:45:57.918824 kernel: CPU features: detected: Hardware dirty bit management
Oct 8 19:45:57.918831 kernel: CPU features: detected: Spectre-v4
Oct 8 19:45:57.918838 kernel: CPU features: detected: Spectre-BHB
Oct 8 19:45:57.918846 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 8 19:45:57.918854 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 8 19:45:57.918861 kernel: CPU features: detected: ARM erratum 1418040
Oct 8 19:45:57.918869 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 8 19:45:57.918878 kernel: alternatives: applying boot alternatives
Oct 8 19:45:57.918887 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=f7968382bc5b46f9b6104a9f012cfba991c8ea306771e716a099618547de81d3
Oct 8 19:45:57.918895 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 19:45:57.918903 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 8 19:45:57.918911 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 19:45:57.918919 kernel: Fallback order for Node 0: 0
Oct 8 19:45:57.918926 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Oct 8 19:45:57.918934 kernel: Policy zone: Normal
Oct 8 19:45:57.918941 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 19:45:57.918949 kernel: software IO TLB: area num 2.
Oct 8 19:45:57.918957 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Oct 8 19:45:57.918966 kernel: Memory: 3881536K/4096000K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39360K init, 897K bss, 214464K reserved, 0K cma-reserved)
Oct 8 19:45:57.918974 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 8 19:45:57.918981 kernel: trace event string verifier disabled
Oct 8 19:45:57.918989 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 19:45:57.918998 kernel: rcu: RCU event tracing is enabled.
Oct 8 19:45:57.919007 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 8 19:45:57.919016 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 19:45:57.919023 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 19:45:57.919031 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 19:45:57.919039 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 8 19:45:57.919046 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 8 19:45:57.919055 kernel: GICv3: 256 SPIs implemented
Oct 8 19:45:57.919063 kernel: GICv3: 0 Extended SPIs implemented
Oct 8 19:45:57.919075 kernel: Root IRQ handler: gic_handle_irq
Oct 8 19:45:57.919084 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 8 19:45:57.919093 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 8 19:45:57.919101 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 8 19:45:57.919109 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Oct 8 19:45:57.919117 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Oct 8 19:45:57.919124 kernel: GICv3: using LPI property table @0x00000001000e0000
Oct 8 19:45:57.919132 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Oct 8 19:45:57.919140 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 19:45:57.919149 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:45:57.919156 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 8 19:45:57.919164 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 8 19:45:57.919172 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 8 19:45:57.919180 kernel: Console: colour dummy device 80x25
Oct 8 19:45:57.919188 kernel: ACPI: Core revision 20230628
Oct 8 19:45:57.919196 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 8 19:45:57.919204 kernel: pid_max: default: 32768 minimum: 301
Oct 8 19:45:57.919212 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 8 19:45:57.919220 kernel: landlock: Up and running.
Oct 8 19:45:57.919228 kernel: SELinux: Initializing.
Oct 8 19:45:57.919236 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:45:57.919244 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:45:57.919252 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:45:57.919260 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:45:57.919268 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 19:45:57.919276 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 19:45:57.919283 kernel: Platform MSI: ITS@0x8080000 domain created
Oct 8 19:45:57.919291 kernel: PCI/MSI: ITS@0x8080000 domain created
Oct 8 19:45:57.919299 kernel: Remapping and enabling EFI services.
Oct 8 19:45:57.919308 kernel: smp: Bringing up secondary CPUs ...
Oct 8 19:45:57.919316 kernel: Detected PIPT I-cache on CPU1
Oct 8 19:45:57.919324 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 8 19:45:57.919332 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Oct 8 19:45:57.919340 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:45:57.919347 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 8 19:45:57.919355 kernel: smp: Brought up 1 node, 2 CPUs
Oct 8 19:45:57.919363 kernel: SMP: Total of 2 processors activated.
Oct 8 19:45:57.919371 kernel: CPU features: detected: 32-bit EL0 Support
Oct 8 19:45:57.919380 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 8 19:45:57.919388 kernel: CPU features: detected: Common not Private translations
Oct 8 19:45:57.919401 kernel: CPU features: detected: CRC32 instructions
Oct 8 19:45:57.919411 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 8 19:45:57.919419 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 8 19:45:57.919436 kernel: CPU features: detected: LSE atomic instructions
Oct 8 19:45:57.919444 kernel: CPU features: detected: Privileged Access Never
Oct 8 19:45:57.919453 kernel: CPU features: detected: RAS Extension Support
Oct 8 19:45:57.919461 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 8 19:45:57.919471 kernel: CPU: All CPU(s) started at EL1
Oct 8 19:45:57.919480 kernel: alternatives: applying system-wide alternatives
Oct 8 19:45:57.919488 kernel: devtmpfs: initialized
Oct 8 19:45:57.919496 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 19:45:57.919505 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 8 19:45:57.919513 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 19:45:57.919521 kernel: SMBIOS 3.0.0 present.
Oct 8 19:45:57.919531 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Oct 8 19:45:57.919539 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 19:45:57.919547 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 8 19:45:57.919555 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 8 19:45:57.919564 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 8 19:45:57.919572 kernel: audit: initializing netlink subsys (disabled)
Oct 8 19:45:57.919580 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1
Oct 8 19:45:57.919588 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 19:45:57.919597 kernel: cpuidle: using governor menu
Oct 8 19:45:57.919606 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 8 19:45:57.919614 kernel: ASID allocator initialised with 32768 entries
Oct 8 19:45:57.919623 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 19:45:57.919631 kernel: Serial: AMBA PL011 UART driver
Oct 8 19:45:57.919639 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 8 19:45:57.919647 kernel: Modules: 0 pages in range for non-PLT usage
Oct 8 19:45:57.919656 kernel: Modules: 509024 pages in range for PLT usage
Oct 8 19:45:57.919664 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 19:45:57.919672 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 19:45:57.919682 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 8 19:45:57.919690 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 8 19:45:57.919698 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 19:45:57.919707 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 19:45:57.919715 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 8 19:45:57.919724 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 8 19:45:57.919732 kernel: ACPI: Added _OSI(Module Device)
Oct 8 19:45:57.919740 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 19:45:57.919748 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 19:45:57.919758 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 19:45:57.919766 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 19:45:57.919774 kernel: ACPI: Interpreter enabled
Oct 8 19:45:57.919782 kernel: ACPI: Using GIC for interrupt routing
Oct 8 19:45:57.919790 kernel: ACPI: MCFG table detected, 1 entries
Oct 8 19:45:57.919799 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 8 19:45:57.919807 kernel: printk: console [ttyAMA0] enabled
Oct 8 19:45:57.919815 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 8 19:45:57.919970 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 19:45:57.920055 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 8 19:45:57.920151 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 8 19:45:57.920249 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 8 19:45:57.920323 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 8 19:45:57.920334 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 8 19:45:57.920348 kernel: PCI host bridge to bus 0000:00
Oct 8 19:45:57.922515 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 8 19:45:57.922658 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 8 19:45:57.922728 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 8 19:45:57.922793 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 8 19:45:57.922884 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Oct 8 19:45:57.922968 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Oct 8 19:45:57.923043 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Oct 8 19:45:57.923122 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Oct 8 19:45:57.923205 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Oct 8 19:45:57.923280 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Oct 8 19:45:57.923370 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Oct 8 19:45:57.923480 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Oct 8 19:45:57.923570 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Oct 8 19:45:57.923645 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Oct 8 19:45:57.923744 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Oct 8 19:45:57.923830 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Oct 8 19:45:57.923911 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Oct 8 19:45:57.923985 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Oct 8 19:45:57.924064 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Oct 8 19:45:57.924141 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Oct 8 19:45:57.924280 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Oct 8 19:45:57.924370 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Oct 8 19:45:57.925618 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Oct 8 19:45:57.925712 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Oct 8 19:45:57.925794 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Oct 8 19:45:57.925872 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Oct 8 19:45:57.925960 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Oct 8 19:45:57.926034 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207]
Oct 8 19:45:57.926127 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Oct 8 19:45:57.926206 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Oct 8 19:45:57.926284 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 19:45:57.926362 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Oct 8 19:45:57.927528 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Oct 8 19:45:57.927627 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Oct 8 19:45:57.927718 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Oct 8 19:45:57.927800 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Oct 8 19:45:57.927879 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Oct 8 19:45:57.927962 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Oct 8 19:45:57.928038 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Oct 8 19:45:57.928130 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Oct 8 19:45:57.928422 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Oct 8 19:45:57.928557 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Oct 8 19:45:57.928638 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Oct 8 19:45:57.928712 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Oct 8 19:45:57.928793 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Oct 8 19:45:57.928874 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Oct 8 19:45:57.928946 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Oct 8 19:45:57.929021 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Oct 8 19:45:57.929096 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Oct 8 19:45:57.929168 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Oct 8 19:45:57.929238 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Oct 8 19:45:57.929313 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Oct 8 19:45:57.929387 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Oct 8 19:45:57.930003 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Oct 8 19:45:57.930092 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Oct 8 19:45:57.930163 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Oct 8 19:45:57.930233 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Oct 8 19:45:57.930305 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Oct 8 19:45:57.930374 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Oct 8 19:45:57.930835 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Oct 8 19:45:57.930922 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Oct 8 19:45:57.930995 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Oct 8 19:45:57.931066 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Oct 8 19:45:57.931139 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Oct 8 19:45:57.931210 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Oct 8 19:45:57.931281 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Oct 8 19:45:57.931360 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Oct 8 19:45:57.932519 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Oct 8 19:45:57.932626 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Oct 8 19:45:57.932735 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Oct 8 19:45:57.932804 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Oct 8 19:45:57.932867 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Oct 8 19:45:57.932936 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Oct 8 19:45:57.933000 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Oct 8 19:45:57.933071 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Oct 8 19:45:57.933137 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Oct 8 19:45:57.933200 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Oct 8 19:45:57.933265 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Oct 8 19:45:57.933329 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Oct 8 19:45:57.933396 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Oct 8 19:45:57.934742 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Oct 8 19:45:57.934828 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Oct 8 19:45:57.934893 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Oct 8 19:45:57.934962 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Oct 8 19:45:57.935026 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Oct 8 19:45:57.935093 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Oct 8 19:45:57.935157 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Oct 8 19:45:57.935227 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Oct 8 19:45:57.935294 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Oct 8 19:45:57.935360 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Oct 8 19:45:57.935437 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Oct 8 19:45:57.936623 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Oct 8 19:45:57.936699 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Oct 8 19:45:57.936769 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Oct 8 19:45:57.936840 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Oct 8 19:45:57.936904 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Oct 8 19:45:57.936968 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Oct 8 19:45:57.937032 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Oct 8 19:45:57.937094 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Oct 8 19:45:57.937160 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Oct 8 19:45:57.937223 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Oct 8 19:45:57.937287 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Oct 8 19:45:57.937352 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Oct 8 19:45:57.937417 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Oct 8 19:45:57.938631 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Oct 8 19:45:57.938704 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Oct 8 19:45:57.938769 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Oct 8 19:45:57.938835 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Oct 8 19:45:57.938898 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Oct 8 19:45:57.938964 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Oct 8 19:45:57.939036 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Oct 8 19:45:57.939103 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Oct 8 19:45:57.939186 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Oct 8 19:45:57.939272 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Oct 8 19:45:57.939359 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Oct 8 19:45:57.939435 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 19:45:57.940240 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Oct 8 19:45:57.940323 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Oct 8 19:45:57.940395 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Oct 8 19:45:57.940529 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Oct 8 19:45:57.940597 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Oct 8 19:45:57.940670 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Oct 8 19:45:57.940735 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Oct 8 19:45:57.940803 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Oct 8 19:45:57.940867 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Oct 8 19:45:57.940930 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Oct 8 19:45:57.941000 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Oct 8 19:45:57.941065 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Oct 8 19:45:57.941129 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Oct 8 19:45:57.941192 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Oct 8 19:45:57.941256 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Oct 8 19:45:57.941318 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Oct 8 19:45:57.941388 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Oct 8 19:45:57.941465 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Oct 8 19:45:57.941532 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Oct 8 19:45:57.941595 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Oct 8 19:45:57.941658 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Oct 8 19:45:57.941731 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Oct 8 19:45:57.941799 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Oct 8 19:45:57.941863 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Oct 8 19:45:57.941927 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Oct 8 19:45:57.941991 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Oct 8 19:45:57.942063 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Oct 8 19:45:57.942130 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Oct 8 19:45:57.942197 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Oct 8 19:45:57.942262 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Oct 8 19:45:57.942334 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Oct 8 19:45:57.942398 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Oct 8 19:45:57.942494 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Oct 8 19:45:57.942564 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Oct 8 19:45:57.942632 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Oct 8 19:45:57.942698 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Oct 8 19:45:57.942764 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Oct 8 19:45:57.942827 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Oct 8 19:45:57.942894 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Oct 8 19:45:57.942961 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Oct 8 19:45:57.943024 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Oct 8 19:45:57.943087 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Oct 8 19:45:57.943153 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Oct 8 19:45:57.943218 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Oct 8 19:45:57.943283 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Oct 8 19:45:57.943346 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Oct 8 19:45:57.943413 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Oct 8 19:45:57.943531 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 8 19:45:57.943589 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 8 19:45:57.943644 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 8 19:45:57.943712 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Oct 8 19:45:57.943771 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Oct 8 19:45:57.943830 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Oct 8 19:45:57.943904 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Oct 8 19:45:57.943962 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Oct 8 19:45:57.944019 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Oct 8 19:45:57.944083 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Oct 8 19:45:57.944142 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Oct 8 19:45:57.944253 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Oct 8 19:45:57.944331 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Oct 8 19:45:57.944392 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Oct 8 19:45:57.944484 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Oct 8 19:45:57.944566 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Oct 8 19:45:57.944649 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Oct 8 19:45:57.944730 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Oct 8 19:45:57.944800 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Oct 8 19:45:57.944864 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Oct 8 19:45:57.944923 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Oct 8 19:45:57.944992 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Oct 8 19:45:57.945052 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Oct 8 19:45:57.945117 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Oct 8 19:45:57.945189 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Oct 8 19:45:57.945248 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Oct 8 19:45:57.945308 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Oct 8 19:45:57.945374 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Oct 8 19:45:57.947549 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Oct 8 19:45:57.947648 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Oct 8 19:45:57.947666 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 8 19:45:57.947675 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 8 19:45:57.947683 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 8 19:45:57.947691 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 8 19:45:57.947699 kernel: iommu: Default domain type: Translated
Oct 8 19:45:57.947709 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 8 19:45:57.947716 kernel: efivars: Registered efivars operations
Oct 8 19:45:57.947724 kernel: vgaarb: loaded
Oct 8 19:45:57.947732 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 8 19:45:57.947742 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 19:45:57.947750 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 19:45:57.947757 kernel: pnp: PnP ACPI init
Oct 8 19:45:57.947840 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 8 19:45:57.947852 kernel: pnp: PnP ACPI: found 1 devices
Oct 8 19:45:57.947860 kernel: NET: Registered PF_INET protocol family
Oct 8 19:45:57.947868 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 8 19:45:57.947876 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 8 19:45:57.947886 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 19:45:57.947895 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 19:45:57.947902 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 8 19:45:57.947910 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 8 19:45:57.947918 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:45:57.947926 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:45:57.947934 kernel: 
NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 8 19:45:57.948009 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Oct 8 19:45:57.948020 kernel: PCI: CLS 0 bytes, default 64 Oct 8 19:45:57.948030 kernel: kvm [1]: HYP mode not available Oct 8 19:45:57.948038 kernel: Initialise system trusted keyrings Oct 8 19:45:57.948046 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 8 19:45:57.948054 kernel: Key type asymmetric registered Oct 8 19:45:57.948061 kernel: Asymmetric key parser 'x509' registered Oct 8 19:45:57.948069 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 8 19:45:57.948077 kernel: io scheduler mq-deadline registered Oct 8 19:45:57.948084 kernel: io scheduler kyber registered Oct 8 19:45:57.948092 kernel: io scheduler bfq registered Oct 8 19:45:57.948102 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Oct 8 19:45:57.948186 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Oct 8 19:45:57.948259 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Oct 8 19:45:57.948324 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:45:57.948391 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Oct 8 19:45:57.948594 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Oct 8 19:45:57.948663 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:45:57.948734 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Oct 8 19:45:57.948798 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Oct 8 19:45:57.948861 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:45:57.948927 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Oct 8 19:45:57.948991 kernel: pcieport 
0000:00:02.3: AER: enabled with IRQ 53 Oct 8 19:45:57.949054 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:45:57.949122 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Oct 8 19:45:57.949187 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Oct 8 19:45:57.949251 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:45:57.949321 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Oct 8 19:45:57.949385 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Oct 8 19:45:57.949464 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:45:57.949534 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Oct 8 19:45:57.949598 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Oct 8 19:45:57.949661 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:45:57.949730 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Oct 8 19:45:57.949795 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Oct 8 19:45:57.949862 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:45:57.949873 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Oct 8 19:45:57.949945 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Oct 8 19:45:57.950010 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Oct 8 19:45:57.950074 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:45:57.950084 kernel: input: Power Button as 
/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 8 19:45:57.950092 kernel: ACPI: button: Power Button [PWRB] Oct 8 19:45:57.950100 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 8 19:45:57.950173 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002) Oct 8 19:45:57.950248 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Oct 8 19:45:57.950318 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Oct 8 19:45:57.950329 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 8 19:45:57.950337 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Oct 8 19:45:57.950404 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Oct 8 19:45:57.950416 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Oct 8 19:45:57.950447 kernel: thunder_xcv, ver 1.0 Oct 8 19:45:57.950458 kernel: thunder_bgx, ver 1.0 Oct 8 19:45:57.950466 kernel: nicpf, ver 1.0 Oct 8 19:45:57.950473 kernel: nicvf, ver 1.0 Oct 8 19:45:57.950550 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 8 19:45:57.950612 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-08T19:45:57 UTC (1728416757) Oct 8 19:45:57.950622 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 8 19:45:57.950631 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Oct 8 19:45:57.950639 kernel: watchdog: Delayed init of the lockup detector failed: -19 Oct 8 19:45:57.950649 kernel: watchdog: Hard watchdog permanently disabled Oct 8 19:45:57.950657 kernel: NET: Registered PF_INET6 protocol family Oct 8 19:45:57.950665 kernel: Segment Routing with IPv6 Oct 8 19:45:57.950673 kernel: In-situ OAM (IOAM) with IPv6 Oct 8 19:45:57.950680 kernel: NET: Registered PF_PACKET protocol family Oct 8 19:45:57.950688 kernel: Key type dns_resolver registered Oct 8 19:45:57.950695 kernel: registered taskstats version 1 Oct 8 19:45:57.950703 kernel: Loading compiled-in X.509 certificates Oct 8 19:45:57.950711 kernel: Loaded 
X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: e9e638352c282bfddf5aec6da700ad8191939d05' Oct 8 19:45:57.950720 kernel: Key type .fscrypt registered Oct 8 19:45:57.950727 kernel: Key type fscrypt-provisioning registered Oct 8 19:45:57.950735 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 8 19:45:57.950743 kernel: ima: Allocated hash algorithm: sha1 Oct 8 19:45:57.950750 kernel: ima: No architecture policies found Oct 8 19:45:57.950758 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 8 19:45:57.950766 kernel: clk: Disabling unused clocks Oct 8 19:45:57.950773 kernel: Freeing unused kernel memory: 39360K Oct 8 19:45:57.950781 kernel: Run /init as init process Oct 8 19:45:57.950790 kernel: with arguments: Oct 8 19:45:57.950798 kernel: /init Oct 8 19:45:57.950806 kernel: with environment: Oct 8 19:45:57.950813 kernel: HOME=/ Oct 8 19:45:57.950821 kernel: TERM=linux Oct 8 19:45:57.950828 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 8 19:45:57.950838 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 19:45:57.950848 systemd[1]: Detected virtualization kvm. Oct 8 19:45:57.950857 systemd[1]: Detected architecture arm64. Oct 8 19:45:57.950865 systemd[1]: Running in initrd. Oct 8 19:45:57.950873 systemd[1]: No hostname configured, using default hostname. Oct 8 19:45:57.950881 systemd[1]: Hostname set to . Oct 8 19:45:57.950890 systemd[1]: Initializing machine ID from VM UUID. Oct 8 19:45:57.950898 systemd[1]: Queued start job for default target initrd.target. Oct 8 19:45:57.950906 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Oct 8 19:45:57.950915 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:45:57.950925 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 19:45:57.950933 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:45:57.950941 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 19:45:57.950949 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 19:45:57.950967 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 19:45:57.950977 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 19:45:57.950987 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:45:57.950995 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:45:57.951003 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:45:57.951011 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:45:57.951019 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:45:57.951028 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:45:57.951036 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:45:57.951044 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:45:57.951054 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 19:45:57.951064 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 19:45:57.951072 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:45:57.951080 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:45:57.951089 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:45:57.951097 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:45:57.951105 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 19:45:57.951113 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:45:57.951121 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 19:45:57.951129 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 19:45:57.951139 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:45:57.951148 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:45:57.951156 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:45:57.951164 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 19:45:57.951172 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:45:57.951201 systemd-journald[237]: Collecting audit messages is disabled.
Oct 8 19:45:57.951223 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 19:45:57.951233 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:45:57.951242 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 19:45:57.951251 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:45:57.951259 kernel: Bridge firewalling registered
Oct 8 19:45:57.951267 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:45:57.951277 systemd-journald[237]: Journal started
Oct 8 19:45:57.951295 systemd-journald[237]: Runtime Journal (/run/log/journal/4300c62899274c1da01ac73f4fe02a5b) is 8.0M, max 76.5M, 68.5M free.
Oct 8 19:45:57.919051 systemd-modules-load[238]: Inserted module 'overlay'
Oct 8 19:45:57.952316 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:45:57.945082 systemd-modules-load[238]: Inserted module 'br_netfilter'
Oct 8 19:45:57.953973 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:45:57.954955 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:45:57.970691 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:45:57.972851 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:45:57.976591 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 19:45:57.981053 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:45:57.997732 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:45:57.998844 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:45:58.010642 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 19:45:58.012053 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:45:58.017655 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:45:58.024444 dracut-cmdline[269]: dracut-dracut-053
Oct 8 19:45:58.027710 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=f7968382bc5b46f9b6104a9f012cfba991c8ea306771e716a099618547de81d3
Oct 8 19:45:58.061020 systemd-resolved[276]: Positive Trust Anchors:
Oct 8 19:45:58.061036 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:45:58.061075 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 19:45:58.069170 systemd-resolved[276]: Defaulting to hostname 'linux'.
Oct 8 19:45:58.072044 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:45:58.074291 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:45:58.126475 kernel: SCSI subsystem initialized
Oct 8 19:45:58.131462 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 19:45:58.139478 kernel: iscsi: registered transport (tcp)
Oct 8 19:45:58.153565 kernel: iscsi: registered transport (qla4xxx)
Oct 8 19:45:58.153738 kernel: QLogic iSCSI HBA Driver
Oct 8 19:45:58.199484 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:45:58.208878 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 19:45:58.228603 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 19:45:58.228688 kernel: device-mapper: uevent: version 1.0.3
Oct 8 19:45:58.231766 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 19:45:58.285524 kernel: raid6: neonx8 gen() 15767 MB/s
Oct 8 19:45:58.302479 kernel: raid6: neonx4 gen() 15673 MB/s
Oct 8 19:45:58.319467 kernel: raid6: neonx2 gen() 13215 MB/s
Oct 8 19:45:58.336488 kernel: raid6: neonx1 gen() 10476 MB/s
Oct 8 19:45:58.353464 kernel: raid6: int64x8 gen() 6968 MB/s
Oct 8 19:45:58.370474 kernel: raid6: int64x4 gen() 7353 MB/s
Oct 8 19:45:58.387459 kernel: raid6: int64x2 gen() 6136 MB/s
Oct 8 19:45:58.404461 kernel: raid6: int64x1 gen() 5061 MB/s
Oct 8 19:45:58.404504 kernel: raid6: using algorithm neonx8 gen() 15767 MB/s
Oct 8 19:45:58.421461 kernel: raid6: .... xor() 11929 MB/s, rmw enabled
Oct 8 19:45:58.421506 kernel: raid6: using neon recovery algorithm
Oct 8 19:45:58.426638 kernel: xor: measuring software checksum speed
Oct 8 19:45:58.426706 kernel: 8regs : 19783 MB/sec
Oct 8 19:45:58.426734 kernel: 32regs : 19641 MB/sec
Oct 8 19:45:58.426761 kernel: arm64_neon : 26441 MB/sec
Oct 8 19:45:58.426798 kernel: xor: using function: arm64_neon (26441 MB/sec)
Oct 8 19:45:58.481531 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 19:45:58.497252 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:45:58.509741 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:45:58.530940 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Oct 8 19:45:58.534381 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:45:58.549877 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 19:45:58.561652 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Oct 8 19:45:58.600699 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:45:58.607648 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:45:58.655131 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:45:58.663680 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 8 19:45:58.687550 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:45:58.691648 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:45:58.692258 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:45:58.694090 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:45:58.704105 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 8 19:45:58.721521 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:45:58.764762 kernel: scsi host0: Virtio SCSI HBA
Oct 8 19:45:58.767218 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 8 19:45:58.767300 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Oct 8 19:45:58.779475 kernel: ACPI: bus type USB registered
Oct 8 19:45:58.779529 kernel: usbcore: registered new interface driver usbfs
Oct 8 19:45:58.780480 kernel: usbcore: registered new interface driver hub
Oct 8 19:45:58.780508 kernel: usbcore: registered new device driver usb
Oct 8 19:45:58.792814 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:45:58.792939 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:45:58.841133 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:45:58.842113 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:45:58.842337 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:45:58.843666 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:45:58.853877 kernel: sr 0:0:0:0: Power-on or device reset occurred
Oct 8 19:45:58.856748 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:45:58.862493 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Oct 8 19:45:58.862708 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Oct 8 19:45:58.862802 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Oct 8 19:45:58.863727 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Oct 8 19:45:58.863860 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Oct 8 19:45:58.865020 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Oct 8 19:45:58.865177 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Oct 8 19:45:58.866583 kernel: hub 1-0:1.0: USB hub found
Oct 8 19:45:58.866767 kernel: hub 1-0:1.0: 4 ports detected
Oct 8 19:45:58.866860 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Oct 8 19:45:58.867454 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 8 19:45:58.868500 kernel: hub 2-0:1.0: USB hub found
Oct 8 19:45:58.868653 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 8 19:45:58.868746 kernel: hub 2-0:1.0: 4 ports detected
Oct 8 19:45:58.884364 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:45:58.893761 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:45:58.908838 kernel: sd 0:0:0:1: Power-on or device reset occurred
Oct 8 19:45:58.909948 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Oct 8 19:45:58.910048 kernel: sd 0:0:0:1: [sda] Write Protect is off
Oct 8 19:45:58.910130 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Oct 8 19:45:58.910208 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Oct 8 19:45:58.916457 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 8 19:45:58.916515 kernel: GPT:17805311 != 80003071
Oct 8 19:45:58.916526 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 8 19:45:58.916543 kernel: GPT:17805311 != 80003071
Oct 8 19:45:58.916553 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 8 19:45:58.919506 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 8 19:45:58.919559 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Oct 8 19:45:58.921142 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:45:58.959032 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (502)
Oct 8 19:45:58.963517 kernel: BTRFS: device fsid ad786f33-c7c5-429e-95f9-4ea457bd3916 devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (526)
Oct 8 19:45:58.973978 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Oct 8 19:45:58.980083 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Oct 8 19:45:58.986077 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Oct 8 19:45:58.995537 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Oct 8 19:45:58.996149 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Oct 8 19:45:59.001602 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 8 19:45:59.025166 disk-uuid[575]: Primary Header is updated.
Oct 8 19:45:59.025166 disk-uuid[575]: Secondary Entries is updated.
Oct 8 19:45:59.025166 disk-uuid[575]: Secondary Header is updated.
Oct 8 19:45:59.031446 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 8 19:45:59.103463 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Oct 8 19:45:59.243662 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Oct 8 19:45:59.243741 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Oct 8 19:45:59.244094 kernel: usbcore: registered new interface driver usbhid
Oct 8 19:45:59.244120 kernel: usbhid: USB HID core driver
Oct 8 19:45:59.346538 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Oct 8 19:45:59.477468 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Oct 8 19:45:59.530615 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Oct 8 19:46:00.052299 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 8 19:46:00.052517 disk-uuid[577]: The operation has completed successfully.
Oct 8 19:46:00.090468 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 8 19:46:00.090556 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 8 19:46:00.118783 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 8 19:46:00.123854 sh[595]: Success
Oct 8 19:46:00.137592 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Oct 8 19:46:00.186886 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 8 19:46:00.195362 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 8 19:46:00.197456 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 8 19:46:00.229923 kernel: BTRFS info (device dm-0): first mount of filesystem ad786f33-c7c5-429e-95f9-4ea457bd3916
Oct 8 19:46:00.230007 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:46:00.230039 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 8 19:46:00.231458 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 8 19:46:00.231513 kernel: BTRFS info (device dm-0): using free space tree
Oct 8 19:46:00.236443 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Oct 8 19:46:00.239361 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 8 19:46:00.240110 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 8 19:46:00.252758 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 8 19:46:00.255680 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 8 19:46:00.273089 kernel: BTRFS info (device sda6): first mount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687
Oct 8 19:46:00.273155 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:46:00.273171 kernel: BTRFS info (device sda6): using free space tree
Oct 8 19:46:00.277455 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 8 19:46:00.277520 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 8 19:46:00.288378 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 8 19:46:00.288915 kernel: BTRFS info (device sda6): last unmount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687
Oct 8 19:46:00.294283 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 8 19:46:00.301713 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 8 19:46:00.381957 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:46:00.391695 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:46:00.425103 systemd-networkd[781]: lo: Link UP
Oct 8 19:46:00.425115 systemd-networkd[781]: lo: Gained carrier
Oct 8 19:46:00.426741 systemd-networkd[781]: Enumeration completed
Oct 8 19:46:00.426932 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:46:00.428071 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:46:00.428075 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:46:00.428964 systemd[1]: Reached target network.target - Network.
Oct 8 19:46:00.429911 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:46:00.429914 systemd-networkd[781]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:46:00.431058 systemd-networkd[781]: eth0: Link UP
Oct 8 19:46:00.431061 systemd-networkd[781]: eth0: Gained carrier
Oct 8 19:46:00.431069 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:46:00.445862 systemd-networkd[781]: eth1: Link UP
Oct 8 19:46:00.445869 systemd-networkd[781]: eth1: Gained carrier
Oct 8 19:46:00.445886 systemd-networkd[781]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:46:00.450567 ignition[689]: Ignition 2.19.0
Oct 8 19:46:00.450573 ignition[689]: Stage: fetch-offline
Oct 8 19:46:00.450610 ignition[689]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:46:00.450623 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 19:46:00.452307 ignition[689]: parsed url from cmdline: ""
Oct 8 19:46:00.452311 ignition[689]: no config URL provided
Oct 8 19:46:00.452318 ignition[689]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 19:46:00.452332 ignition[689]: no config at "/usr/lib/ignition/user.ign"
Oct 8 19:46:00.452338 ignition[689]: failed to fetch config: resource requires networking
Oct 8 19:46:00.455313 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:46:00.452603 ignition[689]: Ignition finished successfully
Oct 8 19:46:00.461667 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 8 19:46:00.471508 systemd-networkd[781]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:46:00.474663 ignition[785]: Ignition 2.19.0
Oct 8 19:46:00.474677 ignition[785]: Stage: fetch
Oct 8 19:46:00.474848 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:46:00.474858 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 19:46:00.474958 ignition[785]: parsed url from cmdline: ""
Oct 8 19:46:00.474962 ignition[785]: no config URL provided
Oct 8 19:46:00.474966 ignition[785]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 19:46:00.474974 ignition[785]: no config at "/usr/lib/ignition/user.ign"
Oct 8 19:46:00.474998 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Oct 8 19:46:00.475534 ignition[785]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Oct 8 19:46:00.573543 systemd-networkd[781]: eth0: DHCPv4 address 188.245.175.188/32, gateway 172.31.1.1 acquired from 172.31.1.1
Oct 8 19:46:00.675983 ignition[785]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Oct 8 19:46:00.682005 ignition[785]: GET result: OK
Oct 8 19:46:00.682226 ignition[785]: parsing config with SHA512: 3655e6d5e4105b2acebb38842928f11cd969ba7197b168846d95a03ef83ffa3768e08463bdb0f83cacae73575ff4a08cb8dbc82bd059d45ade14ebe289357157
Oct 8 19:46:00.690470 unknown[785]: fetched base config from "system"
Oct 8 19:46:00.690483 unknown[785]: fetched base config from "system"
Oct 8 19:46:00.691524 ignition[785]: fetch: fetch complete
Oct 8 19:46:00.690489 unknown[785]: fetched user config from "hetzner"
Oct 8 19:46:00.691530 ignition[785]: fetch: fetch passed
Oct 8 19:46:00.691591 ignition[785]: Ignition finished successfully
Oct 8 19:46:00.694504 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 8 19:46:00.707706 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 8 19:46:00.722444 ignition[792]: Ignition 2.19.0
Oct 8 19:46:00.723052 ignition[792]: Stage: kargs
Oct 8 19:46:00.723269 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:46:00.723280 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 19:46:00.724355 ignition[792]: kargs: kargs passed
Oct 8 19:46:00.724420 ignition[792]: Ignition finished successfully
Oct 8 19:46:00.727208 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 19:46:00.733620 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 19:46:00.747874 ignition[798]: Ignition 2.19.0
Oct 8 19:46:00.747888 ignition[798]: Stage: disks
Oct 8 19:46:00.748062 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:46:00.748071 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 19:46:00.748978 ignition[798]: disks: disks passed
Oct 8 19:46:00.749032 ignition[798]: Ignition finished successfully
Oct 8 19:46:00.751029 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 8 19:46:00.751886 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 19:46:00.752748 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 19:46:00.753806 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:46:00.754747 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:46:00.755760 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:46:00.761695 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 8 19:46:00.778019 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Oct 8 19:46:00.783533 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 8 19:46:00.788703 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 8 19:46:00.833746 kernel: EXT4-fs (sda9): mounted filesystem 833c86f3-93dd-4526-bb43-c7809dac8e51 r/w with ordered data mode. Quota mode: none.
Oct 8 19:46:00.834725 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 8 19:46:00.836183 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:46:00.842603 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:46:00.846584 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 8 19:46:00.848631 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 8 19:46:00.851538 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 8 19:46:00.851591 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:46:00.859999 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 8 19:46:00.863879 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (814)
Oct 8 19:46:00.869478 kernel: BTRFS info (device sda6): first mount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687
Oct 8 19:46:00.869539 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:46:00.869551 kernel: BTRFS info (device sda6): using free space tree
Oct 8 19:46:00.870958 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 8 19:46:00.876583 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 8 19:46:00.876649 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 8 19:46:00.882934 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:46:00.925830 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Oct 8 19:46:00.930352 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Oct 8 19:46:00.937477 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Oct 8 19:46:00.941171 coreos-metadata[816]: Oct 08 19:46:00.941 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Oct 8 19:46:00.942765 coreos-metadata[816]: Oct 08 19:46:00.942 INFO Fetch successful
Oct 8 19:46:00.942765 coreos-metadata[816]: Oct 08 19:46:00.942 INFO wrote hostname ci-4081-1-0-2-870ec424ae to /sysroot/etc/hostname
Oct 8 19:46:00.945412 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 8 19:46:00.945921 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 8 19:46:01.057997 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 8 19:46:01.063566 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 8 19:46:01.067628 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 8 19:46:01.076495 kernel: BTRFS info (device sda6): last unmount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687
Oct 8 19:46:01.104989 ignition[931]: INFO : Ignition 2.19.0
Oct 8 19:46:01.104989 ignition[931]: INFO : Stage: mount
Oct 8 19:46:01.107999 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:46:01.107999 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 19:46:01.107999 ignition[931]: INFO : mount: mount passed
Oct 8 19:46:01.108509 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 8 19:46:01.112028 ignition[931]: INFO : Ignition finished successfully
Oct 8 19:46:01.109412 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 8 19:46:01.115604 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 8 19:46:01.230840 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 8 19:46:01.239677 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:46:01.252613 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (942)
Oct 8 19:46:01.253947 kernel: BTRFS info (device sda6): first mount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687
Oct 8 19:46:01.253992 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:46:01.254013 kernel: BTRFS info (device sda6): using free space tree
Oct 8 19:46:01.257470 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 8 19:46:01.257525 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 8 19:46:01.259142 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:46:01.285237 ignition[959]: INFO : Ignition 2.19.0
Oct 8 19:46:01.285237 ignition[959]: INFO : Stage: files
Oct 8 19:46:01.286646 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:46:01.286646 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 19:46:01.288449 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Oct 8 19:46:01.289263 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 8 19:46:01.289263 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 8 19:46:01.292589 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 8 19:46:01.293534 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 8 19:46:01.293534 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 8 19:46:01.293118 unknown[959]: wrote ssh authorized keys file for user: core
Oct 8 19:46:01.296034 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 19:46:01.296034 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Oct 8 19:46:01.361639 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 8 19:46:01.477046 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 19:46:01.478306 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 8 19:46:01.478306 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 8 19:46:01.478306 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:46:01.478306 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:46:01.478306 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:46:01.478306 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:46:01.478306 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:46:01.478306 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:46:01.478306 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:46:01.478306 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:46:01.478306 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 8 19:46:01.478306 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 8 19:46:01.478306 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 8 19:46:01.489607 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Oct 8 19:46:01.796832 systemd-networkd[781]: eth0: Gained IPv6LL
Oct 8 19:46:02.057490 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 8 19:46:02.377783 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 8 19:46:02.377783 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 8 19:46:02.381756 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:46:02.381756 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:46:02.381756 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 8 19:46:02.381756 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 8 19:46:02.381756 ignition[959]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Oct 8 19:46:02.381756 ignition[959]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Oct 8 19:46:02.381756 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 8 19:46:02.381756 ignition[959]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Oct 8 19:46:02.381756 ignition[959]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Oct 8 19:46:02.381756 ignition[959]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:46:02.381756 ignition[959]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:46:02.381756 ignition[959]: INFO : files: files passed
Oct 8 19:46:02.381756 ignition[959]: INFO : Ignition finished successfully
Oct 8 19:46:02.381601 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 8 19:46:02.389734 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 8 19:46:02.394650 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 8 19:46:02.398000 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 8 19:46:02.398096 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 8 19:46:02.408219 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:46:02.408219 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:46:02.411481 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:46:02.414515 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:46:02.415470 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 8 19:46:02.423813 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 8 19:46:02.437651 systemd-networkd[781]: eth1: Gained IPv6LL
Oct 8 19:46:02.466816 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 8 19:46:02.466968 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 8 19:46:02.468806 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 8 19:46:02.470255 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 8 19:46:02.471528 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 8 19:46:02.477938 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 8 19:46:02.492492 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:46:02.499635 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 8 19:46:02.511742 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:46:02.513396 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:46:02.514069 systemd[1]: Stopped target timers.target - Timer Units.
Oct 8 19:46:02.515338 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 8 19:46:02.515535 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:46:02.517354 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 8 19:46:02.518818 systemd[1]: Stopped target basic.target - Basic System.
Oct 8 19:46:02.520026 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 8 19:46:02.521080 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:46:02.522266 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 8 19:46:02.523337 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 8 19:46:02.524552 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:46:02.525870 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 8 19:46:02.526999 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 8 19:46:02.527991 systemd[1]: Stopped target swap.target - Swaps.
Oct 8 19:46:02.528893 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 8 19:46:02.529108 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:46:02.530935 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:46:02.531762 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:46:02.532882 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 8 19:46:02.533944 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:46:02.534703 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 8 19:46:02.534880 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:46:02.536371 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 8 19:46:02.536553 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:46:02.537556 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 8 19:46:02.537692 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 8 19:46:02.538507 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 8 19:46:02.538640 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 8 19:46:02.551327 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 8 19:46:02.551882 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 8 19:46:02.552069 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:46:02.554112 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 8 19:46:02.556260 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 8 19:46:02.556842 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:46:02.560209 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 8 19:46:02.560715 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:46:02.572745 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 8 19:46:02.573632 ignition[1012]: INFO : Ignition 2.19.0
Oct 8 19:46:02.575029 ignition[1012]: INFO : Stage: umount
Oct 8 19:46:02.575029 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:46:02.575029 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 19:46:02.575029 ignition[1012]: INFO : umount: umount passed
Oct 8 19:46:02.574362 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 8 19:46:02.581488 ignition[1012]: INFO : Ignition finished successfully
Oct 8 19:46:02.578078 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 8 19:46:02.578190 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 8 19:46:02.580724 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 8 19:46:02.580774 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 8 19:46:02.581975 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 8 19:46:02.582013 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 8 19:46:02.582768 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 8 19:46:02.582801 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 8 19:46:02.583611 systemd[1]: Stopped target network.target - Network.
Oct 8 19:46:02.584366 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 8 19:46:02.584416 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:46:02.585310 systemd[1]: Stopped target paths.target - Path Units.
Oct 8 19:46:02.587156 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 8 19:46:02.594397 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:46:02.595023 systemd[1]: Stopped target slices.target - Slice Units.
Oct 8 19:46:02.595930 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 8 19:46:02.596977 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 8 19:46:02.597024 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:46:02.597761 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 8 19:46:02.597793 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:46:02.598829 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 8 19:46:02.598878 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 8 19:46:02.600200 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 8 19:46:02.600240 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 8 19:46:02.602049 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 8 19:46:02.603112 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 8 19:46:02.604817 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 8 19:46:02.615834 systemd-networkd[781]: eth0: DHCPv6 lease lost
Oct 8 19:46:02.616901 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 8 19:46:02.617017 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 8 19:46:02.619492 systemd-networkd[781]: eth1: DHCPv6 lease lost
Oct 8 19:46:02.621863 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 8 19:46:02.622004 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 19:46:02.625861 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 8 19:46:02.625938 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:46:02.633549 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 8 19:46:02.634020 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 8 19:46:02.634086 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:46:02.635041 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 19:46:02.635079 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:46:02.636019 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 8 19:46:02.636057 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:46:02.636764 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 8 19:46:02.636806 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:46:02.638164 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:46:02.646721 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 19:46:02.646841 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 19:46:02.653964 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 19:46:02.654088 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 19:46:02.655242 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 19:46:02.656464 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:46:02.657690 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 19:46:02.657764 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:46:02.658330 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 19:46:02.658358 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:46:02.659186 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 19:46:02.659230 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:46:02.660655 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 19:46:02.660707 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:46:02.662648 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:46:02.662707 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:46:02.674694 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 19:46:02.675357 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 19:46:02.675443 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:46:02.680586 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:46:02.680668 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:46:02.685088 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 19:46:02.685196 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 8 19:46:02.689035 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 19:46:02.689145 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 19:46:02.690595 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 19:46:02.696622 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 19:46:02.707229 systemd[1]: Switching root.
Oct 8 19:46:02.748894 systemd-journald[237]: Journal stopped
Oct 8 19:46:03.566667 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Oct 8 19:46:03.566741 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 19:46:03.566754 kernel: SELinux: policy capability open_perms=1
Oct 8 19:46:03.566768 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 19:46:03.566778 kernel: SELinux: policy capability always_check_network=0
Oct 8 19:46:03.566790 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 19:46:03.566801 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 19:46:03.566812 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 19:46:03.566822 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 19:46:03.566831 kernel: audit: type=1403 audit(1728416762.871:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 19:46:03.566842 systemd[1]: Successfully loaded SELinux policy in 34.173ms.
Oct 8 19:46:03.566862 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.896ms.
Oct 8 19:46:03.566873 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:46:03.566884 systemd[1]: Detected virtualization kvm.
Oct 8 19:46:03.566897 systemd[1]: Detected architecture arm64.
Oct 8 19:46:03.566907 systemd[1]: Detected first boot.
Oct 8 19:46:03.566917 systemd[1]: Hostname set to .
Oct 8 19:46:03.566928 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:46:03.566938 zram_generator::config[1054]: No configuration found.
Oct 8 19:46:03.566952 systemd[1]: Populated /etc with preset unit settings.
Oct 8 19:46:03.566962 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 8 19:46:03.566974 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 8 19:46:03.566986 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:46:03.566997 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 19:46:03.567008 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 19:46:03.567018 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 19:46:03.567029 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 19:46:03.567039 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 19:46:03.567050 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 19:46:03.567061 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 19:46:03.567073 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 19:46:03.567084 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:46:03.567095 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:46:03.567106 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 19:46:03.567117 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 19:46:03.567128 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 19:46:03.567141 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:46:03.567152 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Oct 8 19:46:03.567162 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:46:03.567174 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 8 19:46:03.567185 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 8 19:46:03.567195 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:46:03.567206 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 19:46:03.567216 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:46:03.567230 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:46:03.567243 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:46:03.567254 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:46:03.567265 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 19:46:03.567275 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 19:46:03.567286 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:46:03.567297 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:46:03.567308 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:46:03.567319 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 19:46:03.567329 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 19:46:03.567341 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 19:46:03.567351 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 19:46:03.567361 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 19:46:03.567373 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 19:46:03.567384 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 19:46:03.567395 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 19:46:03.567406 systemd[1]: Reached target machines.target - Containers.
Oct 8 19:46:03.567416 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 19:46:03.567445 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:46:03.567459 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:46:03.567470 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 19:46:03.567480 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:46:03.567494 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:46:03.567506 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:46:03.567518 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 19:46:03.567529 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:46:03.567540 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 19:46:03.567551 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 8 19:46:03.567562 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 8 19:46:03.567572 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 8 19:46:03.567583 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 8 19:46:03.567593 kernel: fuse: init (API version 7.39)
Oct 8 19:46:03.567603 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:46:03.567619 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:46:03.567631 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 19:46:03.567642 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 19:46:03.567652 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:46:03.567663 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 8 19:46:03.567673 systemd[1]: Stopped verity-setup.service.
Oct 8 19:46:03.567683 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 19:46:03.567693 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 19:46:03.567735 systemd-journald[1117]: Collecting audit messages is disabled.
Oct 8 19:46:03.567763 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 19:46:03.567774 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 19:46:03.567785 systemd-journald[1117]: Journal started
Oct 8 19:46:03.567810 systemd-journald[1117]: Runtime Journal (/run/log/journal/4300c62899274c1da01ac73f4fe02a5b) is 8.0M, max 76.5M, 68.5M free.
Oct 8 19:46:03.323660 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 19:46:03.344720 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Oct 8 19:46:03.345268 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 8 19:46:03.571510 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:46:03.571540 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 19:46:03.572241 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 19:46:03.573494 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:46:03.574350 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 19:46:03.575084 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 19:46:03.584229 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:46:03.584437 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:46:03.586635 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:46:03.587547 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:46:03.588622 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 19:46:03.588771 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 19:46:03.594550 kernel: ACPI: bus type drm_connector registered
Oct 8 19:46:03.595272 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:46:03.603882 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:46:03.604043 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:46:03.609820 kernel: loop: module loaded
Oct 8 19:46:03.611149 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:46:03.611687 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:46:03.621601 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 19:46:03.629568 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 19:46:03.631296 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:46:03.636089 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:46:03.639819 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 19:46:03.641834 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 19:46:03.642698 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 19:46:03.647701 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 19:46:03.648457 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 19:46:03.652062 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 19:46:03.655715 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 19:46:03.655757 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:46:03.657906 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 19:46:03.662700 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 19:46:03.669689 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 19:46:03.670317 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:46:03.673641 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 19:46:03.679933 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 19:46:03.681351 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:46:03.683668 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 19:46:03.689639 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 19:46:03.703675 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 19:46:03.708464 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:46:03.709451 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:46:03.710380 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 19:46:03.717817 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 19:46:03.720562 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 19:46:03.723192 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 19:46:03.729798 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 19:46:03.742540 systemd-journald[1117]: Time spent on flushing to /var/log/journal/4300c62899274c1da01ac73f4fe02a5b is 51.903ms for 1129 entries.
Oct 8 19:46:03.742540 systemd-journald[1117]: System Journal (/var/log/journal/4300c62899274c1da01ac73f4fe02a5b) is 8.0M, max 584.8M, 576.8M free.
Oct 8 19:46:03.804682 systemd-journald[1117]: Received client request to flush runtime journal.
Oct 8 19:46:03.804724 kernel: loop0: detected capacity change from 0 to 189592
Oct 8 19:46:03.804740 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 19:46:03.759028 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 8 19:46:03.807524 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 19:46:03.813494 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 19:46:03.819591 kernel: loop1: detected capacity change from 0 to 114328
Oct 8 19:46:03.823289 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:46:03.826275 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 19:46:03.828298 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 19:46:03.861456 kernel: loop2: detected capacity change from 0 to 8
Oct 8 19:46:03.864195 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Oct 8 19:46:03.864583 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Oct 8 19:46:03.870225 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:46:03.887464 kernel: loop3: detected capacity change from 0 to 114432
Oct 8 19:46:03.931480 kernel: loop4: detected capacity change from 0 to 189592
Oct 8 19:46:03.960648 kernel: loop5: detected capacity change from 0 to 114328
Oct 8 19:46:03.983546 kernel: loop6: detected capacity change from 0 to 8
Oct 8 19:46:03.985451 kernel: loop7: detected capacity change from 0 to 114432
Oct 8 19:46:03.995867 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Oct 8 19:46:03.997892 (sd-merge)[1194]: Merged extensions into '/usr'.
Oct 8 19:46:04.005670 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 19:46:04.005689 systemd[1]: Reloading...
Oct 8 19:46:04.116535 zram_generator::config[1217]: No configuration found.
Oct 8 19:46:04.185668 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 19:46:04.266492 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:46:04.312341 systemd[1]: Reloading finished in 305 ms.
Oct 8 19:46:04.339473 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 19:46:04.342033 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 19:46:04.354133 systemd[1]: Starting ensure-sysext.service...
Oct 8 19:46:04.369663 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 19:46:04.372335 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
Oct 8 19:46:04.372351 systemd[1]: Reloading...
Oct 8 19:46:04.405189 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 19:46:04.407595 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 19:46:04.408384 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 19:46:04.408651 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Oct 8 19:46:04.408697 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Oct 8 19:46:04.411930 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:46:04.411947 systemd-tmpfiles[1258]: Skipping /boot
Oct 8 19:46:04.421672 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:46:04.421688 systemd-tmpfiles[1258]: Skipping /boot
Oct 8 19:46:04.443743 zram_generator::config[1280]: No configuration found.
Oct 8 19:46:04.562598 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:46:04.607391 systemd[1]: Reloading finished in 234 ms.
Oct 8 19:46:04.630104 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 19:46:04.631190 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:46:04.650125 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:46:04.654655 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 19:46:04.657626 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 19:46:04.662686 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:46:04.667265 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:46:04.670642 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 19:46:04.677822 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:46:04.681791 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:46:04.685729 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:46:04.690739 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:46:04.692603 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:46:04.694280 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:46:04.694447 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:46:04.698135 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:46:04.706723 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:46:04.707752 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:46:04.721557 systemd[1]: Finished ensure-sysext.service.
Oct 8 19:46:04.723713 systemd-udevd[1329]: Using default interface naming scheme 'v255'.
Oct 8 19:46:04.737386 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 8 19:46:04.745711 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:46:04.746542 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:46:04.755066 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 19:46:04.755901 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:46:04.756055 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:46:04.764071 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:46:04.768963 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:46:04.769990 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:46:04.795530 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:46:04.796244 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:46:04.796953 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:46:04.797732 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:46:04.807776 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 19:46:04.817845 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:46:04.825811 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 19:46:04.842645 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 19:46:04.843512 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 19:46:04.844992 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 19:46:04.865677 augenrules[1381]: No rules
Oct 8 19:46:04.869500 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:46:04.874737 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 19:46:04.888707 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 19:46:04.907608 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Oct 8 19:46:04.937466 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1352)
Oct 8 19:46:04.940450 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1352)
Oct 8 19:46:05.031907 systemd-networkd[1364]: lo: Link UP
Oct 8 19:46:05.031917 systemd-networkd[1364]: lo: Gained carrier
Oct 8 19:46:05.033672 systemd-networkd[1364]: Enumeration completed
Oct 8 19:46:05.033782 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:46:05.039622 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 19:46:05.040587 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:46:05.040596 systemd-networkd[1364]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:46:05.042499 systemd-networkd[1364]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:46:05.042510 systemd-networkd[1364]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:46:05.043345 systemd-networkd[1364]: eth0: Link UP
Oct 8 19:46:05.043355 systemd-networkd[1364]: eth0: Gained carrier
Oct 8 19:46:05.043371 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:46:05.052452 systemd-networkd[1364]: eth1: Link UP
Oct 8 19:46:05.052461 systemd-networkd[1364]: eth1: Gained carrier
Oct 8 19:46:05.052483 systemd-networkd[1364]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:46:05.064528 systemd-resolved[1328]: Positive Trust Anchors:
Oct 8 19:46:05.064547 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:46:05.064583 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 19:46:05.070056 systemd-resolved[1328]: Using system hostname 'ci-4081-1-0-2-870ec424ae'.
Oct 8 19:46:05.079653 systemd-networkd[1364]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:46:05.080697 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:46:05.082288 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 8 19:46:05.085066 systemd[1]: Reached target network.target - Network.
Oct 8 19:46:05.086722 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:46:05.088552 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 19:46:05.088616 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection.
Oct 8 19:46:05.091163 systemd-networkd[1364]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:46:05.115091 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:46:05.141474 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1352)
Oct 8 19:46:05.155570 kernel: mousedev: PS/2 mouse device common for all mice
Oct 8 19:46:05.169599 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Oct 8 19:46:05.170230 systemd-networkd[1364]: eth0: DHCPv4 address 188.245.175.188/32, gateway 172.31.1.1 acquired from 172.31.1.1
Oct 8 19:46:05.172388 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection.
Oct 8 19:46:05.174676 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection.
Oct 8 19:46:05.176272 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 19:46:05.214992 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 19:46:05.223743 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Oct 8 19:46:05.223839 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 8 19:46:05.223854 kernel: [drm] features: -context_init
Oct 8 19:46:05.228008 kernel: [drm] number of scanouts: 1
Oct 8 19:46:05.228071 kernel: [drm] number of cap sets: 0
Oct 8 19:46:05.237610 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Oct 8 19:46:05.257749 kernel: Console: switching to colour frame buffer device 160x50
Oct 8 19:46:05.264170 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 8 19:46:05.262078 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:46:05.277570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:46:05.278576 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:46:05.286813 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:46:05.358322 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:46:05.416018 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 19:46:05.432755 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 19:46:05.451580 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:46:05.480377 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 19:46:05.482833 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:46:05.484150 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:46:05.485618 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 19:46:05.486929 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 19:46:05.488717 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 19:46:05.489354 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 19:46:05.490108 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 19:46:05.490738 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 19:46:05.490769 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:46:05.491201 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:46:05.495585 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 8 19:46:05.498279 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 8 19:46:05.507031 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 8 19:46:05.509362 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 19:46:05.510996 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 8 19:46:05.511850 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:46:05.512491 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:46:05.513119 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:46:05.513149 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:46:05.514574 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 8 19:46:05.519691 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 8 19:46:05.525717 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 8 19:46:05.530380 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 8 19:46:05.534068 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 8 19:46:05.535582 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 8 19:46:05.539689 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 8 19:46:05.542918 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:46:05.542812 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 8 19:46:05.544814 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 8 19:46:05.550706 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 8 19:46:05.554884 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 8 19:46:05.557310 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 8 19:46:05.557892 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 8 19:46:05.576199 systemd[1]: Starting update-engine.service - Update Engine...
Oct 8 19:46:05.577795 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 8 19:46:05.631165 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 8 19:46:05.632179 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 8 19:46:05.633493 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 8 19:46:05.640453 jq[1439]: false
Oct 8 19:46:05.643805 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 8 19:46:05.648308 jq[1449]: true
Oct 8 19:46:05.647551 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 8 19:46:05.648068 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 8 19:46:05.657021 update_engine[1448]: I20241008 19:46:05.654619 1448 main.cc:92] Flatcar Update Engine starting
Oct 8 19:46:05.664926 systemd[1]: motdgen.service: Deactivated successfully.
Oct 8 19:46:05.665118 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 8 19:46:05.682727 coreos-metadata[1437]: Oct 08 19:46:05.682 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Oct 8 19:46:05.686087 extend-filesystems[1440]: Found loop4
Oct 8 19:46:05.686087 extend-filesystems[1440]: Found loop5
Oct 8 19:46:05.686087 extend-filesystems[1440]: Found loop6
Oct 8 19:46:05.686087 extend-filesystems[1440]: Found loop7
Oct 8 19:46:05.686087 extend-filesystems[1440]: Found sda
Oct 8 19:46:05.686087 extend-filesystems[1440]: Found sda1
Oct 8 19:46:05.686087 extend-filesystems[1440]: Found sda2
Oct 8 19:46:05.686087 extend-filesystems[1440]: Found sda3
Oct 8 19:46:05.686087 extend-filesystems[1440]: Found usr
Oct 8 19:46:05.686087 extend-filesystems[1440]: Found sda4
Oct 8 19:46:05.686087 extend-filesystems[1440]: Found sda6
Oct 8 19:46:05.686087 extend-filesystems[1440]: Found sda7
Oct 8 19:46:05.686087 extend-filesystems[1440]: Found sda9
Oct 8 19:46:05.686087 extend-filesystems[1440]: Checking size of /dev/sda9
Oct 8 19:46:05.727513 tar[1455]: linux-arm64/helm
Oct 8 19:46:05.700918 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 8 19:46:05.727839 coreos-metadata[1437]: Oct 08 19:46:05.688 INFO Fetch successful
Oct 8 19:46:05.727839 coreos-metadata[1437]: Oct 08 19:46:05.688 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Oct 8 19:46:05.727839 coreos-metadata[1437]: Oct 08 19:46:05.694 INFO Fetch successful
Oct 8 19:46:05.700235 dbus-daemon[1438]: [system] SELinux support is enabled
Oct 8 19:46:05.704203 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 8 19:46:05.731621 jq[1470]: true
Oct 8 19:46:05.704234 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 8 19:46:05.733777 update_engine[1448]: I20241008 19:46:05.731678 1448 update_check_scheduler.cc:74] Next update check in 8m51s
Oct 8 19:46:05.704976 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 8 19:46:05.704995 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 8 19:46:05.729799 systemd[1]: Started update-engine.service - Update Engine.
Oct 8 19:46:05.742804 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 8 19:46:05.748718 extend-filesystems[1440]: Resized partition /dev/sda9
Oct 8 19:46:05.777565 extend-filesystems[1485]: resize2fs 1.47.1 (20-May-2024)
Oct 8 19:46:05.787456 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Oct 8 19:46:05.824909 systemd-logind[1445]: New seat seat0.
Oct 8 19:46:05.829270 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 8 19:46:05.829296 systemd-logind[1445]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Oct 8 19:46:05.830229 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 8 19:46:05.887788 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 8 19:46:05.888729 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 8 19:46:05.917509 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1369)
Oct 8 19:46:05.945290 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Oct 8 19:46:05.952352 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 8 19:46:05.957095 extend-filesystems[1485]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Oct 8 19:46:05.957095 extend-filesystems[1485]: old_desc_blocks = 1, new_desc_blocks = 5
Oct 8 19:46:05.957095 extend-filesystems[1485]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Oct 8 19:46:05.964654 extend-filesystems[1440]: Resized filesystem in /dev/sda9
Oct 8 19:46:05.964654 extend-filesystems[1440]: Found sr0
Oct 8 19:46:05.959669 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 8 19:46:05.965851 bash[1504]: Updated "/home/core/.ssh/authorized_keys"
Oct 8 19:46:05.959850 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 8 19:46:05.965535 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 8 19:46:05.980873 systemd[1]: Starting sshkeys.service...
Oct 8 19:46:06.005732 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 8 19:46:06.014782 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Oct 8 19:46:06.079398 coreos-metadata[1520]: Oct 08 19:46:06.079 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Oct 8 19:46:06.081132 coreos-metadata[1520]: Oct 08 19:46:06.080 INFO Fetch successful
Oct 8 19:46:06.083636 unknown[1520]: wrote ssh authorized keys file for user: core
Oct 8 19:46:06.125331 update-ssh-keys[1523]: Updated "/home/core/.ssh/authorized_keys"
Oct 8 19:46:06.126885 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Oct 8 19:46:06.136014 systemd[1]: Finished sshkeys.service.
Oct 8 19:46:06.218199 containerd[1465]: time="2024-10-08T19:46:06.218030120Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Oct 8 19:46:06.273445 containerd[1465]: time="2024-10-08T19:46:06.270103680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:46:06.273445 containerd[1465]: time="2024-10-08T19:46:06.272503200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:46:06.273445 containerd[1465]: time="2024-10-08T19:46:06.272546600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 8 19:46:06.273445 containerd[1465]: time="2024-10-08T19:46:06.272565600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 8 19:46:06.273445 containerd[1465]: time="2024-10-08T19:46:06.272738280Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 8 19:46:06.273445 containerd[1465]: time="2024-10-08T19:46:06.272763560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 8 19:46:06.273445 containerd[1465]: time="2024-10-08T19:46:06.272824600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:46:06.273445 containerd[1465]: time="2024-10-08T19:46:06.272838360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:46:06.273445 containerd[1465]: time="2024-10-08T19:46:06.273030720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:46:06.273445 containerd[1465]: time="2024-10-08T19:46:06.273048200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 8 19:46:06.273445 containerd[1465]: time="2024-10-08T19:46:06.273063280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:46:06.273737 containerd[1465]: time="2024-10-08T19:46:06.273074640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 8 19:46:06.273737 containerd[1465]: time="2024-10-08T19:46:06.273149600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:46:06.273737 containerd[1465]: time="2024-10-08T19:46:06.273354160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:46:06.273737 containerd[1465]: time="2024-10-08T19:46:06.273523000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:46:06.273737 containerd[1465]: time="2024-10-08T19:46:06.273542680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 8 19:46:06.273737 containerd[1465]: time="2024-10-08T19:46:06.273631120Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 8 19:46:06.273737 containerd[1465]: time="2024-10-08T19:46:06.273677600Z" level=info msg="metadata content store policy set" policy=shared
Oct 8 19:46:06.278349 containerd[1465]: time="2024-10-08T19:46:06.278300040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 8 19:46:06.278499 containerd[1465]: time="2024-10-08T19:46:06.278473640Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 8 19:46:06.278540 containerd[1465]: time="2024-10-08T19:46:06.278505120Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 8 19:46:06.278596 containerd[1465]: time="2024-10-08T19:46:06.278578720Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 8 19:46:06.278620 containerd[1465]: time="2024-10-08T19:46:06.278604440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 8 19:46:06.278798 containerd[1465]: time="2024-10-08T19:46:06.278777080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 8 19:46:06.279102 containerd[1465]: time="2024-10-08T19:46:06.279073600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 8 19:46:06.279220 containerd[1465]: time="2024-10-08T19:46:06.279198960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 8 19:46:06.279247 containerd[1465]: time="2024-10-08T19:46:06.279223280Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 8 19:46:06.279247 containerd[1465]: time="2024-10-08T19:46:06.279239560Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 8 19:46:06.279288 containerd[1465]: time="2024-10-08T19:46:06.279256520Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 8 19:46:06.279288 containerd[1465]: time="2024-10-08T19:46:06.279272280Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 8 19:46:06.279325 containerd[1465]: time="2024-10-08T19:46:06.279286440Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 8 19:46:06.279325 containerd[1465]: time="2024-10-08T19:46:06.279302320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 8 19:46:06.279325 containerd[1465]: time="2024-10-08T19:46:06.279318400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 8 19:46:06.279379 containerd[1465]: time="2024-10-08T19:46:06.279333240Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 8 19:46:06.279379 containerd[1465]: time="2024-10-08T19:46:06.279347200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 8 19:46:06.279379 containerd[1465]: time="2024-10-08T19:46:06.279361480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 8 19:46:06.279446 containerd[1465]: time="2024-10-08T19:46:06.279384680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 8 19:46:06.279446 containerd[1465]: time="2024-10-08T19:46:06.279401040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 8 19:46:06.279446 containerd[1465]: time="2024-10-08T19:46:06.279415360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 8 19:46:06.279509 containerd[1465]: time="2024-10-08T19:46:06.279450200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 8 19:46:06.279509 containerd[1465]: time="2024-10-08T19:46:06.279465680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 8 19:46:06.279509 containerd[1465]: time="2024-10-08T19:46:06.279480800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 8 19:46:06.279509 containerd[1465]: time="2024-10-08T19:46:06.279505320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 8 19:46:06.279575 containerd[1465]: time="2024-10-08T19:46:06.279521480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 8 19:46:06.279575 containerd[1465]: time="2024-10-08T19:46:06.279537920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 8 19:46:06.279575 containerd[1465]: time="2024-10-08T19:46:06.279557960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 8 19:46:06.279575 containerd[1465]: time="2024-10-08T19:46:06.279572480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 8 19:46:06.279647 containerd[1465]: time="2024-10-08T19:46:06.279586800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 8 19:46:06.279647 containerd[1465]: time="2024-10-08T19:46:06.279602440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 8 19:46:06.279647 containerd[1465]: time="2024-10-08T19:46:06.279621280Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 8 19:46:06.279702 containerd[1465]: time="2024-10-08T19:46:06.279648280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 8 19:46:06.279702 containerd[1465]: time="2024-10-08T19:46:06.279663680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 8 19:46:06.279702 containerd[1465]: time="2024-10-08T19:46:06.279677160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 8 19:46:06.279832 containerd[1465]: time="2024-10-08T19:46:06.279811200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 8 19:46:06.279859 containerd[1465]: time="2024-10-08T19:46:06.279839120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 8 19:46:06.280066 containerd[1465]: time="2024-10-08T19:46:06.279852640Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 8 19:46:06.280145 containerd[1465]: time="2024-10-08T19:46:06.280072840Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 8 19:46:06.280145 containerd[1465]: time="2024-10-08T19:46:06.280087400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 8 19:46:06.280145 containerd[1465]: time="2024-10-08T19:46:06.280116240Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 8 19:46:06.280145 containerd[1465]: time="2024-10-08T19:46:06.280130280Z" level=info msg="NRI interface is disabled by configuration."
Oct 8 19:46:06.280145 containerd[1465]: time="2024-10-08T19:46:06.280142200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 8 19:46:06.280687 containerd[1465]: time="2024-10-08T19:46:06.280615920Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 8 19:46:06.280818 containerd[1465]: time="2024-10-08T19:46:06.280695040Z" level=info msg="Connect containerd service"
Oct 8 19:46:06.280818 containerd[1465]: time="2024-10-08T19:46:06.280738560Z" level=info msg="using legacy CRI server"
Oct 8 19:46:06.280818 containerd[1465]: time="2024-10-08T19:46:06.280746600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 8 19:46:06.280875 containerd[1465]: time="2024-10-08T19:46:06.280843000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 8 19:46:06.282430 containerd[1465]: time="2024-10-08T19:46:06.281775720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 8 19:46:06.282430 containerd[1465]: time="2024-10-08T19:46:06.281983360Z" level=info msg="Start subscribing containerd event"
Oct 8 19:46:06.282430 containerd[1465]: time="2024-10-08T19:46:06.282089080Z" level=info msg="Start recovering state"
Oct 8 19:46:06.282430 containerd[1465]: time="2024-10-08T19:46:06.282164840Z" level=info msg="Start event monitor"
Oct 8 19:46:06.282430 containerd[1465]: time="2024-10-08T19:46:06.282178000Z" level=info msg="Start snapshots syncer"
Oct 8 19:46:06.282430 containerd[1465]: time="2024-10-08T19:46:06.282187800Z" level=info msg="Start cni network conf syncer for default"
Oct 8 19:46:06.282430 containerd[1465]: time="2024-10-08T19:46:06.282195840Z" level=info msg="Start streaming server"
Oct 8 19:46:06.283921 containerd[1465]: time="2024-10-08T19:46:06.283236520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 8 19:46:06.283921 containerd[1465]: time="2024-10-08T19:46:06.283483240Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 8 19:46:06.283690 systemd[1]: Started containerd.service - containerd container runtime.
Oct 8 19:46:06.288437 containerd[1465]: time="2024-10-08T19:46:06.286038160Z" level=info msg="containerd successfully booted in 0.070763s"
Oct 8 19:46:06.340643 systemd-networkd[1364]: eth1: Gained IPv6LL
Oct 8 19:46:06.341701 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection.
Oct 8 19:46:06.349560 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 8 19:46:06.351269 systemd[1]: Reached target network-online.target - Network is Online.
Oct 8 19:46:06.365678 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:46:06.368723 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 8 19:46:06.430726 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 8 19:46:06.509908 tar[1455]: linux-arm64/LICENSE
Oct 8 19:46:06.510031 tar[1455]: linux-arm64/README.md
Oct 8 19:46:06.522203 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 8 19:46:06.788621 systemd-networkd[1364]: eth0: Gained IPv6LL
Oct 8 19:46:06.791591 systemd-timesyncd[1341]: Network configuration changed, trying to establish connection.
Oct 8 19:46:07.040794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:46:07.045936 (kubelet)[1550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:46:07.182160 sshd_keygen[1469]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 8 19:46:07.207251 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 8 19:46:07.215493 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 8 19:46:07.223068 systemd[1]: issuegen.service: Deactivated successfully.
Oct 8 19:46:07.223352 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 8 19:46:07.230839 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 8 19:46:07.242511 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 8 19:46:07.248816 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 8 19:46:07.257108 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Oct 8 19:46:07.257875 systemd[1]: Reached target getty.target - Login Prompts.
Oct 8 19:46:07.258395 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 8 19:46:07.260801 systemd[1]: Startup finished in 768ms (kernel) + 5.164s (initrd) + 4.426s (userspace) = 10.360s.
Oct 8 19:46:07.585965 kubelet[1550]: E1008 19:46:07.585880 1550 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:46:07.589129 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:46:07.589474 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:46:17.725615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:46:17.731764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:46:17.848385 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:46:17.862774 (kubelet)[1587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:46:17.906805 kubelet[1587]: E1008 19:46:17.906739 1587 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:46:17.909790 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:46:17.909926 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:46:27.975784 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 8 19:46:27.986781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:46:28.113834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:46:28.133854 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:46:28.181135 kubelet[1602]: E1008 19:46:28.181036 1602 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:46:28.184971 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:46:28.185239 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:46:36.993725 systemd-timesyncd[1341]: Contacted time server 62.108.36.235:123 (2.flatcar.pool.ntp.org).
Oct 8 19:46:36.993823 systemd-timesyncd[1341]: Initial clock synchronization to Tue 2024-10-08 19:46:36.837481 UTC.
Oct 8 19:46:38.225260 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Oct 8 19:46:38.234712 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:46:38.350264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:46:38.354808 (kubelet)[1617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:46:38.394417 kubelet[1617]: E1008 19:46:38.394359 1617 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:46:38.396343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:46:38.396593 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:46:48.475770 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Oct 8 19:46:48.485711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:46:48.610703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:46:48.618985 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:46:48.664260 kubelet[1632]: E1008 19:46:48.664197 1632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:46:48.667350 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:46:48.667590 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:46:50.946468 update_engine[1448]: I20241008 19:46:50.945513 1448 update_attempter.cc:509] Updating boot flags...
Oct 8 19:46:50.994458 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1648)
Oct 8 19:46:51.056578 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1651)
Oct 8 19:46:58.725502 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Oct 8 19:46:58.735732 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:46:58.847874 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:46:58.862843 (kubelet)[1664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:46:58.919721 kubelet[1664]: E1008 19:46:58.919626 1664 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:46:58.923125 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:46:58.923362 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:47:08.975925 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Oct 8 19:47:08.983765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:47:09.099297 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:47:09.114832 (kubelet)[1680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:47:09.162356 kubelet[1680]: E1008 19:47:09.162263 1680 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:47:09.164890 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:47:09.165062 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:47:19.225621 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Oct 8 19:47:19.234741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:47:19.356093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:47:19.367844 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:47:19.414666 kubelet[1694]: E1008 19:47:19.414607 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:47:19.416685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:47:19.416816 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:47:29.475322 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Oct 8 19:47:29.489717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:47:29.602390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:47:29.607324 (kubelet)[1708]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:47:29.648864 kubelet[1708]: E1008 19:47:29.648796 1708 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:47:29.651202 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:47:29.651356 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:47:39.725651 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Oct 8 19:47:39.731809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:47:39.857937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:47:39.868832 (kubelet)[1724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:47:39.908422 kubelet[1724]: E1008 19:47:39.908350 1724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:47:39.911536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:47:39.911777 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:47:49.976005 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Oct 8 19:47:49.982691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:47:50.082617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:47:50.087664 (kubelet)[1739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:47:50.129447 kubelet[1739]: E1008 19:47:50.129322 1739 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:47:50.132509 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:47:50.132854 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:47:57.623721 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 8 19:47:57.634968 systemd[1]: Started sshd@0-188.245.175.188:22-139.178.89.65:47152.service - OpenSSH per-connection server daemon (139.178.89.65:47152).
Oct 8 19:47:58.616546 sshd[1746]: Accepted publickey for core from 139.178.89.65 port 47152 ssh2: RSA SHA256:FcMQ9ewYvQVD+MdYYKqDZrZLLKJM+ArOzyf29ubPns4
Oct 8 19:47:58.621611 sshd[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:47:58.632719 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 8 19:47:58.641234 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 8 19:47:58.645119 systemd-logind[1445]: New session 1 of user core.
Oct 8 19:47:58.655328 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 8 19:47:58.661871 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 8 19:47:58.668837 (systemd)[1750]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:47:58.777283 systemd[1750]: Queued start job for default target default.target.
Oct 8 19:47:58.788536 systemd[1750]: Created slice app.slice - User Application Slice.
Oct 8 19:47:58.788568 systemd[1750]: Reached target paths.target - Paths.
Oct 8 19:47:58.788581 systemd[1750]: Reached target timers.target - Timers.
Oct 8 19:47:58.789941 systemd[1750]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 8 19:47:58.810629 systemd[1750]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 8 19:47:58.810804 systemd[1750]: Reached target sockets.target - Sockets.
Oct 8 19:47:58.810832 systemd[1750]: Reached target basic.target - Basic System.
Oct 8 19:47:58.810897 systemd[1750]: Reached target default.target - Main User Target.
Oct 8 19:47:58.810934 systemd[1750]: Startup finished in 136ms.
Oct 8 19:47:58.811324 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 19:47:58.822841 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 19:47:59.514004 systemd[1]: Started sshd@1-188.245.175.188:22-139.178.89.65:47160.service - OpenSSH per-connection server daemon (139.178.89.65:47160). Oct 8 19:48:00.225765 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Oct 8 19:48:00.235743 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:48:00.354380 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:48:00.370006 (kubelet)[1771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:48:00.414285 kubelet[1771]: E1008 19:48:00.414170 1771 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:48:00.416769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:48:00.416935 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:48:00.497355 sshd[1761]: Accepted publickey for core from 139.178.89.65 port 47160 ssh2: RSA SHA256:FcMQ9ewYvQVD+MdYYKqDZrZLLKJM+ArOzyf29ubPns4 Oct 8 19:48:00.499745 sshd[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:48:00.505667 systemd-logind[1445]: New session 2 of user core. Oct 8 19:48:00.518762 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 19:48:01.180472 sshd[1761]: pam_unix(sshd:session): session closed for user core Oct 8 19:48:01.187005 systemd[1]: sshd@1-188.245.175.188:22-139.178.89.65:47160.service: Deactivated successfully. 
Oct 8 19:48:01.189847 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 19:48:01.190889 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Oct 8 19:48:01.192887 systemd-logind[1445]: Removed session 2. Oct 8 19:48:01.359961 systemd[1]: Started sshd@2-188.245.175.188:22-139.178.89.65:47168.service - OpenSSH per-connection server daemon (139.178.89.65:47168). Oct 8 19:48:02.334295 sshd[1783]: Accepted publickey for core from 139.178.89.65 port 47168 ssh2: RSA SHA256:FcMQ9ewYvQVD+MdYYKqDZrZLLKJM+ArOzyf29ubPns4 Oct 8 19:48:02.336323 sshd[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:48:02.341891 systemd-logind[1445]: New session 3 of user core. Oct 8 19:48:02.350657 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 19:48:03.010911 sshd[1783]: pam_unix(sshd:session): session closed for user core Oct 8 19:48:03.016383 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Oct 8 19:48:03.016523 systemd[1]: sshd@2-188.245.175.188:22-139.178.89.65:47168.service: Deactivated successfully. Oct 8 19:48:03.019275 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 19:48:03.021991 systemd-logind[1445]: Removed session 3. Oct 8 19:48:03.181790 systemd[1]: Started sshd@3-188.245.175.188:22-139.178.89.65:47170.service - OpenSSH per-connection server daemon (139.178.89.65:47170). Oct 8 19:48:04.138517 sshd[1790]: Accepted publickey for core from 139.178.89.65 port 47170 ssh2: RSA SHA256:FcMQ9ewYvQVD+MdYYKqDZrZLLKJM+ArOzyf29ubPns4 Oct 8 19:48:04.141167 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:48:04.147310 systemd-logind[1445]: New session 4 of user core. Oct 8 19:48:04.158751 systemd[1]: Started session-4.scope - Session 4 of User core. 
Oct 8 19:48:04.809203 sshd[1790]: pam_unix(sshd:session): session closed for user core Oct 8 19:48:04.815527 systemd[1]: sshd@3-188.245.175.188:22-139.178.89.65:47170.service: Deactivated successfully. Oct 8 19:48:04.819374 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 19:48:04.822179 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Oct 8 19:48:04.824131 systemd-logind[1445]: Removed session 4. Oct 8 19:48:04.981774 systemd[1]: Started sshd@4-188.245.175.188:22-139.178.89.65:47176.service - OpenSSH per-connection server daemon (139.178.89.65:47176). Oct 8 19:48:05.941206 sshd[1797]: Accepted publickey for core from 139.178.89.65 port 47176 ssh2: RSA SHA256:FcMQ9ewYvQVD+MdYYKqDZrZLLKJM+ArOzyf29ubPns4 Oct 8 19:48:05.942971 sshd[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:48:05.948141 systemd-logind[1445]: New session 5 of user core. Oct 8 19:48:05.958672 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 8 19:48:06.478327 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 19:48:06.478667 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:48:06.495967 sudo[1800]: pam_unix(sudo:session): session closed for user root Oct 8 19:48:06.653415 sshd[1797]: pam_unix(sshd:session): session closed for user core Oct 8 19:48:06.659203 systemd[1]: sshd@4-188.245.175.188:22-139.178.89.65:47176.service: Deactivated successfully. Oct 8 19:48:06.662273 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 19:48:06.664490 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Oct 8 19:48:06.665939 systemd-logind[1445]: Removed session 5. Oct 8 19:48:06.833784 systemd[1]: Started sshd@5-188.245.175.188:22-139.178.89.65:45956.service - OpenSSH per-connection server daemon (139.178.89.65:45956). 
Oct 8 19:48:07.853601 sshd[1805]: Accepted publickey for core from 139.178.89.65 port 45956 ssh2: RSA SHA256:FcMQ9ewYvQVD+MdYYKqDZrZLLKJM+ArOzyf29ubPns4 Oct 8 19:48:07.855598 sshd[1805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:48:07.861656 systemd-logind[1445]: New session 6 of user core. Oct 8 19:48:07.870738 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 8 19:48:08.391096 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 19:48:08.391496 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:48:08.396191 sudo[1809]: pam_unix(sudo:session): session closed for user root Oct 8 19:48:08.401242 sudo[1808]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 19:48:08.401553 sudo[1808]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:48:08.418190 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 19:48:08.419312 auditctl[1812]: No rules Oct 8 19:48:08.419750 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 19:48:08.421494 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 19:48:08.424038 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 19:48:08.462809 augenrules[1830]: No rules Oct 8 19:48:08.465482 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 19:48:08.466920 sudo[1808]: pam_unix(sudo:session): session closed for user root Oct 8 19:48:08.631778 sshd[1805]: pam_unix(sshd:session): session closed for user core Oct 8 19:48:08.638238 systemd[1]: sshd@5-188.245.175.188:22-139.178.89.65:45956.service: Deactivated successfully. Oct 8 19:48:08.640609 systemd[1]: session-6.scope: Deactivated successfully. 
Oct 8 19:48:08.641942 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Oct 8 19:48:08.643856 systemd-logind[1445]: Removed session 6. Oct 8 19:48:08.808142 systemd[1]: Started sshd@6-188.245.175.188:22-139.178.89.65:45972.service - OpenSSH per-connection server daemon (139.178.89.65:45972). Oct 8 19:48:09.823264 sshd[1838]: Accepted publickey for core from 139.178.89.65 port 45972 ssh2: RSA SHA256:FcMQ9ewYvQVD+MdYYKqDZrZLLKJM+ArOzyf29ubPns4 Oct 8 19:48:09.825667 sshd[1838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:48:09.831646 systemd-logind[1445]: New session 7 of user core. Oct 8 19:48:09.843079 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 19:48:10.358299 sudo[1841]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 19:48:10.359322 sudo[1841]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 19:48:10.475709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Oct 8 19:48:10.486978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:48:10.626620 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:48:10.631450 (kubelet)[1863]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:48:10.677585 kubelet[1863]: E1008 19:48:10.677043 1863 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:48:10.679232 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:48:10.679484 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 8 19:48:10.758800 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 8 19:48:10.759221 (dockerd)[1871]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 19:48:11.047458 dockerd[1871]: time="2024-10-08T19:48:11.046897173Z" level=info msg="Starting up" Oct 8 19:48:11.149952 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1359547235-merged.mount: Deactivated successfully. Oct 8 19:48:11.177012 dockerd[1871]: time="2024-10-08T19:48:11.176748927Z" level=info msg="Loading containers: start." Oct 8 19:48:11.307467 kernel: Initializing XFRM netlink socket Oct 8 19:48:11.398324 systemd-networkd[1364]: docker0: Link UP Oct 8 19:48:11.419847 dockerd[1871]: time="2024-10-08T19:48:11.419795358Z" level=info msg="Loading containers: done." Oct 8 19:48:11.436092 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2800615808-merged.mount: Deactivated successfully. Oct 8 19:48:11.437336 dockerd[1871]: time="2024-10-08T19:48:11.437279681Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 19:48:11.437798 dockerd[1871]: time="2024-10-08T19:48:11.437763204Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 8 19:48:11.437931 dockerd[1871]: time="2024-10-08T19:48:11.437910445Z" level=info msg="Daemon has completed initialization" Oct 8 19:48:11.477477 dockerd[1871]: time="2024-10-08T19:48:11.476716478Z" level=info msg="API listen on /run/docker.sock" Oct 8 19:48:11.476892 systemd[1]: Started docker.service - Docker Application Container Engine. 
Oct 8 19:48:12.047276 containerd[1465]: time="2024-10-08T19:48:12.047203052Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\"" Oct 8 19:48:12.715348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount692371460.mount: Deactivated successfully. Oct 8 19:48:13.715363 containerd[1465]: time="2024-10-08T19:48:13.715298401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:13.716699 containerd[1465]: time="2024-10-08T19:48:13.716617651Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.0: active requests=0, bytes read=25691613" Oct 8 19:48:13.717886 containerd[1465]: time="2024-10-08T19:48:13.717725340Z" level=info msg="ImageCreate event name:\"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:13.725211 containerd[1465]: time="2024-10-08T19:48:13.724978599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:13.727410 containerd[1465]: time="2024-10-08T19:48:13.727082616Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.0\" with image id \"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.0\", repo digest \"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\", size \"25688321\" in 1.679825724s" Oct 8 19:48:13.727410 containerd[1465]: time="2024-10-08T19:48:13.727149537Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\" returns image reference \"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388\"" Oct 8 19:48:13.728688 containerd[1465]: time="2024-10-08T19:48:13.728381067Z" 
level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\"" Oct 8 19:48:15.082354 containerd[1465]: time="2024-10-08T19:48:15.082294025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:15.083362 containerd[1465]: time="2024-10-08T19:48:15.083319002Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.0: active requests=0, bytes read=22460106" Oct 8 19:48:15.084455 containerd[1465]: time="2024-10-08T19:48:15.084413420Z" level=info msg="ImageCreate event name:\"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:15.087339 containerd[1465]: time="2024-10-08T19:48:15.087284747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:15.088719 containerd[1465]: time="2024-10-08T19:48:15.088580368Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.0\" with image id \"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.0\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\", size \"23947353\" in 1.360150901s" Oct 8 19:48:15.088719 containerd[1465]: time="2024-10-08T19:48:15.088620449Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\" returns image reference \"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd\"" Oct 8 19:48:15.089511 containerd[1465]: time="2024-10-08T19:48:15.089216419Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\"" Oct 8 19:48:16.372274 containerd[1465]: 
time="2024-10-08T19:48:16.372221954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:16.373843 containerd[1465]: time="2024-10-08T19:48:16.373643977Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.0: active requests=0, bytes read=17018578" Oct 8 19:48:16.373843 containerd[1465]: time="2024-10-08T19:48:16.373794500Z" level=info msg="ImageCreate event name:\"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:16.377449 containerd[1465]: time="2024-10-08T19:48:16.376923150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:16.378557 containerd[1465]: time="2024-10-08T19:48:16.378130810Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.0\" with image id \"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.0\", repo digest \"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\", size \"18505843\" in 1.288878911s" Oct 8 19:48:16.378557 containerd[1465]: time="2024-10-08T19:48:16.378172650Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\" returns image reference \"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb\"" Oct 8 19:48:16.378716 containerd[1465]: time="2024-10-08T19:48:16.378679499Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\"" Oct 8 19:48:17.346026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount55092213.mount: Deactivated successfully. 
Oct 8 19:48:17.680801 containerd[1465]: time="2024-10-08T19:48:17.680610009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:17.682076 containerd[1465]: time="2024-10-08T19:48:17.682012111Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.0: active requests=0, bytes read=26753341" Oct 8 19:48:17.682773 containerd[1465]: time="2024-10-08T19:48:17.682684802Z" level=info msg="ImageCreate event name:\"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:17.685185 containerd[1465]: time="2024-10-08T19:48:17.685128441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:17.686090 containerd[1465]: time="2024-10-08T19:48:17.685940814Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.0\" with image id \"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.0\", repo digest \"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\", size \"26752334\" in 1.307232314s" Oct 8 19:48:17.686090 containerd[1465]: time="2024-10-08T19:48:17.685985534Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\" returns image reference \"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89\"" Oct 8 19:48:17.686877 containerd[1465]: time="2024-10-08T19:48:17.686469942Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 19:48:18.262950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1707556943.mount: Deactivated successfully. 
Oct 8 19:48:18.918764 containerd[1465]: time="2024-10-08T19:48:18.918706884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:18.919988 containerd[1465]: time="2024-10-08T19:48:18.919937023Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Oct 8 19:48:18.921414 containerd[1465]: time="2024-10-08T19:48:18.921360925Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:18.927212 containerd[1465]: time="2024-10-08T19:48:18.927122455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:18.929327 containerd[1465]: time="2024-10-08T19:48:18.929158127Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.242652385s" Oct 8 19:48:18.929327 containerd[1465]: time="2024-10-08T19:48:18.929204968Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Oct 8 19:48:18.929919 containerd[1465]: time="2024-10-08T19:48:18.929688335Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 8 19:48:19.495174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3005563687.mount: Deactivated successfully. 
Oct 8 19:48:19.505149 containerd[1465]: time="2024-10-08T19:48:19.505062276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:19.507541 containerd[1465]: time="2024-10-08T19:48:19.507372631Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Oct 8 19:48:19.508637 containerd[1465]: time="2024-10-08T19:48:19.508561130Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:19.510913 containerd[1465]: time="2024-10-08T19:48:19.510855165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:19.511976 containerd[1465]: time="2024-10-08T19:48:19.511567576Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 581.83392ms" Oct 8 19:48:19.511976 containerd[1465]: time="2024-10-08T19:48:19.511601376Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 8 19:48:19.512371 containerd[1465]: time="2024-10-08T19:48:19.512248146Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Oct 8 19:48:20.138068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2471559079.mount: Deactivated successfully. Oct 8 19:48:20.725236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. 
Oct 8 19:48:20.732767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:48:20.864615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:48:20.868037 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:48:20.925302 kubelet[2189]: E1008 19:48:20.925102 2189 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:48:20.927453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:48:20.928446 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:48:21.541888 containerd[1465]: time="2024-10-08T19:48:21.541828579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:21.543464 containerd[1465]: time="2024-10-08T19:48:21.543238480Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=65868242" Oct 8 19:48:21.545126 containerd[1465]: time="2024-10-08T19:48:21.545061028Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:21.550098 containerd[1465]: time="2024-10-08T19:48:21.550001941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:21.552023 containerd[1465]: time="2024-10-08T19:48:21.551839128Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" 
with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.039542541s" Oct 8 19:48:21.552023 containerd[1465]: time="2024-10-08T19:48:21.551891169Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Oct 8 19:48:26.916263 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:48:26.927492 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:48:26.954614 systemd[1]: Reloading requested from client PID 2224 ('systemctl') (unit session-7.scope)... Oct 8 19:48:26.954628 systemd[1]: Reloading... Oct 8 19:48:27.069461 zram_generator::config[2266]: No configuration found. Oct 8 19:48:27.163591 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:48:27.229389 systemd[1]: Reloading finished in 274 ms. Oct 8 19:48:27.284696 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 8 19:48:27.284757 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 8 19:48:27.284980 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:48:27.288103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:48:27.425677 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:48:27.425862 (kubelet)[2312]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:48:27.472647 kubelet[2312]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:48:27.472647 kubelet[2312]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:48:27.472647 kubelet[2312]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:48:27.473045 kubelet[2312]: I1008 19:48:27.472811 2312 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:48:28.266292 kubelet[2312]: I1008 19:48:28.266233 2312 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 8 19:48:28.266292 kubelet[2312]: I1008 19:48:28.266274 2312 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:48:28.266586 kubelet[2312]: I1008 19:48:28.266552 2312 server.go:929] "Client rotation is on, will bootstrap in background" Oct 8 19:48:28.300572 kubelet[2312]: E1008 19:48:28.300525 2312 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://188.245.175.188:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 188.245.175.188:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:48:28.300712 kubelet[2312]: I1008 
19:48:28.300672 2312 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:48:28.310978 kubelet[2312]: E1008 19:48:28.310920 2312 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 8 19:48:28.310978 kubelet[2312]: I1008 19:48:28.310964 2312 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 8 19:48:28.320198 kubelet[2312]: I1008 19:48:28.319910 2312 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 8 19:48:28.320933 kubelet[2312]: I1008 19:48:28.320913 2312 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 8 19:48:28.321197 kubelet[2312]: I1008 19:48:28.321169 2312 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:48:28.321449 kubelet[2312]: I1008 19:48:28.321248 2312 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081-1-0-2-870ec424ae","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 8 19:48:28.321722 kubelet[2312]: I1008 19:48:28.321708 2312 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:48:28.322317 kubelet[2312]: I1008 19:48:28.321770 2312 container_manager_linux.go:300] "Creating device plugin manager" Oct 8 19:48:28.322317 kubelet[2312]: I1008 19:48:28.322024 2312 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:48:28.324362 kubelet[2312]: I1008 19:48:28.324340 2312 kubelet.go:408] 
"Attempting to sync node with API server" Oct 8 19:48:28.324482 kubelet[2312]: I1008 19:48:28.324471 2312 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:48:28.324618 kubelet[2312]: I1008 19:48:28.324609 2312 kubelet.go:314] "Adding apiserver pod source" Oct 8 19:48:28.324673 kubelet[2312]: I1008 19:48:28.324664 2312 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:48:28.330672 kubelet[2312]: W1008 19:48:28.330592 2312 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.175.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-2-870ec424ae&limit=500&resourceVersion=0": dial tcp 188.245.175.188:6443: connect: connection refused Oct 8 19:48:28.330779 kubelet[2312]: E1008 19:48:28.330685 2312 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://188.245.175.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-2-870ec424ae&limit=500&resourceVersion=0\": dial tcp 188.245.175.188:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:48:28.331296 kubelet[2312]: W1008 19:48:28.331238 2312 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.175.188:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 188.245.175.188:6443: connect: connection refused Oct 8 19:48:28.331345 kubelet[2312]: E1008 19:48:28.331304 2312 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://188.245.175.188:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.175.188:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:48:28.332007 kubelet[2312]: I1008 19:48:28.331972 2312 
kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 19:48:28.336672 kubelet[2312]: I1008 19:48:28.335272 2312 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:48:28.338662 kubelet[2312]: W1008 19:48:28.338624 2312 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 8 19:48:28.340361 kubelet[2312]: I1008 19:48:28.339715 2312 server.go:1269] "Started kubelet" Oct 8 19:48:28.341214 kubelet[2312]: I1008 19:48:28.341188 2312 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:48:28.347453 kubelet[2312]: E1008 19:48:28.344577 2312 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.175.188:6443/api/v1/namespaces/default/events\": dial tcp 188.245.175.188:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-1-0-2-870ec424ae.17fc9205455d93fc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-1-0-2-870ec424ae,UID:ci-4081-1-0-2-870ec424ae,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-2-870ec424ae,},FirstTimestamp:2024-10-08 19:48:28.339680252 +0000 UTC m=+0.908551247,LastTimestamp:2024-10-08 19:48:28.339680252 +0000 UTC m=+0.908551247,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-2-870ec424ae,}" Oct 8 19:48:28.347856 kubelet[2312]: I1008 19:48:28.347823 2312 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 8 19:48:28.348201 kubelet[2312]: E1008 19:48:28.348172 2312 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-1-0-2-870ec424ae\" not found" Oct 8 19:48:28.348649 
kubelet[2312]: I1008 19:48:28.348606 2312 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:48:28.350132 kubelet[2312]: I1008 19:48:28.350108 2312 server.go:460] "Adding debug handlers to kubelet server" Oct 8 19:48:28.350875 kubelet[2312]: I1008 19:48:28.350840 2312 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 8 19:48:28.350947 kubelet[2312]: I1008 19:48:28.350916 2312 reconciler.go:26] "Reconciler: start to sync state" Oct 8 19:48:28.351539 kubelet[2312]: I1008 19:48:28.351472 2312 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:48:28.351942 kubelet[2312]: I1008 19:48:28.351913 2312 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:48:28.352288 kubelet[2312]: I1008 19:48:28.352266 2312 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 8 19:48:28.352894 kubelet[2312]: E1008 19:48:28.352852 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.175.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-2-870ec424ae?timeout=10s\": dial tcp 188.245.175.188:6443: connect: connection refused" interval="200ms" Oct 8 19:48:28.353846 kubelet[2312]: W1008 19:48:28.353796 2312 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.175.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.175.188:6443: connect: connection refused Oct 8 19:48:28.353989 kubelet[2312]: E1008 19:48:28.353965 2312 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://188.245.175.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.175.188:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:48:28.355621 kubelet[2312]: E1008 19:48:28.355597 2312 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:48:28.356617 kubelet[2312]: I1008 19:48:28.356598 2312 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:48:28.356713 kubelet[2312]: I1008 19:48:28.356704 2312 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:48:28.356848 kubelet[2312]: I1008 19:48:28.356832 2312 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:48:28.361787 kubelet[2312]: I1008 19:48:28.361736 2312 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:48:28.362775 kubelet[2312]: I1008 19:48:28.362741 2312 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 19:48:28.362775 kubelet[2312]: I1008 19:48:28.362770 2312 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:48:28.362885 kubelet[2312]: I1008 19:48:28.362790 2312 kubelet.go:2321] "Starting kubelet main sync loop" Oct 8 19:48:28.362885 kubelet[2312]: E1008 19:48:28.362832 2312 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:48:28.370802 kubelet[2312]: W1008 19:48:28.370747 2312 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.175.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.175.188:6443: connect: connection refused Oct 8 19:48:28.370802 kubelet[2312]: E1008 19:48:28.370807 2312 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://188.245.175.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.175.188:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:48:28.393271 kubelet[2312]: I1008 19:48:28.393233 2312 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:48:28.393271 kubelet[2312]: I1008 19:48:28.393258 2312 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:48:28.393271 kubelet[2312]: I1008 19:48:28.393282 2312 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:48:28.395863 kubelet[2312]: I1008 19:48:28.395831 2312 policy_none.go:49] "None policy: Start" Oct 8 19:48:28.396532 kubelet[2312]: I1008 19:48:28.396516 2312 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:48:28.396589 kubelet[2312]: I1008 19:48:28.396543 2312 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:48:28.405369 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Oct 8 19:48:28.418051 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 8 19:48:28.422049 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 8 19:48:28.433926 kubelet[2312]: I1008 19:48:28.433876 2312 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:48:28.435112 kubelet[2312]: I1008 19:48:28.434368 2312 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 8 19:48:28.435112 kubelet[2312]: I1008 19:48:28.434388 2312 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 19:48:28.435112 kubelet[2312]: I1008 19:48:28.434850 2312 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:48:28.439609 kubelet[2312]: E1008 19:48:28.439555 2312 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-1-0-2-870ec424ae\" not found" Oct 8 19:48:28.481361 systemd[1]: Created slice kubepods-burstable-podc27f2a069e417c76bd5c270eab222d6e.slice - libcontainer container kubepods-burstable-podc27f2a069e417c76bd5c270eab222d6e.slice. Oct 8 19:48:28.508225 systemd[1]: Created slice kubepods-burstable-poda3130daf7afb5fac66a5f1b57584a121.slice - libcontainer container kubepods-burstable-poda3130daf7afb5fac66a5f1b57584a121.slice. Oct 8 19:48:28.515173 systemd[1]: Created slice kubepods-burstable-pod6c09c8e6c662f23c4739e5c827373ce8.slice - libcontainer container kubepods-burstable-pod6c09c8e6c662f23c4739e5c827373ce8.slice. 
Oct 8 19:48:28.537759 kubelet[2312]: I1008 19:48:28.537650 2312 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-1-0-2-870ec424ae" Oct 8 19:48:28.538992 kubelet[2312]: E1008 19:48:28.538961 2312 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://188.245.175.188:6443/api/v1/nodes\": dial tcp 188.245.175.188:6443: connect: connection refused" node="ci-4081-1-0-2-870ec424ae" Oct 8 19:48:28.552369 kubelet[2312]: I1008 19:48:28.552291 2312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c09c8e6c662f23c4739e5c827373ce8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-1-0-2-870ec424ae\" (UID: \"6c09c8e6c662f23c4739e5c827373ce8\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-2-870ec424ae" Oct 8 19:48:28.552369 kubelet[2312]: I1008 19:48:28.552368 2312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c27f2a069e417c76bd5c270eab222d6e-ca-certs\") pod \"kube-apiserver-ci-4081-1-0-2-870ec424ae\" (UID: \"c27f2a069e417c76bd5c270eab222d6e\") " pod="kube-system/kube-apiserver-ci-4081-1-0-2-870ec424ae" Oct 8 19:48:28.552632 kubelet[2312]: I1008 19:48:28.552411 2312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c27f2a069e417c76bd5c270eab222d6e-k8s-certs\") pod \"kube-apiserver-ci-4081-1-0-2-870ec424ae\" (UID: \"c27f2a069e417c76bd5c270eab222d6e\") " pod="kube-system/kube-apiserver-ci-4081-1-0-2-870ec424ae" Oct 8 19:48:28.552632 kubelet[2312]: I1008 19:48:28.552469 2312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c09c8e6c662f23c4739e5c827373ce8-ca-certs\") pod 
\"kube-controller-manager-ci-4081-1-0-2-870ec424ae\" (UID: \"6c09c8e6c662f23c4739e5c827373ce8\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-2-870ec424ae" Oct 8 19:48:28.552632 kubelet[2312]: I1008 19:48:28.552508 2312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6c09c8e6c662f23c4739e5c827373ce8-kubeconfig\") pod \"kube-controller-manager-ci-4081-1-0-2-870ec424ae\" (UID: \"6c09c8e6c662f23c4739e5c827373ce8\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-2-870ec424ae" Oct 8 19:48:28.552632 kubelet[2312]: I1008 19:48:28.552545 2312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c27f2a069e417c76bd5c270eab222d6e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-1-0-2-870ec424ae\" (UID: \"c27f2a069e417c76bd5c270eab222d6e\") " pod="kube-system/kube-apiserver-ci-4081-1-0-2-870ec424ae" Oct 8 19:48:28.552632 kubelet[2312]: I1008 19:48:28.552578 2312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6c09c8e6c662f23c4739e5c827373ce8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-1-0-2-870ec424ae\" (UID: \"6c09c8e6c662f23c4739e5c827373ce8\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-2-870ec424ae" Oct 8 19:48:28.552881 kubelet[2312]: I1008 19:48:28.552610 2312 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c09c8e6c662f23c4739e5c827373ce8-k8s-certs\") pod \"kube-controller-manager-ci-4081-1-0-2-870ec424ae\" (UID: \"6c09c8e6c662f23c4739e5c827373ce8\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-2-870ec424ae" Oct 8 19:48:28.552881 kubelet[2312]: I1008 19:48:28.552647 2312 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3130daf7afb5fac66a5f1b57584a121-kubeconfig\") pod \"kube-scheduler-ci-4081-1-0-2-870ec424ae\" (UID: \"a3130daf7afb5fac66a5f1b57584a121\") " pod="kube-system/kube-scheduler-ci-4081-1-0-2-870ec424ae" Oct 8 19:48:28.553826 kubelet[2312]: E1008 19:48:28.553762 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.175.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-2-870ec424ae?timeout=10s\": dial tcp 188.245.175.188:6443: connect: connection refused" interval="400ms" Oct 8 19:48:28.742111 kubelet[2312]: I1008 19:48:28.741718 2312 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-1-0-2-870ec424ae" Oct 8 19:48:28.742345 kubelet[2312]: E1008 19:48:28.742301 2312 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://188.245.175.188:6443/api/v1/nodes\": dial tcp 188.245.175.188:6443: connect: connection refused" node="ci-4081-1-0-2-870ec424ae" Oct 8 19:48:28.804113 containerd[1465]: time="2024-10-08T19:48:28.803420965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-1-0-2-870ec424ae,Uid:c27f2a069e417c76bd5c270eab222d6e,Namespace:kube-system,Attempt:0,}" Oct 8 19:48:28.814216 containerd[1465]: time="2024-10-08T19:48:28.814096028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-1-0-2-870ec424ae,Uid:a3130daf7afb5fac66a5f1b57584a121,Namespace:kube-system,Attempt:0,}" Oct 8 19:48:28.818638 containerd[1465]: time="2024-10-08T19:48:28.818512967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-1-0-2-870ec424ae,Uid:6c09c8e6c662f23c4739e5c827373ce8,Namespace:kube-system,Attempt:0,}" Oct 8 19:48:28.954668 kubelet[2312]: E1008 19:48:28.954495 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://188.245.175.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-2-870ec424ae?timeout=10s\": dial tcp 188.245.175.188:6443: connect: connection refused" interval="800ms" Oct 8 19:48:29.145738 kubelet[2312]: I1008 19:48:29.145596 2312 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-1-0-2-870ec424ae" Oct 8 19:48:29.146361 kubelet[2312]: E1008 19:48:29.145988 2312 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://188.245.175.188:6443/api/v1/nodes\": dial tcp 188.245.175.188:6443: connect: connection refused" node="ci-4081-1-0-2-870ec424ae" Oct 8 19:48:29.202700 kubelet[2312]: W1008 19:48:29.202569 2312 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.175.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-2-870ec424ae&limit=500&resourceVersion=0": dial tcp 188.245.175.188:6443: connect: connection refused Oct 8 19:48:29.202948 kubelet[2312]: E1008 19:48:29.202868 2312 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://188.245.175.188:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-2-870ec424ae&limit=500&resourceVersion=0\": dial tcp 188.245.175.188:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:48:29.267553 kubelet[2312]: W1008 19:48:29.267474 2312 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.175.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.175.188:6443: connect: connection refused Oct 8 19:48:29.267819 kubelet[2312]: E1008 19:48:29.267562 2312 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://188.245.175.188:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.175.188:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:48:29.347500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3915695711.mount: Deactivated successfully. Oct 8 19:48:29.354470 containerd[1465]: time="2024-10-08T19:48:29.354396297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:48:29.355830 containerd[1465]: time="2024-10-08T19:48:29.355799555Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Oct 8 19:48:29.359256 containerd[1465]: time="2024-10-08T19:48:29.359010158Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:48:29.360686 containerd[1465]: time="2024-10-08T19:48:29.360647219Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:48:29.361488 containerd[1465]: time="2024-10-08T19:48:29.361412069Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:48:29.362046 containerd[1465]: time="2024-10-08T19:48:29.362017877Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:48:29.363040 containerd[1465]: time="2024-10-08T19:48:29.362973450Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:48:29.367112 containerd[1465]: 
time="2024-10-08T19:48:29.366995343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:48:29.370133 containerd[1465]: time="2024-10-08T19:48:29.369865221Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 566.287773ms" Oct 8 19:48:29.372535 containerd[1465]: time="2024-10-08T19:48:29.372484175Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 558.252266ms" Oct 8 19:48:29.374373 containerd[1465]: time="2024-10-08T19:48:29.374195318Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 555.529229ms" Oct 8 19:48:29.522941 containerd[1465]: time="2024-10-08T19:48:29.522729113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:48:29.523321 containerd[1465]: time="2024-10-08T19:48:29.522885955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:48:29.523321 containerd[1465]: time="2024-10-08T19:48:29.523297001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:29.523807 containerd[1465]: time="2024-10-08T19:48:29.523698006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:29.527308 containerd[1465]: time="2024-10-08T19:48:29.526804807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:48:29.528330 containerd[1465]: time="2024-10-08T19:48:29.527465415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:48:29.528330 containerd[1465]: time="2024-10-08T19:48:29.527501936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:29.528330 containerd[1465]: time="2024-10-08T19:48:29.527634338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:29.536386 containerd[1465]: time="2024-10-08T19:48:29.536264211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:48:29.536819 containerd[1465]: time="2024-10-08T19:48:29.536464854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:48:29.537382 containerd[1465]: time="2024-10-08T19:48:29.537195343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:29.537382 containerd[1465]: time="2024-10-08T19:48:29.537305305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:29.552687 systemd[1]: Started cri-containerd-b0cfdc1c57a5abbbb11d85658199ca7ed38e7515c19d8575f00e61f250dcc238.scope - libcontainer container b0cfdc1c57a5abbbb11d85658199ca7ed38e7515c19d8575f00e61f250dcc238. Oct 8 19:48:29.560462 systemd[1]: Started cri-containerd-1291a9b762e421a686fabcc0c9787ba078bb69f9b4bf8c112dcf507aa3583c15.scope - libcontainer container 1291a9b762e421a686fabcc0c9787ba078bb69f9b4bf8c112dcf507aa3583c15. Oct 8 19:48:29.571090 systemd[1]: Started cri-containerd-fb07da4510f63f3c5bef77d3cce72294f0dcdc4ecfc1dea80bf4f1ba02bea8b3.scope - libcontainer container fb07da4510f63f3c5bef77d3cce72294f0dcdc4ecfc1dea80bf4f1ba02bea8b3. Oct 8 19:48:29.605658 containerd[1465]: time="2024-10-08T19:48:29.605603804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-1-0-2-870ec424ae,Uid:a3130daf7afb5fac66a5f1b57584a121,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0cfdc1c57a5abbbb11d85658199ca7ed38e7515c19d8575f00e61f250dcc238\"" Oct 8 19:48:29.611017 containerd[1465]: time="2024-10-08T19:48:29.610880554Z" level=info msg="CreateContainer within sandbox \"b0cfdc1c57a5abbbb11d85658199ca7ed38e7515c19d8575f00e61f250dcc238\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 19:48:29.636648 kubelet[2312]: W1008 19:48:29.636382 2312 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.175.188:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 188.245.175.188:6443: connect: connection refused Oct 8 19:48:29.638165 kubelet[2312]: E1008 19:48:29.637916 2312 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://188.245.175.188:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.175.188:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:48:29.638277 containerd[1465]: time="2024-10-08T19:48:29.638049791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-1-0-2-870ec424ae,Uid:6c09c8e6c662f23c4739e5c827373ce8,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb07da4510f63f3c5bef77d3cce72294f0dcdc4ecfc1dea80bf4f1ba02bea8b3\"" Oct 8 19:48:29.640845 containerd[1465]: time="2024-10-08T19:48:29.640329461Z" level=info msg="CreateContainer within sandbox \"b0cfdc1c57a5abbbb11d85658199ca7ed38e7515c19d8575f00e61f250dcc238\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"45fdf572a4f3c84f8584a7c5193920ca01a6b26b8da5a0ecce9c6dc77d98a217\"" Oct 8 19:48:29.641713 containerd[1465]: time="2024-10-08T19:48:29.641677759Z" level=info msg="StartContainer for \"45fdf572a4f3c84f8584a7c5193920ca01a6b26b8da5a0ecce9c6dc77d98a217\"" Oct 8 19:48:29.643400 containerd[1465]: time="2024-10-08T19:48:29.643365941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-1-0-2-870ec424ae,Uid:c27f2a069e417c76bd5c270eab222d6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1291a9b762e421a686fabcc0c9787ba078bb69f9b4bf8c112dcf507aa3583c15\"" Oct 8 19:48:29.644515 containerd[1465]: time="2024-10-08T19:48:29.644348834Z" level=info msg="CreateContainer within sandbox \"fb07da4510f63f3c5bef77d3cce72294f0dcdc4ecfc1dea80bf4f1ba02bea8b3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 19:48:29.660924 containerd[1465]: time="2024-10-08T19:48:29.660871252Z" level=info msg="CreateContainer within sandbox \"fb07da4510f63f3c5bef77d3cce72294f0dcdc4ecfc1dea80bf4f1ba02bea8b3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} 
returns container id \"c9c36c39e03b34bff84b0909e2e25aa58dd5b58f51abcfbe3c2afb04c68c8d76\"" Oct 8 19:48:29.661984 containerd[1465]: time="2024-10-08T19:48:29.661938026Z" level=info msg="CreateContainer within sandbox \"1291a9b762e421a686fabcc0c9787ba078bb69f9b4bf8c112dcf507aa3583c15\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 19:48:29.670641 containerd[1465]: time="2024-10-08T19:48:29.669692208Z" level=info msg="StartContainer for \"c9c36c39e03b34bff84b0909e2e25aa58dd5b58f51abcfbe3c2afb04c68c8d76\"" Oct 8 19:48:29.675326 systemd[1]: Started cri-containerd-45fdf572a4f3c84f8584a7c5193920ca01a6b26b8da5a0ecce9c6dc77d98a217.scope - libcontainer container 45fdf572a4f3c84f8584a7c5193920ca01a6b26b8da5a0ecce9c6dc77d98a217. Oct 8 19:48:29.707517 containerd[1465]: time="2024-10-08T19:48:29.706001926Z" level=info msg="CreateContainer within sandbox \"1291a9b762e421a686fabcc0c9787ba078bb69f9b4bf8c112dcf507aa3583c15\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f78fe7558e6ffde1e007ea971bcd56aa3744dda6ebec6a99e20b1ca8583b7dea\"" Oct 8 19:48:29.708012 containerd[1465]: time="2024-10-08T19:48:29.707981192Z" level=info msg="StartContainer for \"f78fe7558e6ffde1e007ea971bcd56aa3744dda6ebec6a99e20b1ca8583b7dea\"" Oct 8 19:48:29.711513 systemd[1]: Started cri-containerd-c9c36c39e03b34bff84b0909e2e25aa58dd5b58f51abcfbe3c2afb04c68c8d76.scope - libcontainer container c9c36c39e03b34bff84b0909e2e25aa58dd5b58f51abcfbe3c2afb04c68c8d76. Oct 8 19:48:29.739951 containerd[1465]: time="2024-10-08T19:48:29.739578848Z" level=info msg="StartContainer for \"45fdf572a4f3c84f8584a7c5193920ca01a6b26b8da5a0ecce9c6dc77d98a217\" returns successfully" Oct 8 19:48:29.752831 systemd[1]: Started cri-containerd-f78fe7558e6ffde1e007ea971bcd56aa3744dda6ebec6a99e20b1ca8583b7dea.scope - libcontainer container f78fe7558e6ffde1e007ea971bcd56aa3744dda6ebec6a99e20b1ca8583b7dea. 
Oct 8 19:48:29.755963 kubelet[2312]: E1008 19:48:29.755816 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.175.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-2-870ec424ae?timeout=10s\": dial tcp 188.245.175.188:6443: connect: connection refused" interval="1.6s" Oct 8 19:48:29.776910 kubelet[2312]: W1008 19:48:29.776660 2312 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.175.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.175.188:6443: connect: connection refused Oct 8 19:48:29.776910 kubelet[2312]: E1008 19:48:29.776746 2312 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://188.245.175.188:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.175.188:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:48:29.782941 containerd[1465]: time="2024-10-08T19:48:29.782484373Z" level=info msg="StartContainer for \"c9c36c39e03b34bff84b0909e2e25aa58dd5b58f51abcfbe3c2afb04c68c8d76\" returns successfully" Oct 8 19:48:29.833716 containerd[1465]: time="2024-10-08T19:48:29.833666127Z" level=info msg="StartContainer for \"f78fe7558e6ffde1e007ea971bcd56aa3744dda6ebec6a99e20b1ca8583b7dea\" returns successfully" Oct 8 19:48:29.949408 kubelet[2312]: I1008 19:48:29.949109 2312 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-1-0-2-870ec424ae" Oct 8 19:48:29.949699 kubelet[2312]: E1008 19:48:29.949673 2312 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://188.245.175.188:6443/api/v1/nodes\": dial tcp 188.245.175.188:6443: connect: connection refused" node="ci-4081-1-0-2-870ec424ae" Oct 8 19:48:31.553449 kubelet[2312]: I1008 19:48:31.552691 2312 kubelet_node_status.go:72] 
"Attempting to register node" node="ci-4081-1-0-2-870ec424ae" Oct 8 19:48:32.921804 kubelet[2312]: E1008 19:48:32.921742 2312 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-1-0-2-870ec424ae\" not found" node="ci-4081-1-0-2-870ec424ae" Oct 8 19:48:33.016208 kubelet[2312]: I1008 19:48:33.015934 2312 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-1-0-2-870ec424ae" Oct 8 19:48:33.016208 kubelet[2312]: E1008 19:48:33.015990 2312 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-1-0-2-870ec424ae\": node \"ci-4081-1-0-2-870ec424ae\" not found" Oct 8 19:48:33.059708 kubelet[2312]: E1008 19:48:33.059435 2312 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-1-0-2-870ec424ae.17fc9205455d93fc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-1-0-2-870ec424ae,UID:ci-4081-1-0-2-870ec424ae,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-2-870ec424ae,},FirstTimestamp:2024-10-08 19:48:28.339680252 +0000 UTC m=+0.908551247,LastTimestamp:2024-10-08 19:48:28.339680252 +0000 UTC m=+0.908551247,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-2-870ec424ae,}" Oct 8 19:48:33.114456 kubelet[2312]: E1008 19:48:33.114283 2312 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-1-0-2-870ec424ae.17fc920546503500 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-1-0-2-870ec424ae,UID:ci-4081-1-0-2-870ec424ae,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-2-870ec424ae,},FirstTimestamp:2024-10-08 19:48:28.355581184 +0000 UTC m=+0.924452179,LastTimestamp:2024-10-08 19:48:28.355581184 +0000 UTC m=+0.924452179,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-2-870ec424ae,}" Oct 8 19:48:33.169939 kubelet[2312]: E1008 19:48:33.169563 2312 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-1-0-2-870ec424ae.17fc9205487c1b56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-1-0-2-870ec424ae,UID:ci-4081-1-0-2-870ec424ae,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081-1-0-2-870ec424ae status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-2-870ec424ae,},FirstTimestamp:2024-10-08 19:48:28.39201263 +0000 UTC m=+0.960883665,LastTimestamp:2024-10-08 19:48:28.39201263 +0000 UTC m=+0.960883665,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-2-870ec424ae,}" Oct 8 19:48:33.334114 kubelet[2312]: I1008 19:48:33.333964 2312 apiserver.go:52] "Watching apiserver" Oct 8 19:48:33.351582 kubelet[2312]: I1008 19:48:33.351413 2312 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 8 19:48:34.900112 systemd[1]: Reloading requested from client PID 2585 ('systemctl') (unit session-7.scope)... Oct 8 19:48:34.900132 systemd[1]: Reloading... 
Oct 8 19:48:34.997552 zram_generator::config[2624]: No configuration found. Oct 8 19:48:35.117354 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:48:35.202877 systemd[1]: Reloading finished in 302 ms. Oct 8 19:48:35.241984 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:48:35.257229 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 19:48:35.258583 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:48:35.258801 systemd[1]: kubelet.service: Consumed 1.357s CPU time, 115.8M memory peak, 0B memory swap peak. Oct 8 19:48:35.265714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:48:35.388305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:48:35.388722 (kubelet)[2668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:48:35.439086 kubelet[2668]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:48:35.439086 kubelet[2668]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:48:35.439086 kubelet[2668]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 8 19:48:35.440603 kubelet[2668]: I1008 19:48:35.439619 2668 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:48:35.451560 kubelet[2668]: I1008 19:48:35.450963 2668 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 8 19:48:35.451560 kubelet[2668]: I1008 19:48:35.450996 2668 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:48:35.451560 kubelet[2668]: I1008 19:48:35.451257 2668 server.go:929] "Client rotation is on, will bootstrap in background" Oct 8 19:48:35.455572 kubelet[2668]: I1008 19:48:35.454485 2668 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 8 19:48:35.457197 kubelet[2668]: I1008 19:48:35.457012 2668 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:48:35.464463 kubelet[2668]: E1008 19:48:35.464412 2668 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 8 19:48:35.464463 kubelet[2668]: I1008 19:48:35.464457 2668 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 8 19:48:35.468084 kubelet[2668]: I1008 19:48:35.467628 2668 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 19:48:35.468084 kubelet[2668]: I1008 19:48:35.467794 2668 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 8 19:48:35.468084 kubelet[2668]: I1008 19:48:35.467932 2668 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:48:35.471111 kubelet[2668]: I1008 19:48:35.467960 2668 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-1-0-2-870ec424ae","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMa
nagerPolicyOptions":null,"CgroupVersion":2} Oct 8 19:48:35.471111 kubelet[2668]: I1008 19:48:35.468370 2668 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:48:35.471111 kubelet[2668]: I1008 19:48:35.468404 2668 container_manager_linux.go:300] "Creating device plugin manager" Oct 8 19:48:35.471111 kubelet[2668]: I1008 19:48:35.468487 2668 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:48:35.471111 kubelet[2668]: I1008 19:48:35.469112 2668 kubelet.go:408] "Attempting to sync node with API server" Oct 8 19:48:35.471368 kubelet[2668]: I1008 19:48:35.469141 2668 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:48:35.471368 kubelet[2668]: I1008 19:48:35.469200 2668 kubelet.go:314] "Adding apiserver pod source" Oct 8 19:48:35.471368 kubelet[2668]: I1008 19:48:35.469214 2668 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:48:35.475443 kubelet[2668]: I1008 19:48:35.474114 2668 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 19:48:35.475443 kubelet[2668]: I1008 19:48:35.474909 2668 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:48:35.478864 kubelet[2668]: I1008 19:48:35.478817 2668 server.go:1269] "Started kubelet" Oct 8 19:48:35.490492 kubelet[2668]: I1008 19:48:35.487711 2668 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:48:35.496496 kubelet[2668]: I1008 19:48:35.496383 2668 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 8 19:48:35.498919 kubelet[2668]: I1008 19:48:35.498889 2668 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 8 19:48:35.499142 kubelet[2668]: E1008 19:48:35.499118 2668 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"ci-4081-1-0-2-870ec424ae\" not found" Oct 8 19:48:35.502751 kubelet[2668]: I1008 19:48:35.502726 2668 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 8 19:48:35.502993 kubelet[2668]: I1008 19:48:35.502865 2668 reconciler.go:26] "Reconciler: start to sync state" Oct 8 19:48:35.506460 kubelet[2668]: I1008 19:48:35.504952 2668 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:48:35.506460 kubelet[2668]: I1008 19:48:35.506393 2668 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 8 19:48:35.506460 kubelet[2668]: I1008 19:48:35.506463 2668 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:48:35.506609 kubelet[2668]: I1008 19:48:35.506481 2668 kubelet.go:2321] "Starting kubelet main sync loop" Oct 8 19:48:35.506609 kubelet[2668]: E1008 19:48:35.506534 2668 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:48:35.509647 kubelet[2668]: I1008 19:48:35.486445 2668 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:48:35.513467 kubelet[2668]: I1008 19:48:35.512087 2668 server.go:460] "Adding debug handlers to kubelet server" Oct 8 19:48:35.517460 kubelet[2668]: I1008 19:48:35.516221 2668 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:48:35.519450 kubelet[2668]: I1008 19:48:35.517939 2668 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:48:35.519966 kubelet[2668]: I1008 19:48:35.519942 2668 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:48:35.520272 kubelet[2668]: I1008 19:48:35.520122 2668 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:48:35.526923 kubelet[2668]: I1008 19:48:35.526873 2668 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:48:35.579113 kubelet[2668]: I1008 19:48:35.578707 2668 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:48:35.579113 kubelet[2668]: I1008 19:48:35.578726 2668 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:48:35.579113 kubelet[2668]: I1008 19:48:35.578749 2668 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:48:35.579113 kubelet[2668]: I1008 19:48:35.578987 2668 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 19:48:35.579113 kubelet[2668]: I1008 19:48:35.579004 2668 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 19:48:35.579113 kubelet[2668]: I1008 19:48:35.579023 2668 policy_none.go:49] "None policy: Start" Oct 8 19:48:35.580041 kubelet[2668]: I1008 19:48:35.579985 2668 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:48:35.580041 kubelet[2668]: I1008 19:48:35.580045 2668 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:48:35.580409 kubelet[2668]: I1008 19:48:35.580377 2668 state_mem.go:75] "Updated machine memory state" Oct 8 19:48:35.585207 kubelet[2668]: I1008 19:48:35.585181 2668 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:48:35.586336 kubelet[2668]: I1008 19:48:35.585748 2668 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 8 19:48:35.586336 kubelet[2668]: I1008 19:48:35.585762 2668 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 19:48:35.586336 kubelet[2668]: I1008 19:48:35.586024 2668 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:48:35.689607 kubelet[2668]: I1008 19:48:35.689571 2668 kubelet_node_status.go:72] "Attempting to register node" 
node="ci-4081-1-0-2-870ec424ae" Oct 8 19:48:35.699182 kubelet[2668]: I1008 19:48:35.699145 2668 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-1-0-2-870ec424ae" Oct 8 19:48:35.699679 kubelet[2668]: I1008 19:48:35.699247 2668 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-1-0-2-870ec424ae" Oct 8 19:48:35.805526 kubelet[2668]: I1008 19:48:35.805114 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6c09c8e6c662f23c4739e5c827373ce8-kubeconfig\") pod \"kube-controller-manager-ci-4081-1-0-2-870ec424ae\" (UID: \"6c09c8e6c662f23c4739e5c827373ce8\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-2-870ec424ae" Oct 8 19:48:35.805526 kubelet[2668]: I1008 19:48:35.805185 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c09c8e6c662f23c4739e5c827373ce8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-1-0-2-870ec424ae\" (UID: \"6c09c8e6c662f23c4739e5c827373ce8\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-2-870ec424ae" Oct 8 19:48:35.805526 kubelet[2668]: I1008 19:48:35.805220 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3130daf7afb5fac66a5f1b57584a121-kubeconfig\") pod \"kube-scheduler-ci-4081-1-0-2-870ec424ae\" (UID: \"a3130daf7afb5fac66a5f1b57584a121\") " pod="kube-system/kube-scheduler-ci-4081-1-0-2-870ec424ae" Oct 8 19:48:35.805526 kubelet[2668]: I1008 19:48:35.805244 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c27f2a069e417c76bd5c270eab222d6e-k8s-certs\") pod \"kube-apiserver-ci-4081-1-0-2-870ec424ae\" (UID: \"c27f2a069e417c76bd5c270eab222d6e\") " 
pod="kube-system/kube-apiserver-ci-4081-1-0-2-870ec424ae" Oct 8 19:48:35.805526 kubelet[2668]: I1008 19:48:35.805268 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c09c8e6c662f23c4739e5c827373ce8-ca-certs\") pod \"kube-controller-manager-ci-4081-1-0-2-870ec424ae\" (UID: \"6c09c8e6c662f23c4739e5c827373ce8\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-2-870ec424ae" Oct 8 19:48:35.805843 kubelet[2668]: I1008 19:48:35.805296 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6c09c8e6c662f23c4739e5c827373ce8-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-1-0-2-870ec424ae\" (UID: \"6c09c8e6c662f23c4739e5c827373ce8\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-2-870ec424ae" Oct 8 19:48:35.805843 kubelet[2668]: I1008 19:48:35.805316 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c09c8e6c662f23c4739e5c827373ce8-k8s-certs\") pod \"kube-controller-manager-ci-4081-1-0-2-870ec424ae\" (UID: \"6c09c8e6c662f23c4739e5c827373ce8\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-2-870ec424ae" Oct 8 19:48:35.805843 kubelet[2668]: I1008 19:48:35.805338 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c27f2a069e417c76bd5c270eab222d6e-ca-certs\") pod \"kube-apiserver-ci-4081-1-0-2-870ec424ae\" (UID: \"c27f2a069e417c76bd5c270eab222d6e\") " pod="kube-system/kube-apiserver-ci-4081-1-0-2-870ec424ae" Oct 8 19:48:35.805843 kubelet[2668]: I1008 19:48:35.805360 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/c27f2a069e417c76bd5c270eab222d6e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-1-0-2-870ec424ae\" (UID: \"c27f2a069e417c76bd5c270eab222d6e\") " pod="kube-system/kube-apiserver-ci-4081-1-0-2-870ec424ae" Oct 8 19:48:36.471263 kubelet[2668]: I1008 19:48:36.471215 2668 apiserver.go:52] "Watching apiserver" Oct 8 19:48:36.503218 kubelet[2668]: I1008 19:48:36.503180 2668 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 8 19:48:36.588549 kubelet[2668]: E1008 19:48:36.586618 2668 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-1-0-2-870ec424ae\" already exists" pod="kube-system/kube-controller-manager-ci-4081-1-0-2-870ec424ae" Oct 8 19:48:36.588549 kubelet[2668]: E1008 19:48:36.586848 2668 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-1-0-2-870ec424ae\" already exists" pod="kube-system/kube-apiserver-ci-4081-1-0-2-870ec424ae" Oct 8 19:48:36.591481 kubelet[2668]: I1008 19:48:36.591416 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-1-0-2-870ec424ae" podStartSLOduration=1.591398892 podStartE2EDuration="1.591398892s" podCreationTimestamp="2024-10-08 19:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:48:36.591027247 +0000 UTC m=+1.195682391" watchObservedRunningTime="2024-10-08 19:48:36.591398892 +0000 UTC m=+1.196053916" Oct 8 19:48:36.638312 kubelet[2668]: I1008 19:48:36.637951 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-1-0-2-870ec424ae" podStartSLOduration=1.63791705 podStartE2EDuration="1.63791705s" podCreationTimestamp="2024-10-08 19:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-10-08 19:48:36.613196993 +0000 UTC m=+1.217852017" watchObservedRunningTime="2024-10-08 19:48:36.63791705 +0000 UTC m=+1.242572034" Oct 8 19:48:36.638312 kubelet[2668]: I1008 19:48:36.638065 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-1-0-2-870ec424ae" podStartSLOduration=1.6380600109999999 podStartE2EDuration="1.638060011s" podCreationTimestamp="2024-10-08 19:48:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:48:36.637723167 +0000 UTC m=+1.242378231" watchObservedRunningTime="2024-10-08 19:48:36.638060011 +0000 UTC m=+1.242715035" Oct 8 19:48:40.775406 sudo[1841]: pam_unix(sudo:session): session closed for user root Oct 8 19:48:40.940046 sshd[1838]: pam_unix(sshd:session): session closed for user core Oct 8 19:48:40.944147 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Oct 8 19:48:40.944343 systemd[1]: sshd@6-188.245.175.188:22-139.178.89.65:45972.service: Deactivated successfully. Oct 8 19:48:40.947715 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 19:48:40.947909 systemd[1]: session-7.scope: Consumed 7.519s CPU time, 150.8M memory peak, 0B memory swap peak. Oct 8 19:48:40.949571 systemd-logind[1445]: Removed session 7. Oct 8 19:48:41.347778 kubelet[2668]: I1008 19:48:41.347520 2668 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 19:48:41.348197 containerd[1465]: time="2024-10-08T19:48:41.348091863Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 8 19:48:41.349820 kubelet[2668]: I1008 19:48:41.348672 2668 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 19:48:42.282911 systemd[1]: Created slice kubepods-besteffort-pod5bcfe23a_789b_4f8a_9adc_b1a9fc6c3ecc.slice - libcontainer container kubepods-besteffort-pod5bcfe23a_789b_4f8a_9adc_b1a9fc6c3ecc.slice. Oct 8 19:48:42.342228 kubelet[2668]: I1008 19:48:42.342164 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5bcfe23a-789b-4f8a-9adc-b1a9fc6c3ecc-xtables-lock\") pod \"kube-proxy-xc4p5\" (UID: \"5bcfe23a-789b-4f8a-9adc-b1a9fc6c3ecc\") " pod="kube-system/kube-proxy-xc4p5" Oct 8 19:48:42.342228 kubelet[2668]: I1008 19:48:42.342234 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5bcfe23a-789b-4f8a-9adc-b1a9fc6c3ecc-kube-proxy\") pod \"kube-proxy-xc4p5\" (UID: \"5bcfe23a-789b-4f8a-9adc-b1a9fc6c3ecc\") " pod="kube-system/kube-proxy-xc4p5" Oct 8 19:48:42.342479 kubelet[2668]: I1008 19:48:42.342262 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5bcfe23a-789b-4f8a-9adc-b1a9fc6c3ecc-lib-modules\") pod \"kube-proxy-xc4p5\" (UID: \"5bcfe23a-789b-4f8a-9adc-b1a9fc6c3ecc\") " pod="kube-system/kube-proxy-xc4p5" Oct 8 19:48:42.342479 kubelet[2668]: I1008 19:48:42.342302 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swjmn\" (UniqueName: \"kubernetes.io/projected/5bcfe23a-789b-4f8a-9adc-b1a9fc6c3ecc-kube-api-access-swjmn\") pod \"kube-proxy-xc4p5\" (UID: \"5bcfe23a-789b-4f8a-9adc-b1a9fc6c3ecc\") " pod="kube-system/kube-proxy-xc4p5" Oct 8 19:48:42.517404 systemd[1]: Created slice kubepods-besteffort-poda80400ae_a8cc_4158_9ad2_ae733fb9f79f.slice - 
libcontainer container kubepods-besteffort-poda80400ae_a8cc_4158_9ad2_ae733fb9f79f.slice. Oct 8 19:48:42.543238 kubelet[2668]: I1008 19:48:42.542966 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a80400ae-a8cc-4158-9ad2-ae733fb9f79f-var-lib-calico\") pod \"tigera-operator-55748b469f-dcvjs\" (UID: \"a80400ae-a8cc-4158-9ad2-ae733fb9f79f\") " pod="tigera-operator/tigera-operator-55748b469f-dcvjs" Oct 8 19:48:42.543238 kubelet[2668]: I1008 19:48:42.543022 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nglsr\" (UniqueName: \"kubernetes.io/projected/a80400ae-a8cc-4158-9ad2-ae733fb9f79f-kube-api-access-nglsr\") pod \"tigera-operator-55748b469f-dcvjs\" (UID: \"a80400ae-a8cc-4158-9ad2-ae733fb9f79f\") " pod="tigera-operator/tigera-operator-55748b469f-dcvjs" Oct 8 19:48:42.593966 containerd[1465]: time="2024-10-08T19:48:42.593853185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xc4p5,Uid:5bcfe23a-789b-4f8a-9adc-b1a9fc6c3ecc,Namespace:kube-system,Attempt:0,}" Oct 8 19:48:42.624099 containerd[1465]: time="2024-10-08T19:48:42.624016322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:48:42.624478 containerd[1465]: time="2024-10-08T19:48:42.624256084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:48:42.624478 containerd[1465]: time="2024-10-08T19:48:42.624302125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:42.624591 containerd[1465]: time="2024-10-08T19:48:42.624416726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:42.646620 systemd[1]: Started cri-containerd-a23ff23c9069977ab331bc3f8c4acc99c355a6cecf132330da2f78b3c64a624b.scope - libcontainer container a23ff23c9069977ab331bc3f8c4acc99c355a6cecf132330da2f78b3c64a624b. Oct 8 19:48:42.676508 containerd[1465]: time="2024-10-08T19:48:42.676464028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xc4p5,Uid:5bcfe23a-789b-4f8a-9adc-b1a9fc6c3ecc,Namespace:kube-system,Attempt:0,} returns sandbox id \"a23ff23c9069977ab331bc3f8c4acc99c355a6cecf132330da2f78b3c64a624b\"" Oct 8 19:48:42.680216 containerd[1465]: time="2024-10-08T19:48:42.680162749Z" level=info msg="CreateContainer within sandbox \"a23ff23c9069977ab331bc3f8c4acc99c355a6cecf132330da2f78b3c64a624b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 19:48:42.698468 containerd[1465]: time="2024-10-08T19:48:42.698383353Z" level=info msg="CreateContainer within sandbox \"a23ff23c9069977ab331bc3f8c4acc99c355a6cecf132330da2f78b3c64a624b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c86e7204a7b97a850e7d9efb66ce0d25e4e58dcd2c08ee44c01c37915eb193e7\"" Oct 8 19:48:42.703894 containerd[1465]: time="2024-10-08T19:48:42.701954872Z" level=info msg="StartContainer for \"c86e7204a7b97a850e7d9efb66ce0d25e4e58dcd2c08ee44c01c37915eb193e7\"" Oct 8 19:48:42.731633 systemd[1]: Started cri-containerd-c86e7204a7b97a850e7d9efb66ce0d25e4e58dcd2c08ee44c01c37915eb193e7.scope - libcontainer container c86e7204a7b97a850e7d9efb66ce0d25e4e58dcd2c08ee44c01c37915eb193e7. 
Oct 8 19:48:42.761987 containerd[1465]: time="2024-10-08T19:48:42.761943703Z" level=info msg="StartContainer for \"c86e7204a7b97a850e7d9efb66ce0d25e4e58dcd2c08ee44c01c37915eb193e7\" returns successfully" Oct 8 19:48:42.823563 containerd[1465]: time="2024-10-08T19:48:42.823378149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-dcvjs,Uid:a80400ae-a8cc-4158-9ad2-ae733fb9f79f,Namespace:tigera-operator,Attempt:0,}" Oct 8 19:48:42.866087 containerd[1465]: time="2024-10-08T19:48:42.865624221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:48:42.866087 containerd[1465]: time="2024-10-08T19:48:42.865717262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:48:42.866087 containerd[1465]: time="2024-10-08T19:48:42.865742743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:42.866087 containerd[1465]: time="2024-10-08T19:48:42.865874664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:42.896844 systemd[1]: Started cri-containerd-d5858d2da00f297d7dad5135b91cdb2f0cfcf8fc5c24b7f075f3c36d215fe45e.scope - libcontainer container d5858d2da00f297d7dad5135b91cdb2f0cfcf8fc5c24b7f075f3c36d215fe45e. 
Oct 8 19:48:42.939736 containerd[1465]: time="2024-10-08T19:48:42.939695009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-dcvjs,Uid:a80400ae-a8cc-4158-9ad2-ae733fb9f79f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d5858d2da00f297d7dad5135b91cdb2f0cfcf8fc5c24b7f075f3c36d215fe45e\""
Oct 8 19:48:42.945463 containerd[1465]: time="2024-10-08T19:48:42.945403233Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Oct 8 19:48:44.436377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3427269861.mount: Deactivated successfully.
Oct 8 19:48:44.768481 containerd[1465]: time="2024-10-08T19:48:44.767963411Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:48:44.769528 containerd[1465]: time="2024-10-08T19:48:44.769489028Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485927"
Oct 8 19:48:44.770680 containerd[1465]: time="2024-10-08T19:48:44.770648280Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:48:44.774199 containerd[1465]: time="2024-10-08T19:48:44.774134319Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:48:44.775210 containerd[1465]: time="2024-10-08T19:48:44.774948567Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 1.829356612s"
Oct 8 19:48:44.775210 containerd[1465]: time="2024-10-08T19:48:44.774990968Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\""
Oct 8 19:48:44.779123 containerd[1465]: time="2024-10-08T19:48:44.778839730Z" level=info msg="CreateContainer within sandbox \"d5858d2da00f297d7dad5135b91cdb2f0cfcf8fc5c24b7f075f3c36d215fe45e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Oct 8 19:48:44.796142 containerd[1465]: time="2024-10-08T19:48:44.796001118Z" level=info msg="CreateContainer within sandbox \"d5858d2da00f297d7dad5135b91cdb2f0cfcf8fc5c24b7f075f3c36d215fe45e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1a02bb5fca37b2ae4730abdbc259ed3997e9234932996a0451f94d2bdc75aa73\""
Oct 8 19:48:44.796991 containerd[1465]: time="2024-10-08T19:48:44.796931568Z" level=info msg="StartContainer for \"1a02bb5fca37b2ae4730abdbc259ed3997e9234932996a0451f94d2bdc75aa73\""
Oct 8 19:48:44.823699 systemd[1]: run-containerd-runc-k8s.io-1a02bb5fca37b2ae4730abdbc259ed3997e9234932996a0451f94d2bdc75aa73-runc.wxPagL.mount: Deactivated successfully.
Oct 8 19:48:44.832653 systemd[1]: Started cri-containerd-1a02bb5fca37b2ae4730abdbc259ed3997e9234932996a0451f94d2bdc75aa73.scope - libcontainer container 1a02bb5fca37b2ae4730abdbc259ed3997e9234932996a0451f94d2bdc75aa73.
Oct 8 19:48:44.867940 containerd[1465]: time="2024-10-08T19:48:44.866935533Z" level=info msg="StartContainer for \"1a02bb5fca37b2ae4730abdbc259ed3997e9234932996a0451f94d2bdc75aa73\" returns successfully"
Oct 8 19:48:45.598268 kubelet[2668]: I1008 19:48:45.598203 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xc4p5" podStartSLOduration=3.5981837800000003 podStartE2EDuration="3.59818378s" podCreationTimestamp="2024-10-08 19:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:48:43.588225904 +0000 UTC m=+8.192880968" watchObservedRunningTime="2024-10-08 19:48:45.59818378 +0000 UTC m=+10.202838844"
Oct 8 19:48:45.598833 kubelet[2668]: I1008 19:48:45.598331 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-55748b469f-dcvjs" podStartSLOduration=1.766153418 podStartE2EDuration="3.598325942s" podCreationTimestamp="2024-10-08 19:48:42 +0000 UTC" firstStartedPulling="2024-10-08 19:48:42.944709865 +0000 UTC m=+7.549364889" lastFinishedPulling="2024-10-08 19:48:44.776882389 +0000 UTC m=+9.381537413" observedRunningTime="2024-10-08 19:48:45.5981351 +0000 UTC m=+10.202790084" watchObservedRunningTime="2024-10-08 19:48:45.598325942 +0000 UTC m=+10.202980966"
Oct 8 19:48:49.110789 systemd[1]: Created slice kubepods-besteffort-podb6bbe0f8_890c_4b3b_a726_e81ddeb6458f.slice - libcontainer container kubepods-besteffort-podb6bbe0f8_890c_4b3b_a726_e81ddeb6458f.slice.
Oct 8 19:48:49.185871 kubelet[2668]: I1008 19:48:49.185734 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6bbe0f8-890c-4b3b-a726-e81ddeb6458f-tigera-ca-bundle\") pod \"calico-typha-645f586479-75x28\" (UID: \"b6bbe0f8-890c-4b3b-a726-e81ddeb6458f\") " pod="calico-system/calico-typha-645f586479-75x28"
Oct 8 19:48:49.185871 kubelet[2668]: I1008 19:48:49.185786 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b6bbe0f8-890c-4b3b-a726-e81ddeb6458f-typha-certs\") pod \"calico-typha-645f586479-75x28\" (UID: \"b6bbe0f8-890c-4b3b-a726-e81ddeb6458f\") " pod="calico-system/calico-typha-645f586479-75x28"
Oct 8 19:48:49.185871 kubelet[2668]: I1008 19:48:49.185806 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72cnv\" (UniqueName: \"kubernetes.io/projected/b6bbe0f8-890c-4b3b-a726-e81ddeb6458f-kube-api-access-72cnv\") pod \"calico-typha-645f586479-75x28\" (UID: \"b6bbe0f8-890c-4b3b-a726-e81ddeb6458f\") " pod="calico-system/calico-typha-645f586479-75x28"
Oct 8 19:48:49.208480 systemd[1]: Created slice kubepods-besteffort-pod1e19a671_2c1a_49eb_997d_fb6b4d6a409d.slice - libcontainer container kubepods-besteffort-pod1e19a671_2c1a_49eb_997d_fb6b4d6a409d.slice.
Oct 8 19:48:49.287505 kubelet[2668]: I1008 19:48:49.286248 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-lib-modules\") pod \"calico-node-47bp9\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " pod="calico-system/calico-node-47bp9"
Oct 8 19:48:49.287505 kubelet[2668]: I1008 19:48:49.286320 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-cni-net-dir\") pod \"calico-node-47bp9\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " pod="calico-system/calico-node-47bp9"
Oct 8 19:48:49.287505 kubelet[2668]: I1008 19:48:49.286357 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-flexvol-driver-host\") pod \"calico-node-47bp9\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " pod="calico-system/calico-node-47bp9"
Oct 8 19:48:49.287505 kubelet[2668]: I1008 19:48:49.286396 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-xtables-lock\") pod \"calico-node-47bp9\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " pod="calico-system/calico-node-47bp9"
Oct 8 19:48:49.287505 kubelet[2668]: I1008 19:48:49.286450 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-var-run-calico\") pod \"calico-node-47bp9\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " pod="calico-system/calico-node-47bp9"
Oct 8 19:48:49.287951 kubelet[2668]: I1008 19:48:49.286487 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-tigera-ca-bundle\") pod \"calico-node-47bp9\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " pod="calico-system/calico-node-47bp9"
Oct 8 19:48:49.287951 kubelet[2668]: I1008 19:48:49.286524 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-var-lib-calico\") pod \"calico-node-47bp9\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " pod="calico-system/calico-node-47bp9"
Oct 8 19:48:49.287951 kubelet[2668]: I1008 19:48:49.286610 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-cni-bin-dir\") pod \"calico-node-47bp9\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " pod="calico-system/calico-node-47bp9"
Oct 8 19:48:49.287951 kubelet[2668]: I1008 19:48:49.286646 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g89bl\" (UniqueName: \"kubernetes.io/projected/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-kube-api-access-g89bl\") pod \"calico-node-47bp9\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " pod="calico-system/calico-node-47bp9"
Oct 8 19:48:49.287951 kubelet[2668]: I1008 19:48:49.286681 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-policysync\") pod \"calico-node-47bp9\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " pod="calico-system/calico-node-47bp9"
Oct 8 19:48:49.288116 kubelet[2668]: I1008 19:48:49.286715 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-node-certs\") pod \"calico-node-47bp9\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " pod="calico-system/calico-node-47bp9"
Oct 8 19:48:49.288116 kubelet[2668]: I1008 19:48:49.286747 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-cni-log-dir\") pod \"calico-node-47bp9\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " pod="calico-system/calico-node-47bp9"
Oct 8 19:48:49.331448 kubelet[2668]: E1008 19:48:49.331381 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4tsj5" podUID="9faa311e-d6bb-4ee4-9110-b3120539788f"
Oct 8 19:48:49.387373 kubelet[2668]: I1008 19:48:49.387269 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9faa311e-d6bb-4ee4-9110-b3120539788f-socket-dir\") pod \"csi-node-driver-4tsj5\" (UID: \"9faa311e-d6bb-4ee4-9110-b3120539788f\") " pod="calico-system/csi-node-driver-4tsj5"
Oct 8 19:48:49.388972 kubelet[2668]: I1008 19:48:49.388845 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9faa311e-d6bb-4ee4-9110-b3120539788f-kubelet-dir\") pod \"csi-node-driver-4tsj5\" (UID: \"9faa311e-d6bb-4ee4-9110-b3120539788f\") " pod="calico-system/csi-node-driver-4tsj5"
Oct 8 19:48:49.390763 kubelet[2668]: I1008 19:48:49.390531 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9faa311e-d6bb-4ee4-9110-b3120539788f-registration-dir\") pod \"csi-node-driver-4tsj5\" (UID: \"9faa311e-d6bb-4ee4-9110-b3120539788f\") " pod="calico-system/csi-node-driver-4tsj5"
Oct 8 19:48:49.390763 kubelet[2668]: I1008 19:48:49.390586 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9faa311e-d6bb-4ee4-9110-b3120539788f-varrun\") pod \"csi-node-driver-4tsj5\" (UID: \"9faa311e-d6bb-4ee4-9110-b3120539788f\") " pod="calico-system/csi-node-driver-4tsj5"
Oct 8 19:48:49.390763 kubelet[2668]: I1008 19:48:49.390603 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkwcx\" (UniqueName: \"kubernetes.io/projected/9faa311e-d6bb-4ee4-9110-b3120539788f-kube-api-access-zkwcx\") pod \"csi-node-driver-4tsj5\" (UID: \"9faa311e-d6bb-4ee4-9110-b3120539788f\") " pod="calico-system/csi-node-driver-4tsj5"
Oct 8 19:48:49.392582 kubelet[2668]: E1008 19:48:49.392504 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.392582 kubelet[2668]: W1008 19:48:49.392539 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.392582 kubelet[2668]: E1008 19:48:49.392556 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.397073 kubelet[2668]: E1008 19:48:49.396999 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.397073 kubelet[2668]: W1008 19:48:49.397026 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.397073 kubelet[2668]: E1008 19:48:49.397042 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.415264 containerd[1465]: time="2024-10-08T19:48:49.415210771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-645f586479-75x28,Uid:b6bbe0f8-890c-4b3b-a726-e81ddeb6458f,Namespace:calico-system,Attempt:0,}"
Oct 8 19:48:49.427181 kubelet[2668]: E1008 19:48:49.426898 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.427181 kubelet[2668]: W1008 19:48:49.426931 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.427181 kubelet[2668]: E1008 19:48:49.426951 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.451163 containerd[1465]: time="2024-10-08T19:48:49.451059784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:48:49.451300 containerd[1465]: time="2024-10-08T19:48:49.451190425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:48:49.451300 containerd[1465]: time="2024-10-08T19:48:49.451222746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:48:49.452034 containerd[1465]: time="2024-10-08T19:48:49.451985794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:48:49.470729 systemd[1]: Started cri-containerd-d324e005fc1d133a767234c1fe741eda2f96bd2fd254d4de90563717d6d87633.scope - libcontainer container d324e005fc1d133a767234c1fe741eda2f96bd2fd254d4de90563717d6d87633.
Oct 8 19:48:49.492034 kubelet[2668]: E1008 19:48:49.491714 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.492816 kubelet[2668]: W1008 19:48:49.492243 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.492816 kubelet[2668]: E1008 19:48:49.492274 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.494409 kubelet[2668]: E1008 19:48:49.494181 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.494409 kubelet[2668]: W1008 19:48:49.494200 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.494409 kubelet[2668]: E1008 19:48:49.494230 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.495014 kubelet[2668]: E1008 19:48:49.494814 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.495014 kubelet[2668]: W1008 19:48:49.494828 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.495014 kubelet[2668]: E1008 19:48:49.494848 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.496936 kubelet[2668]: E1008 19:48:49.496808 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.496936 kubelet[2668]: W1008 19:48:49.496826 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.496936 kubelet[2668]: E1008 19:48:49.496886 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.497610 kubelet[2668]: E1008 19:48:49.497466 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.497610 kubelet[2668]: W1008 19:48:49.497478 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.497610 kubelet[2668]: E1008 19:48:49.497492 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.498401 kubelet[2668]: E1008 19:48:49.498385 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.500952 kubelet[2668]: W1008 19:48:49.500778 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.500952 kubelet[2668]: E1008 19:48:49.500802 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.501155 kubelet[2668]: E1008 19:48:49.501116 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.501282 kubelet[2668]: W1008 19:48:49.501129 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.501376 kubelet[2668]: E1008 19:48:49.501333 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.501671 kubelet[2668]: E1008 19:48:49.501645 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.501671 kubelet[2668]: W1008 19:48:49.501657 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.501893 kubelet[2668]: E1008 19:48:49.501829 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.502027 kubelet[2668]: E1008 19:48:49.502016 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.502123 kubelet[2668]: W1008 19:48:49.502076 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.502289 kubelet[2668]: E1008 19:48:49.502260 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.503502 kubelet[2668]: E1008 19:48:49.503468 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.503502 kubelet[2668]: W1008 19:48:49.503484 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.504062 kubelet[2668]: E1008 19:48:49.503941 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.505158 kubelet[2668]: E1008 19:48:49.504387 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.505158 kubelet[2668]: W1008 19:48:49.504399 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.505421 kubelet[2668]: E1008 19:48:49.505313 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.505611 kubelet[2668]: E1008 19:48:49.505591 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.505776 kubelet[2668]: W1008 19:48:49.505681 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.505816 kubelet[2668]: E1008 19:48:49.505797 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.507472 kubelet[2668]: E1008 19:48:49.506819 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.507472 kubelet[2668]: W1008 19:48:49.506865 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.507472 kubelet[2668]: E1008 19:48:49.507054 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.507472 kubelet[2668]: W1008 19:48:49.507063 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.507472 kubelet[2668]: E1008 19:48:49.507207 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.507472 kubelet[2668]: W1008 19:48:49.507215 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.507472 kubelet[2668]: E1008 19:48:49.507331 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.507472 kubelet[2668]: W1008 19:48:49.507358 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.507472 kubelet[2668]: E1008 19:48:49.507371 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.508635 kubelet[2668]: E1008 19:48:49.507591 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.508635 kubelet[2668]: W1008 19:48:49.507700 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.508635 kubelet[2668]: E1008 19:48:49.507716 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.508635 kubelet[2668]: E1008 19:48:49.508233 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.508635 kubelet[2668]: W1008 19:48:49.508460 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.508635 kubelet[2668]: E1008 19:48:49.508481 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.508773 kubelet[2668]: E1008 19:48:49.508718 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.509793 kubelet[2668]: E1008 19:48:49.509292 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.509793 kubelet[2668]: W1008 19:48:49.509315 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.509793 kubelet[2668]: E1008 19:48:49.509331 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.512339 kubelet[2668]: E1008 19:48:49.509924 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.512339 kubelet[2668]: W1008 19:48:49.509946 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.512339 kubelet[2668]: E1008 19:48:49.509962 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.512339 kubelet[2668]: E1008 19:48:49.509993 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.512999 kubelet[2668]: E1008 19:48:49.512970 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.512999 kubelet[2668]: W1008 19:48:49.512998 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.513078 kubelet[2668]: E1008 19:48:49.513021 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.514362 kubelet[2668]: E1008 19:48:49.513859 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.514362 kubelet[2668]: W1008 19:48:49.513878 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.514362 kubelet[2668]: E1008 19:48:49.513892 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.514362 kubelet[2668]: E1008 19:48:49.514223 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.517587 kubelet[2668]: E1008 19:48:49.517554 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.517587 kubelet[2668]: W1008 19:48:49.517580 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.517703 kubelet[2668]: E1008 19:48:49.517600 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.520545 kubelet[2668]: E1008 19:48:49.520253 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.520545 kubelet[2668]: W1008 19:48:49.520272 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.520545 kubelet[2668]: E1008 19:48:49.520288 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.521455 kubelet[2668]: E1008 19:48:49.521004 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.521455 kubelet[2668]: W1008 19:48:49.521017 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.521455 kubelet[2668]: E1008 19:48:49.521029 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.522305 containerd[1465]: time="2024-10-08T19:48:49.522250844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-47bp9,Uid:1e19a671-2c1a-49eb-997d-fb6b4d6a409d,Namespace:calico-system,Attempt:0,}"
Oct 8 19:48:49.526054 containerd[1465]: time="2024-10-08T19:48:49.526015763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-645f586479-75x28,Uid:b6bbe0f8-890c-4b3b-a726-e81ddeb6458f,Namespace:calico-system,Attempt:0,} returns sandbox id \"d324e005fc1d133a767234c1fe741eda2f96bd2fd254d4de90563717d6d87633\""
Oct 8 19:48:49.527920 kubelet[2668]: E1008 19:48:49.527889 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:49.528041 kubelet[2668]: W1008 19:48:49.528026 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:49.529109 kubelet[2668]: E1008 19:48:49.528116 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:49.529442 containerd[1465]: time="2024-10-08T19:48:49.529387518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\""
Oct 8 19:48:49.550998 containerd[1465]: time="2024-10-08T19:48:49.550892102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:48:49.552469 containerd[1465]: time="2024-10-08T19:48:49.550956663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:48:49.552730 containerd[1465]: time="2024-10-08T19:48:49.552494839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:48:49.553280 containerd[1465]: time="2024-10-08T19:48:49.553167406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:48:49.572639 systemd[1]: Started cri-containerd-da0fcb0d809d4601919e8dcbf5a16132f36323f448be2e41ca9467b442f68da1.scope - libcontainer container da0fcb0d809d4601919e8dcbf5a16132f36323f448be2e41ca9467b442f68da1.
Oct 8 19:48:49.612473 containerd[1465]: time="2024-10-08T19:48:49.611681174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-47bp9,Uid:1e19a671-2c1a-49eb-997d-fb6b4d6a409d,Namespace:calico-system,Attempt:0,} returns sandbox id \"da0fcb0d809d4601919e8dcbf5a16132f36323f448be2e41ca9467b442f68da1\""
Oct 8 19:48:51.428864 containerd[1465]: time="2024-10-08T19:48:51.428334281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:48:51.430079 containerd[1465]: time="2024-10-08T19:48:51.430042218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479"
Oct 8 19:48:51.430821 containerd[1465]: time="2024-10-08T19:48:51.430777986Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:48:51.434463 containerd[1465]: time="2024-10-08T19:48:51.433949898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:48:51.434605 containerd[1465]: time="2024-10-08T19:48:51.434580185Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 1.905006185s"
Oct 8 19:48:51.434680 containerd[1465]: time="2024-10-08T19:48:51.434666186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\""
Oct 8 19:48:51.438922 containerd[1465]: time="2024-10-08T19:48:51.438792468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\""
Oct 8 19:48:51.462889 containerd[1465]: time="2024-10-08T19:48:51.462836513Z" level=info msg="CreateContainer within sandbox \"d324e005fc1d133a767234c1fe741eda2f96bd2fd254d4de90563717d6d87633\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 8 19:48:51.485629 containerd[1465]: time="2024-10-08T19:48:51.485489104Z" level=info msg="CreateContainer within sandbox \"d324e005fc1d133a767234c1fe741eda2f96bd2fd254d4de90563717d6d87633\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a\""
Oct 8 19:48:51.487297 containerd[1465]: time="2024-10-08T19:48:51.486224672Z" level=info msg="StartContainer for \"dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a\""
Oct 8 19:48:51.510129 kubelet[2668]: E1008 19:48:51.508491 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4tsj5" podUID="9faa311e-d6bb-4ee4-9110-b3120539788f"
Oct 8 19:48:51.526626 systemd[1]: Started cri-containerd-dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a.scope - libcontainer container dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a.
Oct 8 19:48:51.588337 containerd[1465]: time="2024-10-08T19:48:51.588262593Z" level=info msg="StartContainer for \"dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a\" returns successfully" Oct 8 19:48:51.616019 containerd[1465]: time="2024-10-08T19:48:51.615963396Z" level=info msg="StopContainer for \"dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a\" with timeout 300 (s)" Oct 8 19:48:51.618154 containerd[1465]: time="2024-10-08T19:48:51.617989336Z" level=info msg="Stop container \"dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a\" with signal terminated" Oct 8 19:48:51.630170 systemd[1]: cri-containerd-dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a.scope: Deactivated successfully. Oct 8 19:48:51.730446 containerd[1465]: time="2024-10-08T19:48:51.730343003Z" level=info msg="shim disconnected" id=dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a namespace=k8s.io Oct 8 19:48:51.730446 containerd[1465]: time="2024-10-08T19:48:51.730441684Z" level=warning msg="cleaning up after shim disconnected" id=dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a namespace=k8s.io Oct 8 19:48:51.730446 containerd[1465]: time="2024-10-08T19:48:51.730451044Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:48:51.748955 containerd[1465]: time="2024-10-08T19:48:51.748885752Z" level=info msg="StopContainer for \"dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a\" returns successfully" Oct 8 19:48:51.750444 containerd[1465]: time="2024-10-08T19:48:51.749955643Z" level=info msg="StopPodSandbox for \"d324e005fc1d133a767234c1fe741eda2f96bd2fd254d4de90563717d6d87633\"" Oct 8 19:48:51.750444 containerd[1465]: time="2024-10-08T19:48:51.750061844Z" level=info msg="Container to stop \"dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 19:48:51.761754 systemd[1]: 
cri-containerd-d324e005fc1d133a767234c1fe741eda2f96bd2fd254d4de90563717d6d87633.scope: Deactivated successfully. Oct 8 19:48:51.803093 containerd[1465]: time="2024-10-08T19:48:51.802898903Z" level=info msg="shim disconnected" id=d324e005fc1d133a767234c1fe741eda2f96bd2fd254d4de90563717d6d87633 namespace=k8s.io Oct 8 19:48:51.803093 containerd[1465]: time="2024-10-08T19:48:51.803049745Z" level=warning msg="cleaning up after shim disconnected" id=d324e005fc1d133a767234c1fe741eda2f96bd2fd254d4de90563717d6d87633 namespace=k8s.io Oct 8 19:48:51.803408 containerd[1465]: time="2024-10-08T19:48:51.803165306Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:48:51.833504 containerd[1465]: time="2024-10-08T19:48:51.833388774Z" level=info msg="TearDown network for sandbox \"d324e005fc1d133a767234c1fe741eda2f96bd2fd254d4de90563717d6d87633\" successfully" Oct 8 19:48:51.834062 containerd[1465]: time="2024-10-08T19:48:51.833521216Z" level=info msg="StopPodSandbox for \"d324e005fc1d133a767234c1fe741eda2f96bd2fd254d4de90563717d6d87633\" returns successfully" Oct 8 19:48:51.878057 kubelet[2668]: E1008 19:48:51.877130 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6bbe0f8-890c-4b3b-a726-e81ddeb6458f" containerName="calico-typha" Oct 8 19:48:51.878057 kubelet[2668]: I1008 19:48:51.877249 2668 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6bbe0f8-890c-4b3b-a726-e81ddeb6458f" containerName="calico-typha" Oct 8 19:48:51.887798 systemd[1]: Created slice kubepods-besteffort-pod104ca4d3_c443_4e40_83ee_5d813d865ad6.slice - libcontainer container kubepods-besteffort-pod104ca4d3_c443_4e40_83ee_5d813d865ad6.slice. 
Oct 8 19:48:51.910015 kubelet[2668]: E1008 19:48:51.909853 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.910015 kubelet[2668]: W1008 19:48:51.909881 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.910015 kubelet[2668]: E1008 19:48:51.909901 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:51.910471 kubelet[2668]: E1008 19:48:51.910100 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.910471 kubelet[2668]: W1008 19:48:51.910110 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.910471 kubelet[2668]: E1008 19:48:51.910121 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:51.910965 kubelet[2668]: E1008 19:48:51.910723 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.910965 kubelet[2668]: W1008 19:48:51.910769 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.910965 kubelet[2668]: E1008 19:48:51.910784 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:51.911357 kubelet[2668]: E1008 19:48:51.911260 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.911357 kubelet[2668]: W1008 19:48:51.911275 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.911357 kubelet[2668]: E1008 19:48:51.911298 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:51.911842 kubelet[2668]: E1008 19:48:51.911685 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.911842 kubelet[2668]: W1008 19:48:51.911701 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.911842 kubelet[2668]: E1008 19:48:51.911713 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:51.912234 kubelet[2668]: E1008 19:48:51.912133 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.912234 kubelet[2668]: W1008 19:48:51.912146 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.912234 kubelet[2668]: E1008 19:48:51.912158 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:51.912527 kubelet[2668]: E1008 19:48:51.912402 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.912527 kubelet[2668]: W1008 19:48:51.912412 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.912767 kubelet[2668]: E1008 19:48:51.912629 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:51.913061 kubelet[2668]: E1008 19:48:51.912993 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.913061 kubelet[2668]: W1008 19:48:51.913006 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.913061 kubelet[2668]: E1008 19:48:51.913020 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:51.913566 kubelet[2668]: E1008 19:48:51.913445 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.913566 kubelet[2668]: W1008 19:48:51.913460 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.913566 kubelet[2668]: E1008 19:48:51.913472 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:51.913945 kubelet[2668]: E1008 19:48:51.913668 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.913945 kubelet[2668]: W1008 19:48:51.913678 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.913945 kubelet[2668]: E1008 19:48:51.913688 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:51.914219 kubelet[2668]: E1008 19:48:51.914104 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.914219 kubelet[2668]: W1008 19:48:51.914130 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.914219 kubelet[2668]: E1008 19:48:51.914142 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:51.914799 kubelet[2668]: E1008 19:48:51.914695 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.914799 kubelet[2668]: W1008 19:48:51.914720 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.914799 kubelet[2668]: E1008 19:48:51.914733 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:51.919760 kubelet[2668]: E1008 19:48:51.919731 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.920563 kubelet[2668]: W1008 19:48:51.919988 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.920563 kubelet[2668]: E1008 19:48:51.920018 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:51.920563 kubelet[2668]: I1008 19:48:51.920084 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72cnv\" (UniqueName: \"kubernetes.io/projected/b6bbe0f8-890c-4b3b-a726-e81ddeb6458f-kube-api-access-72cnv\") pod \"b6bbe0f8-890c-4b3b-a726-e81ddeb6458f\" (UID: \"b6bbe0f8-890c-4b3b-a726-e81ddeb6458f\") " Oct 8 19:48:51.921232 kubelet[2668]: E1008 19:48:51.921042 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.921232 kubelet[2668]: W1008 19:48:51.921066 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.921232 kubelet[2668]: E1008 19:48:51.921086 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:51.921232 kubelet[2668]: I1008 19:48:51.921116 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6bbe0f8-890c-4b3b-a726-e81ddeb6458f-tigera-ca-bundle\") pod \"b6bbe0f8-890c-4b3b-a726-e81ddeb6458f\" (UID: \"b6bbe0f8-890c-4b3b-a726-e81ddeb6458f\") " Oct 8 19:48:51.921939 kubelet[2668]: E1008 19:48:51.921710 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.921939 kubelet[2668]: W1008 19:48:51.921730 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.921939 kubelet[2668]: E1008 19:48:51.921746 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:51.921939 kubelet[2668]: I1008 19:48:51.921775 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b6bbe0f8-890c-4b3b-a726-e81ddeb6458f-typha-certs\") pod \"b6bbe0f8-890c-4b3b-a726-e81ddeb6458f\" (UID: \"b6bbe0f8-890c-4b3b-a726-e81ddeb6458f\") " Oct 8 19:48:51.922361 kubelet[2668]: E1008 19:48:51.922243 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.922361 kubelet[2668]: W1008 19:48:51.922263 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.922361 kubelet[2668]: E1008 19:48:51.922297 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:51.922807 kubelet[2668]: I1008 19:48:51.922346 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzvcc\" (UniqueName: \"kubernetes.io/projected/104ca4d3-c443-4e40-83ee-5d813d865ad6-kube-api-access-lzvcc\") pod \"calico-typha-547d8f7c57-cr95w\" (UID: \"104ca4d3-c443-4e40-83ee-5d813d865ad6\") " pod="calico-system/calico-typha-547d8f7c57-cr95w" Oct 8 19:48:51.923871 kubelet[2668]: E1008 19:48:51.923733 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.923871 kubelet[2668]: W1008 19:48:51.923772 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.923871 kubelet[2668]: E1008 19:48:51.923791 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:51.924175 kubelet[2668]: I1008 19:48:51.923818 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/104ca4d3-c443-4e40-83ee-5d813d865ad6-tigera-ca-bundle\") pod \"calico-typha-547d8f7c57-cr95w\" (UID: \"104ca4d3-c443-4e40-83ee-5d813d865ad6\") " pod="calico-system/calico-typha-547d8f7c57-cr95w" Oct 8 19:48:51.924543 kubelet[2668]: E1008 19:48:51.924418 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.924543 kubelet[2668]: W1008 19:48:51.924476 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.924543 kubelet[2668]: E1008 19:48:51.924490 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:51.924543 kubelet[2668]: I1008 19:48:51.924512 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/104ca4d3-c443-4e40-83ee-5d813d865ad6-typha-certs\") pod \"calico-typha-547d8f7c57-cr95w\" (UID: \"104ca4d3-c443-4e40-83ee-5d813d865ad6\") " pod="calico-system/calico-typha-547d8f7c57-cr95w" Oct 8 19:48:51.926304 kubelet[2668]: E1008 19:48:51.925523 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.926304 kubelet[2668]: W1008 19:48:51.925541 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.926304 kubelet[2668]: E1008 19:48:51.925556 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:51.926304 kubelet[2668]: E1008 19:48:51.926237 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.926304 kubelet[2668]: W1008 19:48:51.926277 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.926304 kubelet[2668]: E1008 19:48:51.926293 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:51.926963 kubelet[2668]: E1008 19:48:51.926814 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.927098 kubelet[2668]: W1008 19:48:51.927009 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.927098 kubelet[2668]: E1008 19:48:51.927030 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:51.928364 kubelet[2668]: E1008 19:48:51.927583 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.928364 kubelet[2668]: W1008 19:48:51.927601 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.928625 kubelet[2668]: E1008 19:48:51.928586 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:51.929712 kubelet[2668]: E1008 19:48:51.929594 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.929712 kubelet[2668]: W1008 19:48:51.929630 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.929712 kubelet[2668]: E1008 19:48:51.929650 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:51.932130 kubelet[2668]: E1008 19:48:51.931782 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.932130 kubelet[2668]: W1008 19:48:51.931875 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.932130 kubelet[2668]: E1008 19:48:51.931953 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:51.933109 kubelet[2668]: E1008 19:48:51.933073 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.933239 kubelet[2668]: W1008 19:48:51.933222 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.933510 kubelet[2668]: E1008 19:48:51.933484 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:51.933510 kubelet[2668]: I1008 19:48:51.933415 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6bbe0f8-890c-4b3b-a726-e81ddeb6458f-kube-api-access-72cnv" (OuterVolumeSpecName: "kube-api-access-72cnv") pod "b6bbe0f8-890c-4b3b-a726-e81ddeb6458f" (UID: "b6bbe0f8-890c-4b3b-a726-e81ddeb6458f"). InnerVolumeSpecName "kube-api-access-72cnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 19:48:51.934070 kubelet[2668]: E1008 19:48:51.933950 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.934455 kubelet[2668]: W1008 19:48:51.934294 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.934455 kubelet[2668]: E1008 19:48:51.934323 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:51.935469 kubelet[2668]: I1008 19:48:51.934193 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6bbe0f8-890c-4b3b-a726-e81ddeb6458f-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "b6bbe0f8-890c-4b3b-a726-e81ddeb6458f" (UID: "b6bbe0f8-890c-4b3b-a726-e81ddeb6458f"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 8 19:48:51.935469 kubelet[2668]: I1008 19:48:51.935257 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6bbe0f8-890c-4b3b-a726-e81ddeb6458f-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "b6bbe0f8-890c-4b3b-a726-e81ddeb6458f" (UID: "b6bbe0f8-890c-4b3b-a726-e81ddeb6458f"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 8 19:48:51.935801 kubelet[2668]: E1008 19:48:51.935609 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:51.935801 kubelet[2668]: W1008 19:48:51.935739 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:51.935801 kubelet[2668]: E1008 19:48:51.935755 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.028177 kubelet[2668]: E1008 19:48:52.026087 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.028177 kubelet[2668]: W1008 19:48:52.026110 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.028177 kubelet[2668]: E1008 19:48:52.026129 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.029882 kubelet[2668]: E1008 19:48:52.029508 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.029882 kubelet[2668]: W1008 19:48:52.029533 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.029882 kubelet[2668]: E1008 19:48:52.029560 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.030778 kubelet[2668]: E1008 19:48:52.030660 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.030778 kubelet[2668]: W1008 19:48:52.030677 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.031779 kubelet[2668]: E1008 19:48:52.031642 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.031779 kubelet[2668]: E1008 19:48:52.031650 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.031779 kubelet[2668]: I1008 19:48:52.031728 2668 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-72cnv\" (UniqueName: \"kubernetes.io/projected/b6bbe0f8-890c-4b3b-a726-e81ddeb6458f-kube-api-access-72cnv\") on node \"ci-4081-1-0-2-870ec424ae\" DevicePath \"\"" Oct 8 19:48:52.031779 kubelet[2668]: I1008 19:48:52.031739 2668 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6bbe0f8-890c-4b3b-a726-e81ddeb6458f-tigera-ca-bundle\") on node \"ci-4081-1-0-2-870ec424ae\" DevicePath \"\"" Oct 8 19:48:52.031779 kubelet[2668]: I1008 19:48:52.031750 2668 reconciler_common.go:288] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b6bbe0f8-890c-4b3b-a726-e81ddeb6458f-typha-certs\") on node \"ci-4081-1-0-2-870ec424ae\" DevicePath \"\"" Oct 8 19:48:52.031779 kubelet[2668]: W1008 19:48:52.031657 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not 
found in $PATH, output: "" Oct 8 19:48:52.032334 kubelet[2668]: E1008 19:48:52.031947 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.032334 kubelet[2668]: E1008 19:48:52.032289 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.032334 kubelet[2668]: W1008 19:48:52.032301 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.032683 kubelet[2668]: E1008 19:48:52.032571 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.032683 kubelet[2668]: W1008 19:48:52.032589 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.032683 kubelet[2668]: E1008 19:48:52.032603 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.033839 kubelet[2668]: E1008 19:48:52.033802 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.034625 kubelet[2668]: E1008 19:48:52.034605 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.034625 kubelet[2668]: W1008 19:48:52.034622 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.034758 kubelet[2668]: E1008 19:48:52.034645 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.034877 kubelet[2668]: E1008 19:48:52.034862 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.034877 kubelet[2668]: W1008 19:48:52.034873 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.034964 kubelet[2668]: E1008 19:48:52.034888 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.035143 kubelet[2668]: E1008 19:48:52.035128 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.035143 kubelet[2668]: W1008 19:48:52.035142 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.036538 kubelet[2668]: E1008 19:48:52.035224 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.036538 kubelet[2668]: E1008 19:48:52.035322 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.036538 kubelet[2668]: W1008 19:48:52.035331 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.036809 kubelet[2668]: E1008 19:48:52.036790 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.037379 kubelet[2668]: E1008 19:48:52.036815 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.037598 kubelet[2668]: W1008 19:48:52.037454 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.037598 kubelet[2668]: E1008 19:48:52.037478 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.037741 kubelet[2668]: E1008 19:48:52.037728 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.037900 kubelet[2668]: W1008 19:48:52.037790 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.037900 kubelet[2668]: E1008 19:48:52.037807 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.038110 kubelet[2668]: E1008 19:48:52.038096 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.038442 kubelet[2668]: W1008 19:48:52.038176 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.038442 kubelet[2668]: E1008 19:48:52.038192 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.039729 kubelet[2668]: E1008 19:48:52.039708 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.039849 kubelet[2668]: W1008 19:48:52.039817 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.040017 kubelet[2668]: E1008 19:48:52.040001 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.040784 kubelet[2668]: E1008 19:48:52.040749 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.040784 kubelet[2668]: W1008 19:48:52.040769 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.040784 kubelet[2668]: E1008 19:48:52.040782 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.041515 kubelet[2668]: E1008 19:48:52.041401 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.041515 kubelet[2668]: W1008 19:48:52.041419 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.041515 kubelet[2668]: E1008 19:48:52.041451 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.045783 kubelet[2668]: E1008 19:48:52.045709 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.045783 kubelet[2668]: W1008 19:48:52.045729 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.045783 kubelet[2668]: E1008 19:48:52.045746 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.056698 kubelet[2668]: E1008 19:48:52.056670 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.058469 kubelet[2668]: W1008 19:48:52.056691 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.058565 kubelet[2668]: E1008 19:48:52.058480 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.193007 containerd[1465]: time="2024-10-08T19:48:52.192955266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-547d8f7c57-cr95w,Uid:104ca4d3-c443-4e40-83ee-5d813d865ad6,Namespace:calico-system,Attempt:0,}" Oct 8 19:48:52.222278 containerd[1465]: time="2024-10-08T19:48:52.222085601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:48:52.223161 containerd[1465]: time="2024-10-08T19:48:52.222228522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:48:52.223161 containerd[1465]: time="2024-10-08T19:48:52.222245962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:52.224207 containerd[1465]: time="2024-10-08T19:48:52.224092021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:52.245522 systemd[1]: Started cri-containerd-c22b902eec7461fb280ef9a4a04a127e188af636a1b4457357396a92984d29df.scope - libcontainer container c22b902eec7461fb280ef9a4a04a127e188af636a1b4457357396a92984d29df. Oct 8 19:48:52.292583 containerd[1465]: time="2024-10-08T19:48:52.292097589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-547d8f7c57-cr95w,Uid:104ca4d3-c443-4e40-83ee-5d813d865ad6,Namespace:calico-system,Attempt:0,} returns sandbox id \"c22b902eec7461fb280ef9a4a04a127e188af636a1b4457357396a92984d29df\"" Oct 8 19:48:52.305261 containerd[1465]: time="2024-10-08T19:48:52.305221201Z" level=info msg="CreateContainer within sandbox \"c22b902eec7461fb280ef9a4a04a127e188af636a1b4457357396a92984d29df\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 8 19:48:52.322392 containerd[1465]: time="2024-10-08T19:48:52.322208053Z" level=info msg="CreateContainer within sandbox \"c22b902eec7461fb280ef9a4a04a127e188af636a1b4457357396a92984d29df\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7465d3c10d3236ddf511c2ecdb96f5a231e166d0ece81ddff83b33a6f349ba77\"" Oct 8 19:48:52.324670 containerd[1465]: time="2024-10-08T19:48:52.324630318Z" level=info msg="StartContainer for \"7465d3c10d3236ddf511c2ecdb96f5a231e166d0ece81ddff83b33a6f349ba77\"" Oct 8 19:48:52.365669 systemd[1]: Started cri-containerd-7465d3c10d3236ddf511c2ecdb96f5a231e166d0ece81ddff83b33a6f349ba77.scope - libcontainer container 
7465d3c10d3236ddf511c2ecdb96f5a231e166d0ece81ddff83b33a6f349ba77. Oct 8 19:48:52.409260 containerd[1465]: time="2024-10-08T19:48:52.408894850Z" level=info msg="StartContainer for \"7465d3c10d3236ddf511c2ecdb96f5a231e166d0ece81ddff83b33a6f349ba77\" returns successfully" Oct 8 19:48:52.459157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a-rootfs.mount: Deactivated successfully. Oct 8 19:48:52.459256 systemd[1]: var-lib-kubelet-pods-b6bbe0f8\x2d890c\x2d4b3b\x2da726\x2de81ddeb6458f-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Oct 8 19:48:52.459323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d324e005fc1d133a767234c1fe741eda2f96bd2fd254d4de90563717d6d87633-rootfs.mount: Deactivated successfully. Oct 8 19:48:52.459385 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d324e005fc1d133a767234c1fe741eda2f96bd2fd254d4de90563717d6d87633-shm.mount: Deactivated successfully. Oct 8 19:48:52.459450 systemd[1]: var-lib-kubelet-pods-b6bbe0f8\x2d890c\x2d4b3b\x2da726\x2de81ddeb6458f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d72cnv.mount: Deactivated successfully. Oct 8 19:48:52.459501 systemd[1]: var-lib-kubelet-pods-b6bbe0f8\x2d890c\x2d4b3b\x2da726\x2de81ddeb6458f-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Oct 8 19:48:52.620638 kubelet[2668]: I1008 19:48:52.618312 2668 scope.go:117] "RemoveContainer" containerID="dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a" Oct 8 19:48:52.625058 containerd[1465]: time="2024-10-08T19:48:52.624986955Z" level=info msg="RemoveContainer for \"dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a\"" Oct 8 19:48:52.628552 systemd[1]: Removed slice kubepods-besteffort-podb6bbe0f8_890c_4b3b_a726_e81ddeb6458f.slice - libcontainer container kubepods-besteffort-podb6bbe0f8_890c_4b3b_a726_e81ddeb6458f.slice. 
Oct 8 19:48:52.632779 containerd[1465]: time="2024-10-08T19:48:52.632677993Z" level=info msg="RemoveContainer for \"dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a\" returns successfully" Oct 8 19:48:52.633541 kubelet[2668]: I1008 19:48:52.633150 2668 scope.go:117] "RemoveContainer" containerID="dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a" Oct 8 19:48:52.633624 containerd[1465]: time="2024-10-08T19:48:52.633592082Z" level=error msg="ContainerStatus for \"dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a\": not found" Oct 8 19:48:52.633866 kubelet[2668]: E1008 19:48:52.633800 2668 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a\": not found" containerID="dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a" Oct 8 19:48:52.633866 kubelet[2668]: I1008 19:48:52.633833 2668 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a"} err="failed to get container status \"dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a\": rpc error: code = NotFound desc = an error occurred when try to find container \"dbfd94ef4fb83ad48563f3a110c1dfaa0e2032234877c0b60443ade68877805a\": not found" Oct 8 19:48:52.660626 kubelet[2668]: I1008 19:48:52.660554 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-547d8f7c57-cr95w" podStartSLOduration=2.660532914 podStartE2EDuration="2.660532914s" podCreationTimestamp="2024-10-08 19:48:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2024-10-08 19:48:52.650705815 +0000 UTC m=+17.255360839" watchObservedRunningTime="2024-10-08 19:48:52.660532914 +0000 UTC m=+17.265187938" Oct 8 19:48:52.722523 kubelet[2668]: E1008 19:48:52.722485 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.722523 kubelet[2668]: W1008 19:48:52.722518 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.722873 kubelet[2668]: E1008 19:48:52.722540 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.722999 kubelet[2668]: E1008 19:48:52.722978 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.723046 kubelet[2668]: W1008 19:48:52.722998 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.723046 kubelet[2668]: E1008 19:48:52.723012 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.723241 kubelet[2668]: E1008 19:48:52.723224 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.723241 kubelet[2668]: W1008 19:48:52.723239 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.723403 kubelet[2668]: E1008 19:48:52.723250 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.723470 kubelet[2668]: E1008 19:48:52.723421 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.723470 kubelet[2668]: W1008 19:48:52.723444 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.723470 kubelet[2668]: E1008 19:48:52.723454 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.723672 kubelet[2668]: E1008 19:48:52.723659 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.723672 kubelet[2668]: W1008 19:48:52.723671 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.723742 kubelet[2668]: E1008 19:48:52.723681 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.723936 kubelet[2668]: E1008 19:48:52.723918 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.723936 kubelet[2668]: W1008 19:48:52.723935 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.723991 kubelet[2668]: E1008 19:48:52.723946 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.724165 kubelet[2668]: E1008 19:48:52.724148 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.724165 kubelet[2668]: W1008 19:48:52.724164 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.724243 kubelet[2668]: E1008 19:48:52.724174 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.724410 kubelet[2668]: E1008 19:48:52.724394 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.724410 kubelet[2668]: W1008 19:48:52.724408 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.724494 kubelet[2668]: E1008 19:48:52.724418 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.724641 kubelet[2668]: E1008 19:48:52.724625 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.724641 kubelet[2668]: W1008 19:48:52.724639 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.724702 kubelet[2668]: E1008 19:48:52.724650 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.724874 kubelet[2668]: E1008 19:48:52.724858 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.724874 kubelet[2668]: W1008 19:48:52.724872 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.725033 kubelet[2668]: E1008 19:48:52.724881 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.725071 kubelet[2668]: E1008 19:48:52.725047 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.725071 kubelet[2668]: W1008 19:48:52.725056 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.725071 kubelet[2668]: E1008 19:48:52.725065 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.725640 kubelet[2668]: E1008 19:48:52.725622 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.725640 kubelet[2668]: W1008 19:48:52.725639 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.725723 kubelet[2668]: E1008 19:48:52.725650 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.725837 kubelet[2668]: E1008 19:48:52.725822 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.725868 kubelet[2668]: W1008 19:48:52.725837 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.725868 kubelet[2668]: E1008 19:48:52.725846 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.726247 kubelet[2668]: E1008 19:48:52.726197 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.726247 kubelet[2668]: W1008 19:48:52.726228 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.726319 kubelet[2668]: E1008 19:48:52.726248 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.726749 kubelet[2668]: E1008 19:48:52.726711 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.726749 kubelet[2668]: W1008 19:48:52.726738 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.726846 kubelet[2668]: E1008 19:48:52.726757 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.740818 kubelet[2668]: E1008 19:48:52.740701 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.740818 kubelet[2668]: W1008 19:48:52.740754 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.741613 kubelet[2668]: E1008 19:48:52.740782 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.742221 kubelet[2668]: E1008 19:48:52.742109 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.742221 kubelet[2668]: W1008 19:48:52.742162 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.743970 kubelet[2668]: E1008 19:48:52.742587 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.744217 kubelet[2668]: E1008 19:48:52.744119 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.744217 kubelet[2668]: W1008 19:48:52.744143 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.744217 kubelet[2668]: E1008 19:48:52.744165 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.744686 kubelet[2668]: E1008 19:48:52.744660 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.744686 kubelet[2668]: W1008 19:48:52.744678 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.744834 kubelet[2668]: E1008 19:48:52.744759 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.744941 kubelet[2668]: E1008 19:48:52.744920 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.744941 kubelet[2668]: W1008 19:48:52.744932 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.745060 kubelet[2668]: E1008 19:48:52.744997 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.745792 kubelet[2668]: E1008 19:48:52.745092 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.745792 kubelet[2668]: W1008 19:48:52.745099 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.745792 kubelet[2668]: E1008 19:48:52.745117 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.745792 kubelet[2668]: E1008 19:48:52.745794 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.746083 kubelet[2668]: W1008 19:48:52.745808 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.746083 kubelet[2668]: E1008 19:48:52.745829 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.746083 kubelet[2668]: E1008 19:48:52.746028 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.746083 kubelet[2668]: W1008 19:48:52.746037 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.746867 kubelet[2668]: E1008 19:48:52.746124 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.747421 kubelet[2668]: E1008 19:48:52.747385 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.747421 kubelet[2668]: W1008 19:48:52.747411 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.748163 kubelet[2668]: E1008 19:48:52.748119 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.748347 kubelet[2668]: E1008 19:48:52.748299 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.748347 kubelet[2668]: W1008 19:48:52.748327 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.748486 kubelet[2668]: E1008 19:48:52.748416 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.748932 kubelet[2668]: E1008 19:48:52.748649 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.748932 kubelet[2668]: W1008 19:48:52.748667 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.748932 kubelet[2668]: E1008 19:48:52.748731 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.749158 kubelet[2668]: E1008 19:48:52.749015 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.749158 kubelet[2668]: W1008 19:48:52.749026 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.749158 kubelet[2668]: E1008 19:48:52.749039 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.749612 kubelet[2668]: E1008 19:48:52.749410 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.749612 kubelet[2668]: W1008 19:48:52.749422 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.749612 kubelet[2668]: E1008 19:48:52.749492 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.750559 kubelet[2668]: E1008 19:48:52.749834 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.750559 kubelet[2668]: W1008 19:48:52.749845 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.750559 kubelet[2668]: E1008 19:48:52.750355 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.752661 kubelet[2668]: E1008 19:48:52.752621 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.752861 kubelet[2668]: W1008 19:48:52.752751 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.752861 kubelet[2668]: E1008 19:48:52.752802 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.753303 kubelet[2668]: E1008 19:48:52.753196 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.753303 kubelet[2668]: W1008 19:48:52.753224 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.753609 kubelet[2668]: E1008 19:48:52.753562 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.753609 kubelet[2668]: W1008 19:48:52.753573 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.753785 kubelet[2668]: E1008 19:48:52.753680 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.754056 kubelet[2668]: E1008 19:48:52.753981 2668 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:52.754056 kubelet[2668]: W1008 19:48:52.753996 2668 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:52.754056 kubelet[2668]: E1008 19:48:52.754018 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:52.754365 kubelet[2668]: E1008 19:48:52.754326 2668 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:52.874069 containerd[1465]: time="2024-10-08T19:48:52.873638230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:52.874958 containerd[1465]: time="2024-10-08T19:48:52.874371157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Oct 8 19:48:52.878203 containerd[1465]: time="2024-10-08T19:48:52.878074634Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:52.883500 containerd[1465]: time="2024-10-08T19:48:52.883453169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:52.884336 containerd[1465]: time="2024-10-08T19:48:52.884303817Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.445238987s" Oct 8 19:48:52.884404 containerd[1465]: time="2024-10-08T19:48:52.884344978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Oct 8 
19:48:52.887314 containerd[1465]: time="2024-10-08T19:48:52.887264127Z" level=info msg="CreateContainer within sandbox \"da0fcb0d809d4601919e8dcbf5a16132f36323f448be2e41ca9467b442f68da1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 19:48:52.905203 containerd[1465]: time="2024-10-08T19:48:52.905130988Z" level=info msg="CreateContainer within sandbox \"da0fcb0d809d4601919e8dcbf5a16132f36323f448be2e41ca9467b442f68da1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6c691472d27a156c47c829981ca37f495f3e8f954cffc197e2d4fef1fc324737\"" Oct 8 19:48:52.907500 containerd[1465]: time="2024-10-08T19:48:52.906597203Z" level=info msg="StartContainer for \"6c691472d27a156c47c829981ca37f495f3e8f954cffc197e2d4fef1fc324737\"" Oct 8 19:48:52.950659 systemd[1]: Started cri-containerd-6c691472d27a156c47c829981ca37f495f3e8f954cffc197e2d4fef1fc324737.scope - libcontainer container 6c691472d27a156c47c829981ca37f495f3e8f954cffc197e2d4fef1fc324737. Oct 8 19:48:53.001677 containerd[1465]: time="2024-10-08T19:48:53.001601524Z" level=info msg="StartContainer for \"6c691472d27a156c47c829981ca37f495f3e8f954cffc197e2d4fef1fc324737\" returns successfully" Oct 8 19:48:53.047670 systemd[1]: cri-containerd-6c691472d27a156c47c829981ca37f495f3e8f954cffc197e2d4fef1fc324737.scope: Deactivated successfully. 
Oct 8 19:48:53.094715 containerd[1465]: time="2024-10-08T19:48:53.094548895Z" level=info msg="shim disconnected" id=6c691472d27a156c47c829981ca37f495f3e8f954cffc197e2d4fef1fc324737 namespace=k8s.io Oct 8 19:48:53.094715 containerd[1465]: time="2024-10-08T19:48:53.094699817Z" level=warning msg="cleaning up after shim disconnected" id=6c691472d27a156c47c829981ca37f495f3e8f954cffc197e2d4fef1fc324737 namespace=k8s.io Oct 8 19:48:53.094715 containerd[1465]: time="2024-10-08T19:48:53.094722177Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:48:53.448973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c691472d27a156c47c829981ca37f495f3e8f954cffc197e2d4fef1fc324737-rootfs.mount: Deactivated successfully. Oct 8 19:48:53.508889 kubelet[2668]: E1008 19:48:53.508810 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4tsj5" podUID="9faa311e-d6bb-4ee4-9110-b3120539788f" Oct 8 19:48:53.514058 kubelet[2668]: I1008 19:48:53.514006 2668 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6bbe0f8-890c-4b3b-a726-e81ddeb6458f" path="/var/lib/kubelet/pods/b6bbe0f8-890c-4b3b-a726-e81ddeb6458f/volumes" Oct 8 19:48:53.631765 containerd[1465]: time="2024-10-08T19:48:53.631716320Z" level=info msg="StopPodSandbox for \"da0fcb0d809d4601919e8dcbf5a16132f36323f448be2e41ca9467b442f68da1\"" Oct 8 19:48:53.631765 containerd[1465]: time="2024-10-08T19:48:53.631764080Z" level=info msg="Container to stop \"6c691472d27a156c47c829981ca37f495f3e8f954cffc197e2d4fef1fc324737\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 19:48:53.639007 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da0fcb0d809d4601919e8dcbf5a16132f36323f448be2e41ca9467b442f68da1-shm.mount: Deactivated successfully. 
Oct 8 19:48:53.646661 systemd[1]: cri-containerd-da0fcb0d809d4601919e8dcbf5a16132f36323f448be2e41ca9467b442f68da1.scope: Deactivated successfully. Oct 8 19:48:53.670701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-da0fcb0d809d4601919e8dcbf5a16132f36323f448be2e41ca9467b442f68da1-rootfs.mount: Deactivated successfully. Oct 8 19:48:53.682536 containerd[1465]: time="2024-10-08T19:48:53.682476309Z" level=info msg="shim disconnected" id=da0fcb0d809d4601919e8dcbf5a16132f36323f448be2e41ca9467b442f68da1 namespace=k8s.io Oct 8 19:48:53.684488 containerd[1465]: time="2024-10-08T19:48:53.684415608Z" level=warning msg="cleaning up after shim disconnected" id=da0fcb0d809d4601919e8dcbf5a16132f36323f448be2e41ca9467b442f68da1 namespace=k8s.io Oct 8 19:48:53.684488 containerd[1465]: time="2024-10-08T19:48:53.684465489Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:48:53.711688 containerd[1465]: time="2024-10-08T19:48:53.711233197Z" level=info msg="TearDown network for sandbox \"da0fcb0d809d4601919e8dcbf5a16132f36323f448be2e41ca9467b442f68da1\" successfully" Oct 8 19:48:53.711688 containerd[1465]: time="2024-10-08T19:48:53.711276797Z" level=info msg="StopPodSandbox for \"da0fcb0d809d4601919e8dcbf5a16132f36323f448be2e41ca9467b442f68da1\" returns successfully" Oct 8 19:48:53.756794 kubelet[2668]: I1008 19:48:53.756716 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-cni-net-dir\") pod \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " Oct 8 19:48:53.756794 kubelet[2668]: I1008 19:48:53.756771 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-node-certs\") pod \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " Oct 8 
19:48:53.757238 kubelet[2668]: I1008 19:48:53.757021 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "1e19a671-2c1a-49eb-997d-fb6b4d6a409d" (UID: "1e19a671-2c1a-49eb-997d-fb6b4d6a409d"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:48:53.757938 kubelet[2668]: I1008 19:48:53.757368 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-cni-bin-dir\") pod \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " Oct 8 19:48:53.757938 kubelet[2668]: I1008 19:48:53.757414 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-lib-modules\") pod \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " Oct 8 19:48:53.757938 kubelet[2668]: I1008 19:48:53.757454 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-tigera-ca-bundle\") pod \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " Oct 8 19:48:53.757938 kubelet[2668]: I1008 19:48:53.757477 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-xtables-lock\") pod \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " Oct 8 19:48:53.757938 kubelet[2668]: I1008 19:48:53.757502 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-var-lib-calico\") pod \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " Oct 8 19:48:53.757938 kubelet[2668]: I1008 19:48:53.757533 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-cni-log-dir\") pod \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " Oct 8 19:48:53.758161 kubelet[2668]: I1008 19:48:53.757653 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-flexvol-driver-host\") pod \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " Oct 8 19:48:53.758161 kubelet[2668]: I1008 19:48:53.757678 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-var-run-calico\") pod \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " Oct 8 19:48:53.758161 kubelet[2668]: I1008 19:48:53.757793 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-policysync\") pod \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " Oct 8 19:48:53.758161 kubelet[2668]: I1008 19:48:53.757822 2668 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g89bl\" (UniqueName: \"kubernetes.io/projected/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-kube-api-access-g89bl\") pod \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\" (UID: \"1e19a671-2c1a-49eb-997d-fb6b4d6a409d\") " Oct 8 19:48:53.760446 kubelet[2668]: I1008 19:48:53.758339 2668 
reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-cni-net-dir\") on node \"ci-4081-1-0-2-870ec424ae\" DevicePath \"\"" Oct 8 19:48:53.760446 kubelet[2668]: I1008 19:48:53.758510 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "1e19a671-2c1a-49eb-997d-fb6b4d6a409d" (UID: "1e19a671-2c1a-49eb-997d-fb6b4d6a409d"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:48:53.760446 kubelet[2668]: I1008 19:48:53.759533 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "1e19a671-2c1a-49eb-997d-fb6b4d6a409d" (UID: "1e19a671-2c1a-49eb-997d-fb6b4d6a409d"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:48:53.760446 kubelet[2668]: I1008 19:48:53.759570 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1e19a671-2c1a-49eb-997d-fb6b4d6a409d" (UID: "1e19a671-2c1a-49eb-997d-fb6b4d6a409d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:48:53.760446 kubelet[2668]: I1008 19:48:53.760016 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "1e19a671-2c1a-49eb-997d-fb6b4d6a409d" (UID: "1e19a671-2c1a-49eb-997d-fb6b4d6a409d"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 8 19:48:53.760653 kubelet[2668]: I1008 19:48:53.760051 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1e19a671-2c1a-49eb-997d-fb6b4d6a409d" (UID: "1e19a671-2c1a-49eb-997d-fb6b4d6a409d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:48:53.760653 kubelet[2668]: I1008 19:48:53.760071 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "1e19a671-2c1a-49eb-997d-fb6b4d6a409d" (UID: "1e19a671-2c1a-49eb-997d-fb6b4d6a409d"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:48:53.760653 kubelet[2668]: I1008 19:48:53.760094 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "1e19a671-2c1a-49eb-997d-fb6b4d6a409d" (UID: "1e19a671-2c1a-49eb-997d-fb6b4d6a409d"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:48:53.760653 kubelet[2668]: I1008 19:48:53.760346 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "1e19a671-2c1a-49eb-997d-fb6b4d6a409d" (UID: "1e19a671-2c1a-49eb-997d-fb6b4d6a409d"). InnerVolumeSpecName "var-run-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:48:53.760653 kubelet[2668]: I1008 19:48:53.760397 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-policysync" (OuterVolumeSpecName: "policysync") pod "1e19a671-2c1a-49eb-997d-fb6b4d6a409d" (UID: "1e19a671-2c1a-49eb-997d-fb6b4d6a409d"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:48:53.765017 kubelet[2668]: I1008 19:48:53.764962 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-node-certs" (OuterVolumeSpecName: "node-certs") pod "1e19a671-2c1a-49eb-997d-fb6b4d6a409d" (UID: "1e19a671-2c1a-49eb-997d-fb6b4d6a409d"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 8 19:48:53.766914 systemd[1]: var-lib-kubelet-pods-1e19a671\x2d2c1a\x2d49eb\x2d997d\x2dfb6b4d6a409d-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Oct 8 19:48:53.769344 kubelet[2668]: I1008 19:48:53.769304 2668 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-kube-api-access-g89bl" (OuterVolumeSpecName: "kube-api-access-g89bl") pod "1e19a671-2c1a-49eb-997d-fb6b4d6a409d" (UID: "1e19a671-2c1a-49eb-997d-fb6b4d6a409d"). InnerVolumeSpecName "kube-api-access-g89bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 19:48:53.772333 systemd[1]: var-lib-kubelet-pods-1e19a671\x2d2c1a\x2d49eb\x2d997d\x2dfb6b4d6a409d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg89bl.mount: Deactivated successfully. 
Oct 8 19:48:53.859407 kubelet[2668]: I1008 19:48:53.859198 2668 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-policysync\") on node \"ci-4081-1-0-2-870ec424ae\" DevicePath \"\"" Oct 8 19:48:53.859407 kubelet[2668]: I1008 19:48:53.859266 2668 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-g89bl\" (UniqueName: \"kubernetes.io/projected/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-kube-api-access-g89bl\") on node \"ci-4081-1-0-2-870ec424ae\" DevicePath \"\"" Oct 8 19:48:53.859407 kubelet[2668]: I1008 19:48:53.859284 2668 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-lib-modules\") on node \"ci-4081-1-0-2-870ec424ae\" DevicePath \"\"" Oct 8 19:48:53.859407 kubelet[2668]: I1008 19:48:53.859304 2668 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-tigera-ca-bundle\") on node \"ci-4081-1-0-2-870ec424ae\" DevicePath \"\"" Oct 8 19:48:53.859407 kubelet[2668]: I1008 19:48:53.859319 2668 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-node-certs\") on node \"ci-4081-1-0-2-870ec424ae\" DevicePath \"\"" Oct 8 19:48:53.859407 kubelet[2668]: I1008 19:48:53.859333 2668 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-cni-bin-dir\") on node \"ci-4081-1-0-2-870ec424ae\" DevicePath \"\"" Oct 8 19:48:53.859407 kubelet[2668]: I1008 19:48:53.859346 2668 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-xtables-lock\") on node \"ci-4081-1-0-2-870ec424ae\" DevicePath \"\"" Oct 8 19:48:53.860118 
kubelet[2668]: I1008 19:48:53.859360 2668 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-var-lib-calico\") on node \"ci-4081-1-0-2-870ec424ae\" DevicePath \"\"" Oct 8 19:48:53.860118 kubelet[2668]: I1008 19:48:53.859997 2668 reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-cni-log-dir\") on node \"ci-4081-1-0-2-870ec424ae\" DevicePath \"\"" Oct 8 19:48:53.860118 kubelet[2668]: I1008 19:48:53.860023 2668 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-flexvol-driver-host\") on node \"ci-4081-1-0-2-870ec424ae\" DevicePath \"\"" Oct 8 19:48:53.860118 kubelet[2668]: I1008 19:48:53.860058 2668 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1e19a671-2c1a-49eb-997d-fb6b4d6a409d-var-run-calico\") on node \"ci-4081-1-0-2-870ec424ae\" DevicePath \"\"" Oct 8 19:48:54.637698 kubelet[2668]: I1008 19:48:54.636853 2668 scope.go:117] "RemoveContainer" containerID="6c691472d27a156c47c829981ca37f495f3e8f954cffc197e2d4fef1fc324737" Oct 8 19:48:54.640961 containerd[1465]: time="2024-10-08T19:48:54.640921381Z" level=info msg="RemoveContainer for \"6c691472d27a156c47c829981ca37f495f3e8f954cffc197e2d4fef1fc324737\"" Oct 8 19:48:54.646877 systemd[1]: Removed slice kubepods-besteffort-pod1e19a671_2c1a_49eb_997d_fb6b4d6a409d.slice - libcontainer container kubepods-besteffort-pod1e19a671_2c1a_49eb_997d_fb6b4d6a409d.slice. 
Oct 8 19:48:54.648768 containerd[1465]: time="2024-10-08T19:48:54.648542377Z" level=info msg="RemoveContainer for \"6c691472d27a156c47c829981ca37f495f3e8f954cffc197e2d4fef1fc324737\" returns successfully" Oct 8 19:48:54.697443 kubelet[2668]: E1008 19:48:54.697339 2668 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1e19a671-2c1a-49eb-997d-fb6b4d6a409d" containerName="flexvol-driver" Oct 8 19:48:54.697443 kubelet[2668]: I1008 19:48:54.697414 2668 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e19a671-2c1a-49eb-997d-fb6b4d6a409d" containerName="flexvol-driver" Oct 8 19:48:54.711204 systemd[1]: Created slice kubepods-besteffort-pod8265c9a7_18d5_4963_acf0_ea56062f13cb.slice - libcontainer container kubepods-besteffort-pod8265c9a7_18d5_4963_acf0_ea56062f13cb.slice. Oct 8 19:48:54.764818 kubelet[2668]: I1008 19:48:54.764711 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8265c9a7-18d5-4963-acf0-ea56062f13cb-flexvol-driver-host\") pod \"calico-node-m4tct\" (UID: \"8265c9a7-18d5-4963-acf0-ea56062f13cb\") " pod="calico-system/calico-node-m4tct" Oct 8 19:48:54.765602 kubelet[2668]: I1008 19:48:54.765216 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8265c9a7-18d5-4963-acf0-ea56062f13cb-node-certs\") pod \"calico-node-m4tct\" (UID: \"8265c9a7-18d5-4963-acf0-ea56062f13cb\") " pod="calico-system/calico-node-m4tct" Oct 8 19:48:54.765602 kubelet[2668]: I1008 19:48:54.765308 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89znf\" (UniqueName: \"kubernetes.io/projected/8265c9a7-18d5-4963-acf0-ea56062f13cb-kube-api-access-89znf\") pod \"calico-node-m4tct\" (UID: \"8265c9a7-18d5-4963-acf0-ea56062f13cb\") " pod="calico-system/calico-node-m4tct" Oct 8 
19:48:54.765602 kubelet[2668]: I1008 19:48:54.765340 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8265c9a7-18d5-4963-acf0-ea56062f13cb-xtables-lock\") pod \"calico-node-m4tct\" (UID: \"8265c9a7-18d5-4963-acf0-ea56062f13cb\") " pod="calico-system/calico-node-m4tct" Oct 8 19:48:54.765602 kubelet[2668]: I1008 19:48:54.765388 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8265c9a7-18d5-4963-acf0-ea56062f13cb-var-run-calico\") pod \"calico-node-m4tct\" (UID: \"8265c9a7-18d5-4963-acf0-ea56062f13cb\") " pod="calico-system/calico-node-m4tct" Oct 8 19:48:54.765602 kubelet[2668]: I1008 19:48:54.765415 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8265c9a7-18d5-4963-acf0-ea56062f13cb-lib-modules\") pod \"calico-node-m4tct\" (UID: \"8265c9a7-18d5-4963-acf0-ea56062f13cb\") " pod="calico-system/calico-node-m4tct" Oct 8 19:48:54.766027 kubelet[2668]: I1008 19:48:54.765488 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8265c9a7-18d5-4963-acf0-ea56062f13cb-policysync\") pod \"calico-node-m4tct\" (UID: \"8265c9a7-18d5-4963-acf0-ea56062f13cb\") " pod="calico-system/calico-node-m4tct" Oct 8 19:48:54.766027 kubelet[2668]: I1008 19:48:54.765508 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8265c9a7-18d5-4963-acf0-ea56062f13cb-tigera-ca-bundle\") pod \"calico-node-m4tct\" (UID: \"8265c9a7-18d5-4963-acf0-ea56062f13cb\") " pod="calico-system/calico-node-m4tct" Oct 8 19:48:54.766498 kubelet[2668]: I1008 19:48:54.766047 2668 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8265c9a7-18d5-4963-acf0-ea56062f13cb-cni-bin-dir\") pod \"calico-node-m4tct\" (UID: \"8265c9a7-18d5-4963-acf0-ea56062f13cb\") " pod="calico-system/calico-node-m4tct" Oct 8 19:48:54.766498 kubelet[2668]: I1008 19:48:54.766128 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8265c9a7-18d5-4963-acf0-ea56062f13cb-cni-log-dir\") pod \"calico-node-m4tct\" (UID: \"8265c9a7-18d5-4963-acf0-ea56062f13cb\") " pod="calico-system/calico-node-m4tct" Oct 8 19:48:54.766498 kubelet[2668]: I1008 19:48:54.766151 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8265c9a7-18d5-4963-acf0-ea56062f13cb-var-lib-calico\") pod \"calico-node-m4tct\" (UID: \"8265c9a7-18d5-4963-acf0-ea56062f13cb\") " pod="calico-system/calico-node-m4tct" Oct 8 19:48:54.766498 kubelet[2668]: I1008 19:48:54.766301 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8265c9a7-18d5-4963-acf0-ea56062f13cb-cni-net-dir\") pod \"calico-node-m4tct\" (UID: \"8265c9a7-18d5-4963-acf0-ea56062f13cb\") " pod="calico-system/calico-node-m4tct" Oct 8 19:48:55.017354 containerd[1465]: time="2024-10-08T19:48:55.017254160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m4tct,Uid:8265c9a7-18d5-4963-acf0-ea56062f13cb,Namespace:calico-system,Attempt:0,}" Oct 8 19:48:55.044825 containerd[1465]: time="2024-10-08T19:48:55.044683790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:48:55.044825 containerd[1465]: time="2024-10-08T19:48:55.044758711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:48:55.045025 containerd[1465]: time="2024-10-08T19:48:55.044867112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:55.047330 containerd[1465]: time="2024-10-08T19:48:55.047217295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:55.078726 systemd[1]: Started cri-containerd-050cf02963797d06fdb070114b9e49c973f3261cb993220db276ff07f6623ac3.scope - libcontainer container 050cf02963797d06fdb070114b9e49c973f3261cb993220db276ff07f6623ac3. Oct 8 19:48:55.111071 containerd[1465]: time="2024-10-08T19:48:55.110964684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m4tct,Uid:8265c9a7-18d5-4963-acf0-ea56062f13cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"050cf02963797d06fdb070114b9e49c973f3261cb993220db276ff07f6623ac3\"" Oct 8 19:48:55.118926 containerd[1465]: time="2024-10-08T19:48:55.118869681Z" level=info msg="CreateContainer within sandbox \"050cf02963797d06fdb070114b9e49c973f3261cb993220db276ff07f6623ac3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 19:48:55.136790 containerd[1465]: time="2024-10-08T19:48:55.136727657Z" level=info msg="CreateContainer within sandbox \"050cf02963797d06fdb070114b9e49c973f3261cb993220db276ff07f6623ac3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"041c9e11dbcffb43e632fcfa9720d79d04d1a6bbeda38eccff5c85222a063d10\"" Oct 8 19:48:55.138360 containerd[1465]: time="2024-10-08T19:48:55.137224822Z" level=info msg="StartContainer for 
\"041c9e11dbcffb43e632fcfa9720d79d04d1a6bbeda38eccff5c85222a063d10\"" Oct 8 19:48:55.173642 systemd[1]: Started cri-containerd-041c9e11dbcffb43e632fcfa9720d79d04d1a6bbeda38eccff5c85222a063d10.scope - libcontainer container 041c9e11dbcffb43e632fcfa9720d79d04d1a6bbeda38eccff5c85222a063d10. Oct 8 19:48:55.218839 containerd[1465]: time="2024-10-08T19:48:55.218783866Z" level=info msg="StartContainer for \"041c9e11dbcffb43e632fcfa9720d79d04d1a6bbeda38eccff5c85222a063d10\" returns successfully" Oct 8 19:48:55.237529 systemd[1]: cri-containerd-041c9e11dbcffb43e632fcfa9720d79d04d1a6bbeda38eccff5c85222a063d10.scope: Deactivated successfully. Oct 8 19:48:55.272748 containerd[1465]: time="2024-10-08T19:48:55.272495395Z" level=info msg="shim disconnected" id=041c9e11dbcffb43e632fcfa9720d79d04d1a6bbeda38eccff5c85222a063d10 namespace=k8s.io Oct 8 19:48:55.272748 containerd[1465]: time="2024-10-08T19:48:55.272614357Z" level=warning msg="cleaning up after shim disconnected" id=041c9e11dbcffb43e632fcfa9720d79d04d1a6bbeda38eccff5c85222a063d10 namespace=k8s.io Oct 8 19:48:55.272748 containerd[1465]: time="2024-10-08T19:48:55.272628997Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:48:55.509133 kubelet[2668]: E1008 19:48:55.508745 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4tsj5" podUID="9faa311e-d6bb-4ee4-9110-b3120539788f" Oct 8 19:48:55.514741 kubelet[2668]: I1008 19:48:55.514678 2668 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e19a671-2c1a-49eb-997d-fb6b4d6a409d" path="/var/lib/kubelet/pods/1e19a671-2c1a-49eb-997d-fb6b4d6a409d/volumes" Oct 8 19:48:55.648443 containerd[1465]: time="2024-10-08T19:48:55.648147617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 8 19:48:57.507801 kubelet[2668]: 
E1008 19:48:57.507746 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4tsj5" podUID="9faa311e-d6bb-4ee4-9110-b3120539788f" Oct 8 19:48:58.382039 containerd[1465]: time="2024-10-08T19:48:58.380904700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:58.382039 containerd[1465]: time="2024-10-08T19:48:58.381396345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Oct 8 19:48:58.382039 containerd[1465]: time="2024-10-08T19:48:58.381984550Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:58.384607 containerd[1465]: time="2024-10-08T19:48:58.384559935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:58.385628 containerd[1465]: time="2024-10-08T19:48:58.385594865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 2.737392247s" Oct 8 19:48:58.385628 containerd[1465]: time="2024-10-08T19:48:58.385624865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Oct 8 19:48:58.389551 
containerd[1465]: time="2024-10-08T19:48:58.389517263Z" level=info msg="CreateContainer within sandbox \"050cf02963797d06fdb070114b9e49c973f3261cb993220db276ff07f6623ac3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 8 19:48:58.406727 containerd[1465]: time="2024-10-08T19:48:58.406593507Z" level=info msg="CreateContainer within sandbox \"050cf02963797d06fdb070114b9e49c973f3261cb993220db276ff07f6623ac3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9d3fc6223d5563d6f562834ef7b435145b7245d83d9d36740615d92be25e74ef\"" Oct 8 19:48:58.408531 containerd[1465]: time="2024-10-08T19:48:58.407791879Z" level=info msg="StartContainer for \"9d3fc6223d5563d6f562834ef7b435145b7245d83d9d36740615d92be25e74ef\"" Oct 8 19:48:58.445649 systemd[1]: Started cri-containerd-9d3fc6223d5563d6f562834ef7b435145b7245d83d9d36740615d92be25e74ef.scope - libcontainer container 9d3fc6223d5563d6f562834ef7b435145b7245d83d9d36740615d92be25e74ef. Oct 8 19:48:58.485726 containerd[1465]: time="2024-10-08T19:48:58.485685348Z" level=info msg="StartContainer for \"9d3fc6223d5563d6f562834ef7b435145b7245d83d9d36740615d92be25e74ef\" returns successfully" Oct 8 19:48:58.929190 containerd[1465]: time="2024-10-08T19:48:58.929116174Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 19:48:58.933635 systemd[1]: cri-containerd-9d3fc6223d5563d6f562834ef7b435145b7245d83d9d36740615d92be25e74ef.scope: Deactivated successfully. Oct 8 19:48:58.957393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d3fc6223d5563d6f562834ef7b435145b7245d83d9d36740615d92be25e74ef-rootfs.mount: Deactivated successfully. 
Oct 8 19:48:58.975999 kubelet[2668]: I1008 19:48:58.975939 2668 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Oct 8 19:48:59.022263 systemd[1]: Created slice kubepods-burstable-podae763748_7e40_4f96_99c0_7ae5c334868c.slice - libcontainer container kubepods-burstable-podae763748_7e40_4f96_99c0_7ae5c334868c.slice. Oct 8 19:48:59.038419 systemd[1]: Created slice kubepods-burstable-podcb1fd1e2_74d5_48f1_a9cd_15caf208dc34.slice - libcontainer container kubepods-burstable-podcb1fd1e2_74d5_48f1_a9cd_15caf208dc34.slice. Oct 8 19:48:59.048096 systemd[1]: Created slice kubepods-besteffort-pod5c9902a2_1b86_4636_a819_11066fbd9eff.slice - libcontainer container kubepods-besteffort-pod5c9902a2_1b86_4636_a819_11066fbd9eff.slice. Oct 8 19:48:59.053986 containerd[1465]: time="2024-10-08T19:48:59.053925331Z" level=info msg="shim disconnected" id=9d3fc6223d5563d6f562834ef7b435145b7245d83d9d36740615d92be25e74ef namespace=k8s.io Oct 8 19:48:59.053986 containerd[1465]: time="2024-10-08T19:48:59.053981211Z" level=warning msg="cleaning up after shim disconnected" id=9d3fc6223d5563d6f562834ef7b435145b7245d83d9d36740615d92be25e74ef namespace=k8s.io Oct 8 19:48:59.053986 containerd[1465]: time="2024-10-08T19:48:59.053991372Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:48:59.213815 kubelet[2668]: I1008 19:48:59.213610 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz6bd\" (UniqueName: \"kubernetes.io/projected/ae763748-7e40-4f96-99c0-7ae5c334868c-kube-api-access-bz6bd\") pod \"coredns-6f6b679f8f-46fsl\" (UID: \"ae763748-7e40-4f96-99c0-7ae5c334868c\") " pod="kube-system/coredns-6f6b679f8f-46fsl" Oct 8 19:48:59.213815 kubelet[2668]: I1008 19:48:59.213692 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5wgb\" (UniqueName: 
\"kubernetes.io/projected/cb1fd1e2-74d5-48f1-a9cd-15caf208dc34-kube-api-access-f5wgb\") pod \"coredns-6f6b679f8f-lj4l2\" (UID: \"cb1fd1e2-74d5-48f1-a9cd-15caf208dc34\") " pod="kube-system/coredns-6f6b679f8f-lj4l2" Oct 8 19:48:59.213815 kubelet[2668]: I1008 19:48:59.213716 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lktk6\" (UniqueName: \"kubernetes.io/projected/5c9902a2-1b86-4636-a819-11066fbd9eff-kube-api-access-lktk6\") pod \"calico-kube-controllers-69d49cbd7d-qzw68\" (UID: \"5c9902a2-1b86-4636-a819-11066fbd9eff\") " pod="calico-system/calico-kube-controllers-69d49cbd7d-qzw68" Oct 8 19:48:59.214235 kubelet[2668]: I1008 19:48:59.214101 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb1fd1e2-74d5-48f1-a9cd-15caf208dc34-config-volume\") pod \"coredns-6f6b679f8f-lj4l2\" (UID: \"cb1fd1e2-74d5-48f1-a9cd-15caf208dc34\") " pod="kube-system/coredns-6f6b679f8f-lj4l2" Oct 8 19:48:59.214235 kubelet[2668]: I1008 19:48:59.214136 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c9902a2-1b86-4636-a819-11066fbd9eff-tigera-ca-bundle\") pod \"calico-kube-controllers-69d49cbd7d-qzw68\" (UID: \"5c9902a2-1b86-4636-a819-11066fbd9eff\") " pod="calico-system/calico-kube-controllers-69d49cbd7d-qzw68" Oct 8 19:48:59.214235 kubelet[2668]: I1008 19:48:59.214182 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae763748-7e40-4f96-99c0-7ae5c334868c-config-volume\") pod \"coredns-6f6b679f8f-46fsl\" (UID: \"ae763748-7e40-4f96-99c0-7ae5c334868c\") " pod="kube-system/coredns-6f6b679f8f-46fsl" Oct 8 19:48:59.342054 containerd[1465]: time="2024-10-08T19:48:59.342011521Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-lj4l2,Uid:cb1fd1e2-74d5-48f1-a9cd-15caf208dc34,Namespace:kube-system,Attempt:0,}" Oct 8 19:48:59.353198 containerd[1465]: time="2024-10-08T19:48:59.352884825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69d49cbd7d-qzw68,Uid:5c9902a2-1b86-4636-a819-11066fbd9eff,Namespace:calico-system,Attempt:0,}" Oct 8 19:48:59.516122 systemd[1]: Created slice kubepods-besteffort-pod9faa311e_d6bb_4ee4_9110_b3120539788f.slice - libcontainer container kubepods-besteffort-pod9faa311e_d6bb_4ee4_9110_b3120539788f.slice. Oct 8 19:48:59.520245 containerd[1465]: time="2024-10-08T19:48:59.520115062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4tsj5,Uid:9faa311e-d6bb-4ee4-9110-b3120539788f,Namespace:calico-system,Attempt:0,}" Oct 8 19:48:59.527364 containerd[1465]: time="2024-10-08T19:48:59.527239170Z" level=error msg="Failed to destroy network for sandbox \"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:59.528764 containerd[1465]: time="2024-10-08T19:48:59.528707504Z" level=error msg="encountered an error cleaning up failed sandbox \"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:59.529532 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17-shm.mount: Deactivated successfully. 
Oct 8 19:48:59.530511 containerd[1465]: time="2024-10-08T19:48:59.528779784Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lj4l2,Uid:cb1fd1e2-74d5-48f1-a9cd-15caf208dc34,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:59.531243 kubelet[2668]: E1008 19:48:59.530802 2668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:59.531243 kubelet[2668]: E1008 19:48:59.530875 2668 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lj4l2" Oct 8 19:48:59.531243 kubelet[2668]: E1008 19:48:59.531001 2668 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-lj4l2" Oct 8 19:48:59.531368 
kubelet[2668]: E1008 19:48:59.531163 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-lj4l2_kube-system(cb1fd1e2-74d5-48f1-a9cd-15caf208dc34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-lj4l2_kube-system(cb1fd1e2-74d5-48f1-a9cd-15caf208dc34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-lj4l2" podUID="cb1fd1e2-74d5-48f1-a9cd-15caf208dc34" Oct 8 19:48:59.537803 containerd[1465]: time="2024-10-08T19:48:59.537552308Z" level=error msg="Failed to destroy network for sandbox \"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:59.540292 containerd[1465]: time="2024-10-08T19:48:59.540122133Z" level=error msg="encountered an error cleaning up failed sandbox \"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:59.540292 containerd[1465]: time="2024-10-08T19:48:59.540192253Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69d49cbd7d-qzw68,Uid:5c9902a2-1b86-4636-a819-11066fbd9eff,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:59.544112 kubelet[2668]: E1008 19:48:59.540639 2668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:59.544112 kubelet[2668]: E1008 19:48:59.540716 2668 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69d49cbd7d-qzw68" Oct 8 19:48:59.544112 kubelet[2668]: E1008 19:48:59.540735 2668 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69d49cbd7d-qzw68" Oct 8 19:48:59.541081 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8-shm.mount: Deactivated successfully. 
Oct 8 19:48:59.544604 kubelet[2668]: E1008 19:48:59.541855 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69d49cbd7d-qzw68_calico-system(5c9902a2-1b86-4636-a819-11066fbd9eff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69d49cbd7d-qzw68_calico-system(5c9902a2-1b86-4636-a819-11066fbd9eff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69d49cbd7d-qzw68" podUID="5c9902a2-1b86-4636-a819-11066fbd9eff" Oct 8 19:48:59.607041 containerd[1465]: time="2024-10-08T19:48:59.606912610Z" level=error msg="Failed to destroy network for sandbox \"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:59.607524 containerd[1465]: time="2024-10-08T19:48:59.607407095Z" level=error msg="encountered an error cleaning up failed sandbox \"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:59.607524 containerd[1465]: time="2024-10-08T19:48:59.607487776Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4tsj5,Uid:9faa311e-d6bb-4ee4-9110-b3120539788f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:59.608200 kubelet[2668]: E1008 19:48:59.607807 2668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:59.608200 kubelet[2668]: E1008 19:48:59.607888 2668 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4tsj5" Oct 8 19:48:59.608200 kubelet[2668]: E1008 19:48:59.607908 2668 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4tsj5" Oct 8 19:48:59.608957 kubelet[2668]: E1008 19:48:59.607946 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4tsj5_calico-system(9faa311e-d6bb-4ee4-9110-b3120539788f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-4tsj5_calico-system(9faa311e-d6bb-4ee4-9110-b3120539788f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4tsj5" podUID="9faa311e-d6bb-4ee4-9110-b3120539788f" Oct 8 19:48:59.627995 containerd[1465]: time="2024-10-08T19:48:59.627941611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-46fsl,Uid:ae763748-7e40-4f96-99c0-7ae5c334868c,Namespace:kube-system,Attempt:0,}" Oct 8 19:48:59.660867 containerd[1465]: time="2024-10-08T19:48:59.660674644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 8 19:48:59.663413 kubelet[2668]: I1008 19:48:59.663274 2668 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Oct 8 19:48:59.665050 containerd[1465]: time="2024-10-08T19:48:59.664960045Z" level=info msg="StopPodSandbox for \"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\"" Oct 8 19:48:59.665142 containerd[1465]: time="2024-10-08T19:48:59.665129006Z" level=info msg="Ensure that sandbox 8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257 in task-service has been cleanup successfully" Oct 8 19:48:59.671148 kubelet[2668]: I1008 19:48:59.671077 2668 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Oct 8 19:48:59.673732 containerd[1465]: time="2024-10-08T19:48:59.671662189Z" level=info msg="StopPodSandbox for \"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\"" Oct 8 19:48:59.673732 containerd[1465]: time="2024-10-08T19:48:59.671829030Z" level=info 
msg="Ensure that sandbox f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8 in task-service has been cleanup successfully" Oct 8 19:48:59.676182 kubelet[2668]: I1008 19:48:59.675620 2668 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Oct 8 19:48:59.676546 containerd[1465]: time="2024-10-08T19:48:59.676508955Z" level=info msg="StopPodSandbox for \"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\"" Oct 8 19:48:59.679455 containerd[1465]: time="2024-10-08T19:48:59.679395022Z" level=info msg="Ensure that sandbox 25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17 in task-service has been cleanup successfully" Oct 8 19:48:59.739768 containerd[1465]: time="2024-10-08T19:48:59.739702918Z" level=error msg="StopPodSandbox for \"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\" failed" error="failed to destroy network for sandbox \"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:59.740258 kubelet[2668]: E1008 19:48:59.739979 2668 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Oct 8 19:48:59.740258 kubelet[2668]: E1008 19:48:59.740036 2668 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257"} 
Oct 8 19:48:59.740258 kubelet[2668]: E1008 19:48:59.740070 2668 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9faa311e-d6bb-4ee4-9110-b3120539788f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:48:59.740258 kubelet[2668]: E1008 19:48:59.740091 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9faa311e-d6bb-4ee4-9110-b3120539788f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4tsj5" podUID="9faa311e-d6bb-4ee4-9110-b3120539788f" Oct 8 19:48:59.746876 containerd[1465]: time="2024-10-08T19:48:59.746820546Z" level=error msg="StopPodSandbox for \"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\" failed" error="failed to destroy network for sandbox \"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:59.747181 kubelet[2668]: E1008 19:48:59.747145 2668 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Oct 8 19:48:59.747300 kubelet[2668]: E1008 19:48:59.747283 2668 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17"} Oct 8 19:48:59.747448 kubelet[2668]: E1008 19:48:59.747378 2668 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb1fd1e2-74d5-48f1-a9cd-15caf208dc34\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:48:59.747448 kubelet[2668]: E1008 19:48:59.747408 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb1fd1e2-74d5-48f1-a9cd-15caf208dc34\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-lj4l2" podUID="cb1fd1e2-74d5-48f1-a9cd-15caf208dc34" Oct 8 19:48:59.750567 containerd[1465]: time="2024-10-08T19:48:59.750516061Z" level=error msg="StopPodSandbox for \"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\" failed" error="failed to destroy network for sandbox \"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:59.750881 kubelet[2668]: E1008 19:48:59.750845 2668 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Oct 8 19:48:59.751072 kubelet[2668]: E1008 19:48:59.751050 2668 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8"} Oct 8 19:48:59.751169 kubelet[2668]: E1008 19:48:59.751155 2668 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5c9902a2-1b86-4636-a819-11066fbd9eff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:48:59.751280 kubelet[2668]: E1008 19:48:59.751262 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5c9902a2-1b86-4636-a819-11066fbd9eff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69d49cbd7d-qzw68" podUID="5c9902a2-1b86-4636-a819-11066fbd9eff" Oct 8 19:48:59.760928 containerd[1465]: time="2024-10-08T19:48:59.760867440Z" level=error msg="Failed to destroy network for sandbox \"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:59.761349 containerd[1465]: time="2024-10-08T19:48:59.761296684Z" level=error msg="encountered an error cleaning up failed sandbox \"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:59.762006 containerd[1465]: time="2024-10-08T19:48:59.761374285Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-46fsl,Uid:ae763748-7e40-4f96-99c0-7ae5c334868c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:59.762098 kubelet[2668]: E1008 19:48:59.761630 2668 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:59.762098 kubelet[2668]: E1008 
19:48:59.761688 2668 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-46fsl" Oct 8 19:48:59.762098 kubelet[2668]: E1008 19:48:59.761713 2668 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-46fsl" Oct 8 19:48:59.762184 kubelet[2668]: E1008 19:48:59.761761 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-46fsl_kube-system(ae763748-7e40-4f96-99c0-7ae5c334868c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-46fsl_kube-system(ae763748-7e40-4f96-99c0-7ae5c334868c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-46fsl" podUID="ae763748-7e40-4f96-99c0-7ae5c334868c" Oct 8 19:49:00.402371 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257-shm.mount: Deactivated successfully. 
Oct 8 19:49:00.679334 kubelet[2668]: I1008 19:49:00.679193 2668 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Oct 8 19:49:00.681541 containerd[1465]: time="2024-10-08T19:49:00.680748054Z" level=info msg="StopPodSandbox for \"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\"" Oct 8 19:49:00.681541 containerd[1465]: time="2024-10-08T19:49:00.681056817Z" level=info msg="Ensure that sandbox b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182 in task-service has been cleanup successfully" Oct 8 19:49:00.718381 containerd[1465]: time="2024-10-08T19:49:00.718254410Z" level=error msg="StopPodSandbox for \"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\" failed" error="failed to destroy network for sandbox \"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:49:00.718662 kubelet[2668]: E1008 19:49:00.718618 2668 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Oct 8 19:49:00.718735 kubelet[2668]: E1008 19:49:00.718685 2668 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182"} Oct 8 19:49:00.718763 kubelet[2668]: E1008 19:49:00.718738 2668 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"ae763748-7e40-4f96-99c0-7ae5c334868c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:49:00.718828 kubelet[2668]: E1008 19:49:00.718773 2668 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ae763748-7e40-4f96-99c0-7ae5c334868c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-46fsl" podUID="ae763748-7e40-4f96-99c0-7ae5c334868c" Oct 8 19:49:03.159209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount706236743.mount: Deactivated successfully. 
Oct 8 19:49:03.187045 containerd[1465]: time="2024-10-08T19:49:03.186965002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:03.189565 containerd[1465]: time="2024-10-08T19:49:03.189336744Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Oct 8 19:49:03.189565 containerd[1465]: time="2024-10-08T19:49:03.189477065Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:03.194253 containerd[1465]: time="2024-10-08T19:49:03.194140669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:03.195188 containerd[1465]: time="2024-10-08T19:49:03.194748354Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 3.53400903s" Oct 8 19:49:03.195188 containerd[1465]: time="2024-10-08T19:49:03.194789435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Oct 8 19:49:03.213004 containerd[1465]: time="2024-10-08T19:49:03.212869322Z" level=info msg="CreateContainer within sandbox \"050cf02963797d06fdb070114b9e49c973f3261cb993220db276ff07f6623ac3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 19:49:03.230682 containerd[1465]: time="2024-10-08T19:49:03.229583398Z" level=info msg="CreateContainer 
within sandbox \"050cf02963797d06fdb070114b9e49c973f3261cb993220db276ff07f6623ac3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2fbd4c922b9f6bc1762c2adc2824df97073b7c904b99b24b219338c257a68480\"" Oct 8 19:49:03.230682 containerd[1465]: time="2024-10-08T19:49:03.230516166Z" level=info msg="StartContainer for \"2fbd4c922b9f6bc1762c2adc2824df97073b7c904b99b24b219338c257a68480\"" Oct 8 19:49:03.268780 systemd[1]: Started cri-containerd-2fbd4c922b9f6bc1762c2adc2824df97073b7c904b99b24b219338c257a68480.scope - libcontainer container 2fbd4c922b9f6bc1762c2adc2824df97073b7c904b99b24b219338c257a68480. Oct 8 19:49:03.307841 containerd[1465]: time="2024-10-08T19:49:03.307785963Z" level=info msg="StartContainer for \"2fbd4c922b9f6bc1762c2adc2824df97073b7c904b99b24b219338c257a68480\" returns successfully" Oct 8 19:49:03.388404 kubelet[2668]: I1008 19:49:03.388353 2668 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:49:03.552692 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 19:49:03.552902 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 8 19:49:03.719728 kubelet[2668]: I1008 19:49:03.719336 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-m4tct" podStartSLOduration=2.166531462 podStartE2EDuration="9.71931786s" podCreationTimestamp="2024-10-08 19:48:54 +0000 UTC" firstStartedPulling="2024-10-08 19:48:55.642979366 +0000 UTC m=+20.247634430" lastFinishedPulling="2024-10-08 19:49:03.195765804 +0000 UTC m=+27.800420828" observedRunningTime="2024-10-08 19:49:03.718414852 +0000 UTC m=+28.323069876" watchObservedRunningTime="2024-10-08 19:49:03.71931786 +0000 UTC m=+28.323972884" Oct 8 19:49:05.236045 kernel: bpftool[4120]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 19:49:05.467516 systemd-networkd[1364]: vxlan.calico: Link UP Oct 8 19:49:05.467526 systemd-networkd[1364]: vxlan.calico: Gained carrier Oct 8 19:49:06.820692 systemd-networkd[1364]: vxlan.calico: Gained IPv6LL Oct 8 19:49:11.511481 containerd[1465]: time="2024-10-08T19:49:11.510771220Z" level=info msg="StopPodSandbox for \"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\"" Oct 8 19:49:11.667232 containerd[1465]: 2024-10-08 19:49:11.593 [INFO][4230] k8s.go 608: Cleaning up netns ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Oct 8 19:49:11.667232 containerd[1465]: 2024-10-08 19:49:11.594 [INFO][4230] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" iface="eth0" netns="/var/run/netns/cni-efbf49ad-6119-2b5a-b603-fec8994d8082" Oct 8 19:49:11.667232 containerd[1465]: 2024-10-08 19:49:11.594 [INFO][4230] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" iface="eth0" netns="/var/run/netns/cni-efbf49ad-6119-2b5a-b603-fec8994d8082" Oct 8 19:49:11.667232 containerd[1465]: 2024-10-08 19:49:11.595 [INFO][4230] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" iface="eth0" netns="/var/run/netns/cni-efbf49ad-6119-2b5a-b603-fec8994d8082" Oct 8 19:49:11.667232 containerd[1465]: 2024-10-08 19:49:11.595 [INFO][4230] k8s.go 615: Releasing IP address(es) ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Oct 8 19:49:11.667232 containerd[1465]: 2024-10-08 19:49:11.595 [INFO][4230] utils.go 188: Calico CNI releasing IP address ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Oct 8 19:49:11.667232 containerd[1465]: 2024-10-08 19:49:11.649 [INFO][4236] ipam_plugin.go 417: Releasing address using handleID ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" HandleID="k8s-pod-network.8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Workload="ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0" Oct 8 19:49:11.667232 containerd[1465]: 2024-10-08 19:49:11.649 [INFO][4236] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:11.667232 containerd[1465]: 2024-10-08 19:49:11.649 [INFO][4236] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:49:11.667232 containerd[1465]: 2024-10-08 19:49:11.661 [WARNING][4236] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" HandleID="k8s-pod-network.8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Workload="ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0" Oct 8 19:49:11.667232 containerd[1465]: 2024-10-08 19:49:11.661 [INFO][4236] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" HandleID="k8s-pod-network.8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Workload="ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0" Oct 8 19:49:11.667232 containerd[1465]: 2024-10-08 19:49:11.662 [INFO][4236] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:11.667232 containerd[1465]: 2024-10-08 19:49:11.665 [INFO][4230] k8s.go 621: Teardown processing complete. ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Oct 8 19:49:11.669530 containerd[1465]: time="2024-10-08T19:49:11.667491603Z" level=info msg="TearDown network for sandbox \"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\" successfully" Oct 8 19:49:11.669530 containerd[1465]: time="2024-10-08T19:49:11.669494741Z" level=info msg="StopPodSandbox for \"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\" returns successfully" Oct 8 19:49:11.671350 containerd[1465]: time="2024-10-08T19:49:11.670145107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4tsj5,Uid:9faa311e-d6bb-4ee4-9110-b3120539788f,Namespace:calico-system,Attempt:1,}" Oct 8 19:49:11.669771 systemd[1]: run-netns-cni\x2defbf49ad\x2d6119\x2d2b5a\x2db603\x2dfec8994d8082.mount: Deactivated successfully. 
Oct 8 19:49:11.819257 systemd-networkd[1364]: cali8711e3e231b: Link UP Oct 8 19:49:11.820587 systemd-networkd[1364]: cali8711e3e231b: Gained carrier Oct 8 19:49:11.845404 containerd[1465]: 2024-10-08 19:49:11.723 [INFO][4244] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0 csi-node-driver- calico-system 9faa311e-d6bb-4ee4-9110-b3120539788f 777 0 2024-10-08 19:48:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:779867c8f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4081-1-0-2-870ec424ae csi-node-driver-4tsj5 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali8711e3e231b [] []}} ContainerID="d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02" Namespace="calico-system" Pod="csi-node-driver-4tsj5" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-" Oct 8 19:49:11.845404 containerd[1465]: 2024-10-08 19:49:11.723 [INFO][4244] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02" Namespace="calico-system" Pod="csi-node-driver-4tsj5" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0" Oct 8 19:49:11.845404 containerd[1465]: 2024-10-08 19:49:11.756 [INFO][4255] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02" HandleID="k8s-pod-network.d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02" Workload="ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0" Oct 8 19:49:11.845404 containerd[1465]: 2024-10-08 19:49:11.769 [INFO][4255] ipam_plugin.go 270: Auto assigning IP 
ContainerID="d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02" HandleID="k8s-pod-network.d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02" Workload="ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003163a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-1-0-2-870ec424ae", "pod":"csi-node-driver-4tsj5", "timestamp":"2024-10-08 19:49:11.75659119 +0000 UTC"}, Hostname:"ci-4081-1-0-2-870ec424ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:49:11.845404 containerd[1465]: 2024-10-08 19:49:11.769 [INFO][4255] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:11.845404 containerd[1465]: 2024-10-08 19:49:11.769 [INFO][4255] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:49:11.845404 containerd[1465]: 2024-10-08 19:49:11.769 [INFO][4255] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-2-870ec424ae' Oct 8 19:49:11.845404 containerd[1465]: 2024-10-08 19:49:11.772 [INFO][4255] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:11.845404 containerd[1465]: 2024-10-08 19:49:11.777 [INFO][4255] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:11.845404 containerd[1465]: 2024-10-08 19:49:11.783 [INFO][4255] ipam.go 489: Trying affinity for 192.168.10.0/26 host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:11.845404 containerd[1465]: 2024-10-08 19:49:11.786 [INFO][4255] ipam.go 155: Attempting to load block cidr=192.168.10.0/26 host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:11.845404 containerd[1465]: 2024-10-08 19:49:11.788 [INFO][4255] ipam.go 232: Affinity 
is confirmed and block has been loaded cidr=192.168.10.0/26 host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:11.845404 containerd[1465]: 2024-10-08 19:49:11.788 [INFO][4255] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.0/26 handle="k8s-pod-network.d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:11.845404 containerd[1465]: 2024-10-08 19:49:11.792 [INFO][4255] ipam.go 1685: Creating new handle: k8s-pod-network.d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02 Oct 8 19:49:11.845404 containerd[1465]: 2024-10-08 19:49:11.801 [INFO][4255] ipam.go 1203: Writing block in order to claim IPs block=192.168.10.0/26 handle="k8s-pod-network.d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:11.845404 containerd[1465]: 2024-10-08 19:49:11.809 [INFO][4255] ipam.go 1216: Successfully claimed IPs: [192.168.10.1/26] block=192.168.10.0/26 handle="k8s-pod-network.d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:11.845404 containerd[1465]: 2024-10-08 19:49:11.809 [INFO][4255] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.1/26] handle="k8s-pod-network.d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:11.845404 containerd[1465]: 2024-10-08 19:49:11.809 [INFO][4255] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:49:11.845404 containerd[1465]: 2024-10-08 19:49:11.809 [INFO][4255] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.10.1/26] IPv6=[] ContainerID="d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02" HandleID="k8s-pod-network.d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02" Workload="ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0" Oct 8 19:49:11.847539 containerd[1465]: 2024-10-08 19:49:11.812 [INFO][4244] k8s.go 386: Populated endpoint ContainerID="d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02" Namespace="calico-system" Pod="csi-node-driver-4tsj5" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9faa311e-d6bb-4ee4-9110-b3120539788f", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-2-870ec424ae", ContainerID:"", Pod:"csi-node-driver-4tsj5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.10.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8711e3e231b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:11.847539 containerd[1465]: 2024-10-08 19:49:11.812 [INFO][4244] k8s.go 387: Calico CNI using IPs: [192.168.10.1/32] ContainerID="d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02" Namespace="calico-system" Pod="csi-node-driver-4tsj5" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0" Oct 8 19:49:11.847539 containerd[1465]: 2024-10-08 19:49:11.813 [INFO][4244] dataplane_linux.go 68: Setting the host side veth name to cali8711e3e231b ContainerID="d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02" Namespace="calico-system" Pod="csi-node-driver-4tsj5" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0" Oct 8 19:49:11.847539 containerd[1465]: 2024-10-08 19:49:11.820 [INFO][4244] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02" Namespace="calico-system" Pod="csi-node-driver-4tsj5" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0" Oct 8 19:49:11.847539 containerd[1465]: 2024-10-08 19:49:11.822 [INFO][4244] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02" Namespace="calico-system" Pod="csi-node-driver-4tsj5" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9faa311e-d6bb-4ee4-9110-b3120539788f", ResourceVersion:"777", Generation:0, 
CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-2-870ec424ae", ContainerID:"d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02", Pod:"csi-node-driver-4tsj5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.10.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8711e3e231b", MAC:"ce:4f:38:35:8c:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:11.847539 containerd[1465]: 2024-10-08 19:49:11.838 [INFO][4244] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02" Namespace="calico-system" Pod="csi-node-driver-4tsj5" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0" Oct 8 19:49:11.871341 containerd[1465]: time="2024-10-08T19:49:11.871021560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:49:11.871341 containerd[1465]: time="2024-10-08T19:49:11.871082400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:49:11.871341 containerd[1465]: time="2024-10-08T19:49:11.871116041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:49:11.871341 containerd[1465]: time="2024-10-08T19:49:11.871223001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:49:11.900841 systemd[1]: Started cri-containerd-d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02.scope - libcontainer container d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02. Oct 8 19:49:11.933703 containerd[1465]: time="2024-10-08T19:49:11.933650993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4tsj5,Uid:9faa311e-d6bb-4ee4-9110-b3120539788f,Namespace:calico-system,Attempt:1,} returns sandbox id \"d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02\"" Oct 8 19:49:11.937267 containerd[1465]: time="2024-10-08T19:49:11.936840821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 19:49:13.156614 systemd-networkd[1364]: cali8711e3e231b: Gained IPv6LL Oct 8 19:49:13.301757 containerd[1465]: time="2024-10-08T19:49:13.301705071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:13.302738 containerd[1465]: time="2024-10-08T19:49:13.302688320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Oct 8 19:49:13.303799 containerd[1465]: time="2024-10-08T19:49:13.303745929Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:13.306521 containerd[1465]: time="2024-10-08T19:49:13.306462313Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:13.308968 containerd[1465]: time="2024-10-08T19:49:13.308715292Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 1.371821991s" Oct 8 19:49:13.308968 containerd[1465]: time="2024-10-08T19:49:13.308756373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Oct 8 19:49:13.315773 containerd[1465]: time="2024-10-08T19:49:13.315730674Z" level=info msg="CreateContainer within sandbox \"d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 8 19:49:13.336317 containerd[1465]: time="2024-10-08T19:49:13.336257133Z" level=info msg="CreateContainer within sandbox \"d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"280e4084e4c09f98d7a3b5141a2c9ec42c6d415e83bdbcc5fb0af011461738f7\"" Oct 8 19:49:13.338174 containerd[1465]: time="2024-10-08T19:49:13.337922988Z" level=info msg="StartContainer for \"280e4084e4c09f98d7a3b5141a2c9ec42c6d415e83bdbcc5fb0af011461738f7\"" Oct 8 19:49:13.375640 systemd[1]: run-containerd-runc-k8s.io-280e4084e4c09f98d7a3b5141a2c9ec42c6d415e83bdbcc5fb0af011461738f7-runc.2Aux6W.mount: Deactivated successfully. 
Oct 8 19:49:13.384631 systemd[1]: Started cri-containerd-280e4084e4c09f98d7a3b5141a2c9ec42c6d415e83bdbcc5fb0af011461738f7.scope - libcontainer container 280e4084e4c09f98d7a3b5141a2c9ec42c6d415e83bdbcc5fb0af011461738f7. Oct 8 19:49:13.422175 containerd[1465]: time="2024-10-08T19:49:13.421557198Z" level=info msg="StartContainer for \"280e4084e4c09f98d7a3b5141a2c9ec42c6d415e83bdbcc5fb0af011461738f7\" returns successfully" Oct 8 19:49:13.424842 containerd[1465]: time="2024-10-08T19:49:13.424811826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 8 19:49:13.510123 containerd[1465]: time="2024-10-08T19:49:13.509951730Z" level=info msg="StopPodSandbox for \"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\"" Oct 8 19:49:13.610663 containerd[1465]: 2024-10-08 19:49:13.561 [INFO][4369] k8s.go 608: Cleaning up netns ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Oct 8 19:49:13.610663 containerd[1465]: 2024-10-08 19:49:13.562 [INFO][4369] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" iface="eth0" netns="/var/run/netns/cni-3bab6cf0-d174-ffa4-0449-abe4f6057b2f" Oct 8 19:49:13.610663 containerd[1465]: 2024-10-08 19:49:13.562 [INFO][4369] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" iface="eth0" netns="/var/run/netns/cni-3bab6cf0-d174-ffa4-0449-abe4f6057b2f" Oct 8 19:49:13.610663 containerd[1465]: 2024-10-08 19:49:13.563 [INFO][4369] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" iface="eth0" netns="/var/run/netns/cni-3bab6cf0-d174-ffa4-0449-abe4f6057b2f" Oct 8 19:49:13.610663 containerd[1465]: 2024-10-08 19:49:13.563 [INFO][4369] k8s.go 615: Releasing IP address(es) ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Oct 8 19:49:13.610663 containerd[1465]: 2024-10-08 19:49:13.563 [INFO][4369] utils.go 188: Calico CNI releasing IP address ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Oct 8 19:49:13.610663 containerd[1465]: 2024-10-08 19:49:13.586 [INFO][4375] ipam_plugin.go 417: Releasing address using handleID ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" HandleID="k8s-pod-network.f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Workload="ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0" Oct 8 19:49:13.610663 containerd[1465]: 2024-10-08 19:49:13.586 [INFO][4375] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:13.610663 containerd[1465]: 2024-10-08 19:49:13.586 [INFO][4375] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:49:13.610663 containerd[1465]: 2024-10-08 19:49:13.603 [WARNING][4375] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" HandleID="k8s-pod-network.f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Workload="ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0" Oct 8 19:49:13.610663 containerd[1465]: 2024-10-08 19:49:13.603 [INFO][4375] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" HandleID="k8s-pod-network.f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Workload="ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0" Oct 8 19:49:13.610663 containerd[1465]: 2024-10-08 19:49:13.606 [INFO][4375] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:13.610663 containerd[1465]: 2024-10-08 19:49:13.609 [INFO][4369] k8s.go 621: Teardown processing complete. ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Oct 8 19:49:13.611175 containerd[1465]: time="2024-10-08T19:49:13.611056452Z" level=info msg="TearDown network for sandbox \"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\" successfully" Oct 8 19:49:13.611175 containerd[1465]: time="2024-10-08T19:49:13.611152053Z" level=info msg="StopPodSandbox for \"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\" returns successfully" Oct 8 19:49:13.612513 containerd[1465]: time="2024-10-08T19:49:13.612052501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69d49cbd7d-qzw68,Uid:5c9902a2-1b86-4636-a819-11066fbd9eff,Namespace:calico-system,Attempt:1,}" Oct 8 19:49:13.756213 systemd-networkd[1364]: cali06e600d3d6a: Link UP Oct 8 19:49:13.756903 systemd-networkd[1364]: cali06e600d3d6a: Gained carrier Oct 8 19:49:13.772290 containerd[1465]: 2024-10-08 19:49:13.665 [INFO][4382] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0 calico-kube-controllers-69d49cbd7d- calico-system 5c9902a2-1b86-4636-a819-11066fbd9eff 790 0 2024-10-08 19:48:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69d49cbd7d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-1-0-2-870ec424ae calico-kube-controllers-69d49cbd7d-qzw68 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali06e600d3d6a [] []}} ContainerID="0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44" Namespace="calico-system" Pod="calico-kube-controllers-69d49cbd7d-qzw68" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-" Oct 8 19:49:13.772290 containerd[1465]: 2024-10-08 19:49:13.665 [INFO][4382] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44" Namespace="calico-system" Pod="calico-kube-controllers-69d49cbd7d-qzw68" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0" Oct 8 19:49:13.772290 containerd[1465]: 2024-10-08 19:49:13.697 [INFO][4392] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44" HandleID="k8s-pod-network.0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44" Workload="ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0" Oct 8 19:49:13.772290 containerd[1465]: 2024-10-08 19:49:13.719 [INFO][4392] ipam_plugin.go 270: Auto assigning IP ContainerID="0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44" HandleID="k8s-pod-network.0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44" 
Workload="ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316d70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-1-0-2-870ec424ae", "pod":"calico-kube-controllers-69d49cbd7d-qzw68", "timestamp":"2024-10-08 19:49:13.697753569 +0000 UTC"}, Hostname:"ci-4081-1-0-2-870ec424ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:49:13.772290 containerd[1465]: 2024-10-08 19:49:13.719 [INFO][4392] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:13.772290 containerd[1465]: 2024-10-08 19:49:13.719 [INFO][4392] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:49:13.772290 containerd[1465]: 2024-10-08 19:49:13.719 [INFO][4392] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-2-870ec424ae' Oct 8 19:49:13.772290 containerd[1465]: 2024-10-08 19:49:13.722 [INFO][4392] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:13.772290 containerd[1465]: 2024-10-08 19:49:13.727 [INFO][4392] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:13.772290 containerd[1465]: 2024-10-08 19:49:13.733 [INFO][4392] ipam.go 489: Trying affinity for 192.168.10.0/26 host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:13.772290 containerd[1465]: 2024-10-08 19:49:13.735 [INFO][4392] ipam.go 155: Attempting to load block cidr=192.168.10.0/26 host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:13.772290 containerd[1465]: 2024-10-08 19:49:13.737 [INFO][4392] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.0/26 host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:13.772290 containerd[1465]: 
2024-10-08 19:49:13.737 [INFO][4392] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.0/26 handle="k8s-pod-network.0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:13.772290 containerd[1465]: 2024-10-08 19:49:13.739 [INFO][4392] ipam.go 1685: Creating new handle: k8s-pod-network.0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44 Oct 8 19:49:13.772290 containerd[1465]: 2024-10-08 19:49:13.743 [INFO][4392] ipam.go 1203: Writing block in order to claim IPs block=192.168.10.0/26 handle="k8s-pod-network.0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:13.772290 containerd[1465]: 2024-10-08 19:49:13.749 [INFO][4392] ipam.go 1216: Successfully claimed IPs: [192.168.10.2/26] block=192.168.10.0/26 handle="k8s-pod-network.0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:13.772290 containerd[1465]: 2024-10-08 19:49:13.749 [INFO][4392] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.2/26] handle="k8s-pod-network.0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:13.772290 containerd[1465]: 2024-10-08 19:49:13.750 [INFO][4392] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:49:13.772290 containerd[1465]: 2024-10-08 19:49:13.750 [INFO][4392] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.10.2/26] IPv6=[] ContainerID="0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44" HandleID="k8s-pod-network.0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44" Workload="ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0" Oct 8 19:49:13.773670 containerd[1465]: 2024-10-08 19:49:13.752 [INFO][4382] k8s.go 386: Populated endpoint ContainerID="0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44" Namespace="calico-system" Pod="calico-kube-controllers-69d49cbd7d-qzw68" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0", GenerateName:"calico-kube-controllers-69d49cbd7d-", Namespace:"calico-system", SelfLink:"", UID:"5c9902a2-1b86-4636-a819-11066fbd9eff", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69d49cbd7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-2-870ec424ae", ContainerID:"", Pod:"calico-kube-controllers-69d49cbd7d-qzw68", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali06e600d3d6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:13.773670 containerd[1465]: 2024-10-08 19:49:13.752 [INFO][4382] k8s.go 387: Calico CNI using IPs: [192.168.10.2/32] ContainerID="0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44" Namespace="calico-system" Pod="calico-kube-controllers-69d49cbd7d-qzw68" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0" Oct 8 19:49:13.773670 containerd[1465]: 2024-10-08 19:49:13.752 [INFO][4382] dataplane_linux.go 68: Setting the host side veth name to cali06e600d3d6a ContainerID="0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44" Namespace="calico-system" Pod="calico-kube-controllers-69d49cbd7d-qzw68" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0" Oct 8 19:49:13.773670 containerd[1465]: 2024-10-08 19:49:13.754 [INFO][4382] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44" Namespace="calico-system" Pod="calico-kube-controllers-69d49cbd7d-qzw68" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0" Oct 8 19:49:13.773670 containerd[1465]: 2024-10-08 19:49:13.755 [INFO][4382] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44" Namespace="calico-system" Pod="calico-kube-controllers-69d49cbd7d-qzw68" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0", GenerateName:"calico-kube-controllers-69d49cbd7d-", Namespace:"calico-system", SelfLink:"", UID:"5c9902a2-1b86-4636-a819-11066fbd9eff", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69d49cbd7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-2-870ec424ae", ContainerID:"0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44", Pod:"calico-kube-controllers-69d49cbd7d-qzw68", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali06e600d3d6a", MAC:"7a:d0:12:f1:5e:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:13.773670 containerd[1465]: 2024-10-08 19:49:13.770 [INFO][4382] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44" Namespace="calico-system" Pod="calico-kube-controllers-69d49cbd7d-qzw68" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0" Oct 8 19:49:13.792580 
containerd[1465]: time="2024-10-08T19:49:13.792216274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:49:13.792580 containerd[1465]: time="2024-10-08T19:49:13.792331915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:49:13.792580 containerd[1465]: time="2024-10-08T19:49:13.792357475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:49:13.792860 containerd[1465]: time="2024-10-08T19:49:13.792560677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:49:13.811621 systemd[1]: Started cri-containerd-0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44.scope - libcontainer container 0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44. Oct 8 19:49:13.846314 containerd[1465]: time="2024-10-08T19:49:13.846276306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69d49cbd7d-qzw68,Uid:5c9902a2-1b86-4636-a819-11066fbd9eff,Namespace:calico-system,Attempt:1,} returns sandbox id \"0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44\"" Oct 8 19:49:14.336385 systemd[1]: run-netns-cni\x2d3bab6cf0\x2dd174\x2dffa4\x2d0449\x2dabe4f6057b2f.mount: Deactivated successfully. 
Oct 8 19:49:14.508294 containerd[1465]: time="2024-10-08T19:49:14.508194503Z" level=info msg="StopPodSandbox for \"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\"" Oct 8 19:49:14.615215 containerd[1465]: 2024-10-08 19:49:14.570 [INFO][4465] k8s.go 608: Cleaning up netns ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Oct 8 19:49:14.615215 containerd[1465]: 2024-10-08 19:49:14.571 [INFO][4465] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" iface="eth0" netns="/var/run/netns/cni-f8490038-a2c4-dc73-5fa4-b3a4e45b5049" Oct 8 19:49:14.615215 containerd[1465]: 2024-10-08 19:49:14.571 [INFO][4465] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" iface="eth0" netns="/var/run/netns/cni-f8490038-a2c4-dc73-5fa4-b3a4e45b5049" Oct 8 19:49:14.615215 containerd[1465]: 2024-10-08 19:49:14.571 [INFO][4465] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" iface="eth0" netns="/var/run/netns/cni-f8490038-a2c4-dc73-5fa4-b3a4e45b5049" Oct 8 19:49:14.615215 containerd[1465]: 2024-10-08 19:49:14.571 [INFO][4465] k8s.go 615: Releasing IP address(es) ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Oct 8 19:49:14.615215 containerd[1465]: 2024-10-08 19:49:14.571 [INFO][4465] utils.go 188: Calico CNI releasing IP address ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Oct 8 19:49:14.615215 containerd[1465]: 2024-10-08 19:49:14.596 [INFO][4471] ipam_plugin.go 417: Releasing address using handleID ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" HandleID="k8s-pod-network.b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0" Oct 8 19:49:14.615215 containerd[1465]: 2024-10-08 19:49:14.596 [INFO][4471] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:14.615215 containerd[1465]: 2024-10-08 19:49:14.596 [INFO][4471] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:49:14.615215 containerd[1465]: 2024-10-08 19:49:14.608 [WARNING][4471] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" HandleID="k8s-pod-network.b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0" Oct 8 19:49:14.615215 containerd[1465]: 2024-10-08 19:49:14.608 [INFO][4471] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" HandleID="k8s-pod-network.b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0" Oct 8 19:49:14.615215 containerd[1465]: 2024-10-08 19:49:14.610 [INFO][4471] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:14.615215 containerd[1465]: 2024-10-08 19:49:14.611 [INFO][4465] k8s.go 621: Teardown processing complete. ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Oct 8 19:49:14.614964 systemd[1]: run-netns-cni\x2df8490038\x2da2c4\x2ddc73\x2d5fa4\x2db3a4e45b5049.mount: Deactivated successfully. 
Oct 8 19:49:14.616268 containerd[1465]: time="2024-10-08T19:49:14.615737797Z" level=info msg="TearDown network for sandbox \"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\" successfully" Oct 8 19:49:14.616268 containerd[1465]: time="2024-10-08T19:49:14.615769517Z" level=info msg="StopPodSandbox for \"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\" returns successfully" Oct 8 19:49:14.617497 containerd[1465]: time="2024-10-08T19:49:14.617134649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-46fsl,Uid:ae763748-7e40-4f96-99c0-7ae5c334868c,Namespace:kube-system,Attempt:1,}" Oct 8 19:49:14.772389 systemd-networkd[1364]: cali9e5102db5b4: Link UP Oct 8 19:49:14.772945 systemd-networkd[1364]: cali9e5102db5b4: Gained carrier Oct 8 19:49:14.796746 containerd[1465]: 2024-10-08 19:49:14.672 [INFO][4478] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0 coredns-6f6b679f8f- kube-system ae763748-7e40-4f96-99c0-7ae5c334868c 798 0 2024-10-08 19:48:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-1-0-2-870ec424ae coredns-6f6b679f8f-46fsl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9e5102db5b4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-46fsl" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-" Oct 8 19:49:14.796746 containerd[1465]: 2024-10-08 19:49:14.672 [INFO][4478] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-46fsl" 
WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0" Oct 8 19:49:14.796746 containerd[1465]: 2024-10-08 19:49:14.709 [INFO][4490] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e" HandleID="k8s-pod-network.e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0" Oct 8 19:49:14.796746 containerd[1465]: 2024-10-08 19:49:14.723 [INFO][4490] ipam_plugin.go 270: Auto assigning IP ContainerID="e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e" HandleID="k8s-pod-network.e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000509b00), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-1-0-2-870ec424ae", "pod":"coredns-6f6b679f8f-46fsl", "timestamp":"2024-10-08 19:49:14.709187609 +0000 UTC"}, Hostname:"ci-4081-1-0-2-870ec424ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:49:14.796746 containerd[1465]: 2024-10-08 19:49:14.723 [INFO][4490] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:14.796746 containerd[1465]: 2024-10-08 19:49:14.723 [INFO][4490] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:49:14.796746 containerd[1465]: 2024-10-08 19:49:14.723 [INFO][4490] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-2-870ec424ae' Oct 8 19:49:14.796746 containerd[1465]: 2024-10-08 19:49:14.725 [INFO][4490] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:14.796746 containerd[1465]: 2024-10-08 19:49:14.730 [INFO][4490] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:14.796746 containerd[1465]: 2024-10-08 19:49:14.736 [INFO][4490] ipam.go 489: Trying affinity for 192.168.10.0/26 host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:14.796746 containerd[1465]: 2024-10-08 19:49:14.742 [INFO][4490] ipam.go 155: Attempting to load block cidr=192.168.10.0/26 host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:14.796746 containerd[1465]: 2024-10-08 19:49:14.747 [INFO][4490] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.0/26 host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:14.796746 containerd[1465]: 2024-10-08 19:49:14.747 [INFO][4490] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.0/26 handle="k8s-pod-network.e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:14.796746 containerd[1465]: 2024-10-08 19:49:14.750 [INFO][4490] ipam.go 1685: Creating new handle: k8s-pod-network.e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e Oct 8 19:49:14.796746 containerd[1465]: 2024-10-08 19:49:14.756 [INFO][4490] ipam.go 1203: Writing block in order to claim IPs block=192.168.10.0/26 handle="k8s-pod-network.e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:14.796746 containerd[1465]: 2024-10-08 19:49:14.765 [INFO][4490] ipam.go 1216: Successfully claimed IPs: [192.168.10.3/26] block=192.168.10.0/26 
handle="k8s-pod-network.e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:14.796746 containerd[1465]: 2024-10-08 19:49:14.766 [INFO][4490] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.3/26] handle="k8s-pod-network.e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:14.796746 containerd[1465]: 2024-10-08 19:49:14.766 [INFO][4490] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:14.796746 containerd[1465]: 2024-10-08 19:49:14.766 [INFO][4490] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.10.3/26] IPv6=[] ContainerID="e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e" HandleID="k8s-pod-network.e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0" Oct 8 19:49:14.797703 containerd[1465]: 2024-10-08 19:49:14.769 [INFO][4478] k8s.go 386: Populated endpoint ContainerID="e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-46fsl" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ae763748-7e40-4f96-99c0-7ae5c334868c", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-2-870ec424ae", ContainerID:"", Pod:"coredns-6f6b679f8f-46fsl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e5102db5b4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:14.797703 containerd[1465]: 2024-10-08 19:49:14.769 [INFO][4478] k8s.go 387: Calico CNI using IPs: [192.168.10.3/32] ContainerID="e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-46fsl" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0" Oct 8 19:49:14.797703 containerd[1465]: 2024-10-08 19:49:14.769 [INFO][4478] dataplane_linux.go 68: Setting the host side veth name to cali9e5102db5b4 ContainerID="e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-46fsl" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0" Oct 8 19:49:14.797703 containerd[1465]: 2024-10-08 19:49:14.771 [INFO][4478] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e" Namespace="kube-system" 
Pod="coredns-6f6b679f8f-46fsl" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0" Oct 8 19:49:14.797703 containerd[1465]: 2024-10-08 19:49:14.771 [INFO][4478] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-46fsl" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ae763748-7e40-4f96-99c0-7ae5c334868c", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-2-870ec424ae", ContainerID:"e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e", Pod:"coredns-6f6b679f8f-46fsl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e5102db5b4", MAC:"ee:cf:bd:4e:b0:d2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:14.797703 containerd[1465]: 2024-10-08 19:49:14.793 [INFO][4478] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e" Namespace="kube-system" Pod="coredns-6f6b679f8f-46fsl" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0" Oct 8 19:49:14.840819 containerd[1465]: time="2024-10-08T19:49:14.839386220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:49:14.842745 containerd[1465]: time="2024-10-08T19:49:14.841474038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:49:14.842745 containerd[1465]: time="2024-10-08T19:49:14.841499798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:49:14.842745 containerd[1465]: time="2024-10-08T19:49:14.841642079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:49:14.873509 systemd[1]: Started cri-containerd-e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e.scope - libcontainer container e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e. 
Oct 8 19:49:14.918614 containerd[1465]: time="2024-10-08T19:49:14.918570587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-46fsl,Uid:ae763748-7e40-4f96-99c0-7ae5c334868c,Namespace:kube-system,Attempt:1,} returns sandbox id \"e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e\"" Oct 8 19:49:14.924597 containerd[1465]: time="2024-10-08T19:49:14.924469879Z" level=info msg="CreateContainer within sandbox \"e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:49:14.948511 containerd[1465]: time="2024-10-08T19:49:14.948365286Z" level=info msg="CreateContainer within sandbox \"e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2bd7eecf46fd82cabdcd187a203ad70f5a8043bfe12fc407768069f4af949581\"" Oct 8 19:49:14.949614 containerd[1465]: time="2024-10-08T19:49:14.949457936Z" level=info msg="StartContainer for \"2bd7eecf46fd82cabdcd187a203ad70f5a8043bfe12fc407768069f4af949581\"" Oct 8 19:49:14.998738 systemd[1]: Started cri-containerd-2bd7eecf46fd82cabdcd187a203ad70f5a8043bfe12fc407768069f4af949581.scope - libcontainer container 2bd7eecf46fd82cabdcd187a203ad70f5a8043bfe12fc407768069f4af949581. 
Oct 8 19:49:15.046605 containerd[1465]: time="2024-10-08T19:49:15.046136254Z" level=info msg="StartContainer for \"2bd7eecf46fd82cabdcd187a203ad70f5a8043bfe12fc407768069f4af949581\" returns successfully" Oct 8 19:49:15.049538 containerd[1465]: time="2024-10-08T19:49:15.049450882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:15.052755 containerd[1465]: time="2024-10-08T19:49:15.052657950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Oct 8 19:49:15.054657 containerd[1465]: time="2024-10-08T19:49:15.053238755Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:15.056483 containerd[1465]: time="2024-10-08T19:49:15.056213821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:15.057462 containerd[1465]: time="2024-10-08T19:49:15.056973227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 1.631987319s" Oct 8 19:49:15.057462 containerd[1465]: time="2024-10-08T19:49:15.057015908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Oct 8 19:49:15.060316 containerd[1465]: 
time="2024-10-08T19:49:15.060277936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 8 19:49:15.065503 containerd[1465]: time="2024-10-08T19:49:15.065376140Z" level=info msg="CreateContainer within sandbox \"d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 8 19:49:15.107720 containerd[1465]: time="2024-10-08T19:49:15.107542864Z" level=info msg="CreateContainer within sandbox \"d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5414f4172c7ca73b5df39e56c12e80cde362c869f379dc09edc48df796b84938\"" Oct 8 19:49:15.112360 containerd[1465]: time="2024-10-08T19:49:15.110689691Z" level=info msg="StartContainer for \"5414f4172c7ca73b5df39e56c12e80cde362c869f379dc09edc48df796b84938\"" Oct 8 19:49:15.148652 systemd[1]: Started cri-containerd-5414f4172c7ca73b5df39e56c12e80cde362c869f379dc09edc48df796b84938.scope - libcontainer container 5414f4172c7ca73b5df39e56c12e80cde362c869f379dc09edc48df796b84938. 
Oct 8 19:49:15.197132 containerd[1465]: time="2024-10-08T19:49:15.196374672Z" level=info msg="StartContainer for \"5414f4172c7ca73b5df39e56c12e80cde362c869f379dc09edc48df796b84938\" returns successfully" Oct 8 19:49:15.512023 containerd[1465]: time="2024-10-08T19:49:15.510572347Z" level=info msg="StopPodSandbox for \"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\"" Oct 8 19:49:15.588762 systemd-networkd[1364]: cali06e600d3d6a: Gained IPv6LL Oct 8 19:49:15.620473 kubelet[2668]: I1008 19:49:15.619897 2668 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 8 19:49:15.620473 kubelet[2668]: I1008 19:49:15.619942 2668 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 8 19:49:15.632526 containerd[1465]: 2024-10-08 19:49:15.568 [INFO][4642] k8s.go 608: Cleaning up netns ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Oct 8 19:49:15.632526 containerd[1465]: 2024-10-08 19:49:15.568 [INFO][4642] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" iface="eth0" netns="/var/run/netns/cni-043ed159-f7ec-dd7d-6a50-8f29d42ca5a4" Oct 8 19:49:15.632526 containerd[1465]: 2024-10-08 19:49:15.568 [INFO][4642] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" iface="eth0" netns="/var/run/netns/cni-043ed159-f7ec-dd7d-6a50-8f29d42ca5a4" Oct 8 19:49:15.632526 containerd[1465]: 2024-10-08 19:49:15.568 [INFO][4642] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" iface="eth0" netns="/var/run/netns/cni-043ed159-f7ec-dd7d-6a50-8f29d42ca5a4" Oct 8 19:49:15.632526 containerd[1465]: 2024-10-08 19:49:15.569 [INFO][4642] k8s.go 615: Releasing IP address(es) ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Oct 8 19:49:15.632526 containerd[1465]: 2024-10-08 19:49:15.569 [INFO][4642] utils.go 188: Calico CNI releasing IP address ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Oct 8 19:49:15.632526 containerd[1465]: 2024-10-08 19:49:15.596 [INFO][4649] ipam_plugin.go 417: Releasing address using handleID ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" HandleID="k8s-pod-network.25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0" Oct 8 19:49:15.632526 containerd[1465]: 2024-10-08 19:49:15.596 [INFO][4649] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:15.632526 containerd[1465]: 2024-10-08 19:49:15.597 [INFO][4649] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:49:15.632526 containerd[1465]: 2024-10-08 19:49:15.609 [WARNING][4649] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" HandleID="k8s-pod-network.25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0" Oct 8 19:49:15.632526 containerd[1465]: 2024-10-08 19:49:15.609 [INFO][4649] ipam_plugin.go 445: Releasing address using workloadID ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" HandleID="k8s-pod-network.25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0" Oct 8 19:49:15.632526 containerd[1465]: 2024-10-08 19:49:15.616 [INFO][4649] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:15.632526 containerd[1465]: 2024-10-08 19:49:15.627 [INFO][4642] k8s.go 621: Teardown processing complete. ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Oct 8 19:49:15.632526 containerd[1465]: time="2024-10-08T19:49:15.631708194Z" level=info msg="TearDown network for sandbox \"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\" successfully" Oct 8 19:49:15.632526 containerd[1465]: time="2024-10-08T19:49:15.631734314Z" level=info msg="StopPodSandbox for \"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\" returns successfully" Oct 8 19:49:15.634188 systemd[1]: run-netns-cni\x2d043ed159\x2df7ec\x2ddd7d\x2d6a50\x2d8f29d42ca5a4.mount: Deactivated successfully. 
Oct 8 19:49:15.637720 containerd[1465]: time="2024-10-08T19:49:15.637595965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lj4l2,Uid:cb1fd1e2-74d5-48f1-a9cd-15caf208dc34,Namespace:kube-system,Attempt:1,}" Oct 8 19:49:15.765508 kubelet[2668]: I1008 19:49:15.765146 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-46fsl" podStartSLOduration=33.765130587 podStartE2EDuration="33.765130587s" podCreationTimestamp="2024-10-08 19:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:49:15.764956906 +0000 UTC m=+40.369611970" watchObservedRunningTime="2024-10-08 19:49:15.765130587 +0000 UTC m=+40.369785611" Oct 8 19:49:15.937135 systemd-networkd[1364]: cali4fbf733e840: Link UP Oct 8 19:49:15.938544 systemd-networkd[1364]: cali4fbf733e840: Gained carrier Oct 8 19:49:15.956221 kubelet[2668]: I1008 19:49:15.954871 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4tsj5" podStartSLOduration=23.830654385 podStartE2EDuration="26.954850387s" podCreationTimestamp="2024-10-08 19:48:49 +0000 UTC" firstStartedPulling="2024-10-08 19:49:11.935792731 +0000 UTC m=+36.540447755" lastFinishedPulling="2024-10-08 19:49:15.059988733 +0000 UTC m=+39.664643757" observedRunningTime="2024-10-08 19:49:15.809071327 +0000 UTC m=+40.413726351" watchObservedRunningTime="2024-10-08 19:49:15.954850387 +0000 UTC m=+40.559505411" Oct 8 19:49:15.957984 containerd[1465]: 2024-10-08 19:49:15.711 [INFO][4655] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0 coredns-6f6b679f8f- kube-system cb1fd1e2-74d5-48f1-a9cd-15caf208dc34 812 0 2024-10-08 19:48:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-1-0-2-870ec424ae coredns-6f6b679f8f-lj4l2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4fbf733e840 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88" Namespace="kube-system" Pod="coredns-6f6b679f8f-lj4l2" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-" Oct 8 19:49:15.957984 containerd[1465]: 2024-10-08 19:49:15.711 [INFO][4655] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88" Namespace="kube-system" Pod="coredns-6f6b679f8f-lj4l2" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0" Oct 8 19:49:15.957984 containerd[1465]: 2024-10-08 19:49:15.767 [INFO][4671] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88" HandleID="k8s-pod-network.b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0" Oct 8 19:49:15.957984 containerd[1465]: 2024-10-08 19:49:15.885 [INFO][4671] ipam_plugin.go 270: Auto assigning IP ContainerID="b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88" HandleID="k8s-pod-network.b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c730), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-1-0-2-870ec424ae", "pod":"coredns-6f6b679f8f-lj4l2", "timestamp":"2024-10-08 19:49:15.767133204 +0000 UTC"}, Hostname:"ci-4081-1-0-2-870ec424ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:49:15.957984 containerd[1465]: 2024-10-08 19:49:15.886 [INFO][4671] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:15.957984 containerd[1465]: 2024-10-08 19:49:15.886 [INFO][4671] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:49:15.957984 containerd[1465]: 2024-10-08 19:49:15.886 [INFO][4671] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-2-870ec424ae' Oct 8 19:49:15.957984 containerd[1465]: 2024-10-08 19:49:15.890 [INFO][4671] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:15.957984 containerd[1465]: 2024-10-08 19:49:15.898 [INFO][4671] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:15.957984 containerd[1465]: 2024-10-08 19:49:15.904 [INFO][4671] ipam.go 489: Trying affinity for 192.168.10.0/26 host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:15.957984 containerd[1465]: 2024-10-08 19:49:15.906 [INFO][4671] ipam.go 155: Attempting to load block cidr=192.168.10.0/26 host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:15.957984 containerd[1465]: 2024-10-08 19:49:15.909 [INFO][4671] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.0/26 host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:15.957984 containerd[1465]: 2024-10-08 19:49:15.909 [INFO][4671] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.0/26 handle="k8s-pod-network.b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:15.957984 containerd[1465]: 2024-10-08 19:49:15.911 [INFO][4671] ipam.go 1685: Creating new handle: k8s-pod-network.b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88 Oct 8 19:49:15.957984 containerd[1465]: 
2024-10-08 19:49:15.916 [INFO][4671] ipam.go 1203: Writing block in order to claim IPs block=192.168.10.0/26 handle="k8s-pod-network.b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:15.957984 containerd[1465]: 2024-10-08 19:49:15.931 [INFO][4671] ipam.go 1216: Successfully claimed IPs: [192.168.10.4/26] block=192.168.10.0/26 handle="k8s-pod-network.b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:15.957984 containerd[1465]: 2024-10-08 19:49:15.931 [INFO][4671] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.4/26] handle="k8s-pod-network.b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:15.957984 containerd[1465]: 2024-10-08 19:49:15.931 [INFO][4671] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:15.957984 containerd[1465]: 2024-10-08 19:49:15.931 [INFO][4671] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.10.4/26] IPv6=[] ContainerID="b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88" HandleID="k8s-pod-network.b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0" Oct 8 19:49:15.959246 containerd[1465]: 2024-10-08 19:49:15.933 [INFO][4655] k8s.go 386: Populated endpoint ContainerID="b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88" Namespace="kube-system" Pod="coredns-6f6b679f8f-lj4l2" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cb1fd1e2-74d5-48f1-a9cd-15caf208dc34", ResourceVersion:"812", 
Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-2-870ec424ae", ContainerID:"", Pod:"coredns-6f6b679f8f-lj4l2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4fbf733e840", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:15.959246 containerd[1465]: 2024-10-08 19:49:15.933 [INFO][4655] k8s.go 387: Calico CNI using IPs: [192.168.10.4/32] ContainerID="b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88" Namespace="kube-system" Pod="coredns-6f6b679f8f-lj4l2" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0" Oct 8 19:49:15.959246 containerd[1465]: 2024-10-08 19:49:15.933 [INFO][4655] dataplane_linux.go 68: Setting the host side veth name to cali4fbf733e840 ContainerID="b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88" 
Namespace="kube-system" Pod="coredns-6f6b679f8f-lj4l2" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0" Oct 8 19:49:15.959246 containerd[1465]: 2024-10-08 19:49:15.937 [INFO][4655] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88" Namespace="kube-system" Pod="coredns-6f6b679f8f-lj4l2" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0" Oct 8 19:49:15.959246 containerd[1465]: 2024-10-08 19:49:15.940 [INFO][4655] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88" Namespace="kube-system" Pod="coredns-6f6b679f8f-lj4l2" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cb1fd1e2-74d5-48f1-a9cd-15caf208dc34", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-2-870ec424ae", ContainerID:"b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88", Pod:"coredns-6f6b679f8f-lj4l2", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.10.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4fbf733e840", MAC:"c6:18:57:1a:a7:a4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:15.959246 containerd[1465]: 2024-10-08 19:49:15.953 [INFO][4655] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88" Namespace="kube-system" Pod="coredns-6f6b679f8f-lj4l2" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0" Oct 8 19:49:15.984735 containerd[1465]: time="2024-10-08T19:49:15.984581124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:49:15.985474 containerd[1465]: time="2024-10-08T19:49:15.984716005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:49:15.985566 containerd[1465]: time="2024-10-08T19:49:15.985469371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:49:15.985641 containerd[1465]: time="2024-10-08T19:49:15.985598652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:49:16.009839 systemd[1]: Started cri-containerd-b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88.scope - libcontainer container b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88. Oct 8 19:49:16.048564 containerd[1465]: time="2024-10-08T19:49:16.048363753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-lj4l2,Uid:cb1fd1e2-74d5-48f1-a9cd-15caf208dc34,Namespace:kube-system,Attempt:1,} returns sandbox id \"b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88\"" Oct 8 19:49:16.051831 containerd[1465]: time="2024-10-08T19:49:16.051783622Z" level=info msg="CreateContainer within sandbox \"b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:49:16.066933 containerd[1465]: time="2024-10-08T19:49:16.066860352Z" level=info msg="CreateContainer within sandbox \"b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fcee83352b8fad8bb00db4f0595af63fd57c6156cbb8ebabc4fefe4a5bbb029e\"" Oct 8 19:49:16.069490 containerd[1465]: time="2024-10-08T19:49:16.068069962Z" level=info msg="StartContainer for \"fcee83352b8fad8bb00db4f0595af63fd57c6156cbb8ebabc4fefe4a5bbb029e\"" Oct 8 19:49:16.103634 systemd[1]: Started cri-containerd-fcee83352b8fad8bb00db4f0595af63fd57c6156cbb8ebabc4fefe4a5bbb029e.scope - libcontainer container fcee83352b8fad8bb00db4f0595af63fd57c6156cbb8ebabc4fefe4a5bbb029e. 
Oct 8 19:49:16.135943 containerd[1465]: time="2024-10-08T19:49:16.135685704Z" level=info msg="StartContainer for \"fcee83352b8fad8bb00db4f0595af63fd57c6156cbb8ebabc4fefe4a5bbb029e\" returns successfully" Oct 8 19:49:16.289004 kubelet[2668]: I1008 19:49:16.288963 2668 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:49:16.444668 systemd[1]: run-containerd-runc-k8s.io-2fbd4c922b9f6bc1762c2adc2824df97073b7c904b99b24b219338c257a68480-runc.wBfcHu.mount: Deactivated successfully. Oct 8 19:49:16.677671 systemd-networkd[1364]: cali9e5102db5b4: Gained IPv6LL Oct 8 19:49:16.816549 kubelet[2668]: I1008 19:49:16.816439 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-lj4l2" podStartSLOduration=34.816405957 podStartE2EDuration="34.816405957s" podCreationTimestamp="2024-10-08 19:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:49:16.801470429 +0000 UTC m=+41.406125453" watchObservedRunningTime="2024-10-08 19:49:16.816405957 +0000 UTC m=+41.421060981" Oct 8 19:49:17.766403 systemd-networkd[1364]: cali4fbf733e840: Gained IPv6LL Oct 8 19:49:17.896396 containerd[1465]: time="2024-10-08T19:49:17.896348967Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:17.898417 containerd[1465]: time="2024-10-08T19:49:17.898379424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Oct 8 19:49:17.899660 containerd[1465]: time="2024-10-08T19:49:17.899608195Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:17.903326 containerd[1465]: time="2024-10-08T19:49:17.903273226Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:17.904495 containerd[1465]: time="2024-10-08T19:49:17.904315395Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 2.844000819s" Oct 8 19:49:17.904495 containerd[1465]: time="2024-10-08T19:49:17.904358915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Oct 8 19:49:17.939011 containerd[1465]: time="2024-10-08T19:49:17.938739490Z" level=info msg="CreateContainer within sandbox \"0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 8 19:49:17.951669 containerd[1465]: time="2024-10-08T19:49:17.951541799Z" level=info msg="CreateContainer within sandbox \"0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"186efefc9ecefda2bd4c3a22c70fcb36148957010d80edfe4d55c42e80e2814f\"" Oct 8 19:49:17.954224 containerd[1465]: time="2024-10-08T19:49:17.953353655Z" level=info msg="StartContainer for \"186efefc9ecefda2bd4c3a22c70fcb36148957010d80edfe4d55c42e80e2814f\"" Oct 8 19:49:17.988759 systemd[1]: Started cri-containerd-186efefc9ecefda2bd4c3a22c70fcb36148957010d80edfe4d55c42e80e2814f.scope - libcontainer container 186efefc9ecefda2bd4c3a22c70fcb36148957010d80edfe4d55c42e80e2814f. 
Oct 8 19:49:18.036984 containerd[1465]: time="2024-10-08T19:49:18.036090001Z" level=info msg="StartContainer for \"186efefc9ecefda2bd4c3a22c70fcb36148957010d80edfe4d55c42e80e2814f\" returns successfully" Oct 8 19:49:18.906161 kubelet[2668]: I1008 19:49:18.905953 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-69d49cbd7d-qzw68" podStartSLOduration=23.847804081 podStartE2EDuration="27.90593097s" podCreationTimestamp="2024-10-08 19:48:51 +0000 UTC" firstStartedPulling="2024-10-08 19:49:13.848383485 +0000 UTC m=+38.453038509" lastFinishedPulling="2024-10-08 19:49:17.906510374 +0000 UTC m=+42.511165398" observedRunningTime="2024-10-08 19:49:18.876770402 +0000 UTC m=+43.481425426" watchObservedRunningTime="2024-10-08 19:49:18.90593097 +0000 UTC m=+43.510585994" Oct 8 19:49:23.321905 systemd[1]: Created slice kubepods-besteffort-pod878d9fe1_e190_4057_ab15_2104a631e92c.slice - libcontainer container kubepods-besteffort-pod878d9fe1_e190_4057_ab15_2104a631e92c.slice. Oct 8 19:49:23.336632 systemd[1]: Created slice kubepods-besteffort-podbf5af98a_0d79_4263_befc_0f0149dda748.slice - libcontainer container kubepods-besteffort-podbf5af98a_0d79_4263_befc_0f0149dda748.slice. 
Oct 8 19:49:23.404665 kubelet[2668]: I1008 19:49:23.404553 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/878d9fe1-e190-4057-ab15-2104a631e92c-calico-apiserver-certs\") pod \"calico-apiserver-9c8d476bf-2zsk5\" (UID: \"878d9fe1-e190-4057-ab15-2104a631e92c\") " pod="calico-apiserver/calico-apiserver-9c8d476bf-2zsk5" Oct 8 19:49:23.404665 kubelet[2668]: I1008 19:49:23.404621 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvllc\" (UniqueName: \"kubernetes.io/projected/878d9fe1-e190-4057-ab15-2104a631e92c-kube-api-access-nvllc\") pod \"calico-apiserver-9c8d476bf-2zsk5\" (UID: \"878d9fe1-e190-4057-ab15-2104a631e92c\") " pod="calico-apiserver/calico-apiserver-9c8d476bf-2zsk5" Oct 8 19:49:23.505526 kubelet[2668]: I1008 19:49:23.505395 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trctl\" (UniqueName: \"kubernetes.io/projected/bf5af98a-0d79-4263-befc-0f0149dda748-kube-api-access-trctl\") pod \"calico-apiserver-9c8d476bf-g2cgf\" (UID: \"bf5af98a-0d79-4263-befc-0f0149dda748\") " pod="calico-apiserver/calico-apiserver-9c8d476bf-g2cgf" Oct 8 19:49:23.505526 kubelet[2668]: I1008 19:49:23.505563 2668 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bf5af98a-0d79-4263-befc-0f0149dda748-calico-apiserver-certs\") pod \"calico-apiserver-9c8d476bf-g2cgf\" (UID: \"bf5af98a-0d79-4263-befc-0f0149dda748\") " pod="calico-apiserver/calico-apiserver-9c8d476bf-g2cgf" Oct 8 19:49:23.505526 kubelet[2668]: E1008 19:49:23.505696 2668 secret.go:188] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 19:49:23.505526 kubelet[2668]: E1008 19:49:23.505802 2668 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/878d9fe1-e190-4057-ab15-2104a631e92c-calico-apiserver-certs podName:878d9fe1-e190-4057-ab15-2104a631e92c nodeName:}" failed. No retries permitted until 2024-10-08 19:49:24.005770442 +0000 UTC m=+48.610425466 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/878d9fe1-e190-4057-ab15-2104a631e92c-calico-apiserver-certs") pod "calico-apiserver-9c8d476bf-2zsk5" (UID: "878d9fe1-e190-4057-ab15-2104a631e92c") : secret "calico-apiserver-certs" not found Oct 8 19:49:23.607381 kubelet[2668]: E1008 19:49:23.606897 2668 secret.go:188] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 19:49:23.607381 kubelet[2668]: E1008 19:49:23.607007 2668 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf5af98a-0d79-4263-befc-0f0149dda748-calico-apiserver-certs podName:bf5af98a-0d79-4263-befc-0f0149dda748 nodeName:}" failed. No retries permitted until 2024-10-08 19:49:24.106982886 +0000 UTC m=+48.711637950 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/bf5af98a-0d79-4263-befc-0f0149dda748-calico-apiserver-certs") pod "calico-apiserver-9c8d476bf-g2cgf" (UID: "bf5af98a-0d79-4263-befc-0f0149dda748") : secret "calico-apiserver-certs" not found Oct 8 19:49:24.009632 kubelet[2668]: E1008 19:49:24.009307 2668 secret.go:188] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 19:49:24.009632 kubelet[2668]: E1008 19:49:24.009398 2668 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/878d9fe1-e190-4057-ab15-2104a631e92c-calico-apiserver-certs podName:878d9fe1-e190-4057-ab15-2104a631e92c nodeName:}" failed. 
No retries permitted until 2024-10-08 19:49:25.009375999 +0000 UTC m=+49.614031023 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/878d9fe1-e190-4057-ab15-2104a631e92c-calico-apiserver-certs") pod "calico-apiserver-9c8d476bf-2zsk5" (UID: "878d9fe1-e190-4057-ab15-2104a631e92c") : secret "calico-apiserver-certs" not found Oct 8 19:49:24.110225 kubelet[2668]: E1008 19:49:24.110146 2668 secret.go:188] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 19:49:24.110498 kubelet[2668]: E1008 19:49:24.110262 2668 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf5af98a-0d79-4263-befc-0f0149dda748-calico-apiserver-certs podName:bf5af98a-0d79-4263-befc-0f0149dda748 nodeName:}" failed. No retries permitted until 2024-10-08 19:49:25.110235717 +0000 UTC m=+49.714890781 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/bf5af98a-0d79-4263-befc-0f0149dda748-calico-apiserver-certs") pod "calico-apiserver-9c8d476bf-g2cgf" (UID: "bf5af98a-0d79-4263-befc-0f0149dda748") : secret "calico-apiserver-certs" not found Oct 8 19:49:25.140091 containerd[1465]: time="2024-10-08T19:49:25.139537258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c8d476bf-2zsk5,Uid:878d9fe1-e190-4057-ab15-2104a631e92c,Namespace:calico-apiserver,Attempt:0,}" Oct 8 19:49:25.143645 containerd[1465]: time="2024-10-08T19:49:25.143468650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c8d476bf-g2cgf,Uid:bf5af98a-0d79-4263-befc-0f0149dda748,Namespace:calico-apiserver,Attempt:0,}" Oct 8 19:49:25.377693 systemd-networkd[1364]: califd4ab188d95: Link UP Oct 8 19:49:25.379185 systemd-networkd[1364]: califd4ab188d95: Gained carrier Oct 8 19:49:25.402185 containerd[1465]: 2024-10-08 19:49:25.256 [INFO][4899] plugin.go 326: 
Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--2zsk5-eth0 calico-apiserver-9c8d476bf- calico-apiserver 878d9fe1-e190-4057-ab15-2104a631e92c 910 0 2024-10-08 19:49:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9c8d476bf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-1-0-2-870ec424ae calico-apiserver-9c8d476bf-2zsk5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califd4ab188d95 [] []}} ContainerID="d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1" Namespace="calico-apiserver" Pod="calico-apiserver-9c8d476bf-2zsk5" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--2zsk5-" Oct 8 19:49:25.402185 containerd[1465]: 2024-10-08 19:49:25.256 [INFO][4899] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1" Namespace="calico-apiserver" Pod="calico-apiserver-9c8d476bf-2zsk5" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--2zsk5-eth0" Oct 8 19:49:25.402185 containerd[1465]: 2024-10-08 19:49:25.305 [INFO][4922] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1" HandleID="k8s-pod-network.d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1" Workload="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--2zsk5-eth0" Oct 8 19:49:25.402185 containerd[1465]: 2024-10-08 19:49:25.319 [INFO][4922] ipam_plugin.go 270: Auto assigning IP ContainerID="d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1" HandleID="k8s-pod-network.d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1" 
Workload="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--2zsk5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030d670), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-1-0-2-870ec424ae", "pod":"calico-apiserver-9c8d476bf-2zsk5", "timestamp":"2024-10-08 19:49:25.30546787 +0000 UTC"}, Hostname:"ci-4081-1-0-2-870ec424ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:49:25.402185 containerd[1465]: 2024-10-08 19:49:25.319 [INFO][4922] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:25.402185 containerd[1465]: 2024-10-08 19:49:25.320 [INFO][4922] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:49:25.402185 containerd[1465]: 2024-10-08 19:49:25.320 [INFO][4922] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-2-870ec424ae' Oct 8 19:49:25.402185 containerd[1465]: 2024-10-08 19:49:25.328 [INFO][4922] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:25.402185 containerd[1465]: 2024-10-08 19:49:25.338 [INFO][4922] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:25.402185 containerd[1465]: 2024-10-08 19:49:25.345 [INFO][4922] ipam.go 489: Trying affinity for 192.168.10.0/26 host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:25.402185 containerd[1465]: 2024-10-08 19:49:25.349 [INFO][4922] ipam.go 155: Attempting to load block cidr=192.168.10.0/26 host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:25.402185 containerd[1465]: 2024-10-08 19:49:25.354 [INFO][4922] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.0/26 host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:25.402185 containerd[1465]: 2024-10-08 
19:49:25.354 [INFO][4922] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.0/26 handle="k8s-pod-network.d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:25.402185 containerd[1465]: 2024-10-08 19:49:25.356 [INFO][4922] ipam.go 1685: Creating new handle: k8s-pod-network.d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1 Oct 8 19:49:25.402185 containerd[1465]: 2024-10-08 19:49:25.361 [INFO][4922] ipam.go 1203: Writing block in order to claim IPs block=192.168.10.0/26 handle="k8s-pod-network.d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:25.402185 containerd[1465]: 2024-10-08 19:49:25.370 [INFO][4922] ipam.go 1216: Successfully claimed IPs: [192.168.10.5/26] block=192.168.10.0/26 handle="k8s-pod-network.d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:25.402185 containerd[1465]: 2024-10-08 19:49:25.370 [INFO][4922] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.5/26] handle="k8s-pod-network.d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:25.402185 containerd[1465]: 2024-10-08 19:49:25.370 [INFO][4922] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:49:25.402185 containerd[1465]: 2024-10-08 19:49:25.370 [INFO][4922] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.10.5/26] IPv6=[] ContainerID="d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1" HandleID="k8s-pod-network.d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1" Workload="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--2zsk5-eth0" Oct 8 19:49:25.404947 containerd[1465]: 2024-10-08 19:49:25.374 [INFO][4899] k8s.go 386: Populated endpoint ContainerID="d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1" Namespace="calico-apiserver" Pod="calico-apiserver-9c8d476bf-2zsk5" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--2zsk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--2zsk5-eth0", GenerateName:"calico-apiserver-9c8d476bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"878d9fe1-e190-4057-ab15-2104a631e92c", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 49, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c8d476bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-2-870ec424ae", ContainerID:"", Pod:"calico-apiserver-9c8d476bf-2zsk5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.5/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd4ab188d95", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:25.404947 containerd[1465]: 2024-10-08 19:49:25.374 [INFO][4899] k8s.go 387: Calico CNI using IPs: [192.168.10.5/32] ContainerID="d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1" Namespace="calico-apiserver" Pod="calico-apiserver-9c8d476bf-2zsk5" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--2zsk5-eth0" Oct 8 19:49:25.404947 containerd[1465]: 2024-10-08 19:49:25.374 [INFO][4899] dataplane_linux.go 68: Setting the host side veth name to califd4ab188d95 ContainerID="d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1" Namespace="calico-apiserver" Pod="calico-apiserver-9c8d476bf-2zsk5" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--2zsk5-eth0" Oct 8 19:49:25.404947 containerd[1465]: 2024-10-08 19:49:25.380 [INFO][4899] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1" Namespace="calico-apiserver" Pod="calico-apiserver-9c8d476bf-2zsk5" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--2zsk5-eth0" Oct 8 19:49:25.404947 containerd[1465]: 2024-10-08 19:49:25.380 [INFO][4899] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1" Namespace="calico-apiserver" Pod="calico-apiserver-9c8d476bf-2zsk5" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--2zsk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--2zsk5-eth0", GenerateName:"calico-apiserver-9c8d476bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"878d9fe1-e190-4057-ab15-2104a631e92c", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 49, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c8d476bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-2-870ec424ae", ContainerID:"d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1", Pod:"calico-apiserver-9c8d476bf-2zsk5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califd4ab188d95", MAC:"b6:e2:8a:7e:3c:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:25.404947 containerd[1465]: 2024-10-08 19:49:25.399 [INFO][4899] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1" Namespace="calico-apiserver" Pod="calico-apiserver-9c8d476bf-2zsk5" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--2zsk5-eth0" Oct 8 19:49:25.444672 containerd[1465]: time="2024-10-08T19:49:25.444336058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:49:25.444672 containerd[1465]: time="2024-10-08T19:49:25.444505580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:49:25.444672 containerd[1465]: time="2024-10-08T19:49:25.444616581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:49:25.445632 containerd[1465]: time="2024-10-08T19:49:25.444922583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:49:25.475716 systemd[1]: Started cri-containerd-d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1.scope - libcontainer container d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1. Oct 8 19:49:25.488155 systemd-networkd[1364]: cali4b69bc42849: Link UP Oct 8 19:49:25.489397 systemd-networkd[1364]: cali4b69bc42849: Gained carrier Oct 8 19:49:25.506710 containerd[1465]: 2024-10-08 19:49:25.262 [INFO][4910] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--g2cgf-eth0 calico-apiserver-9c8d476bf- calico-apiserver bf5af98a-0d79-4263-befc-0f0149dda748 912 0 2024-10-08 19:49:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9c8d476bf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-1-0-2-870ec424ae calico-apiserver-9c8d476bf-g2cgf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4b69bc42849 [] []}} ContainerID="080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f" Namespace="calico-apiserver" Pod="calico-apiserver-9c8d476bf-g2cgf" 
WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--g2cgf-" Oct 8 19:49:25.506710 containerd[1465]: 2024-10-08 19:49:25.263 [INFO][4910] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f" Namespace="calico-apiserver" Pod="calico-apiserver-9c8d476bf-g2cgf" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--g2cgf-eth0" Oct 8 19:49:25.506710 containerd[1465]: 2024-10-08 19:49:25.313 [INFO][4924] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f" HandleID="k8s-pod-network.080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f" Workload="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--g2cgf-eth0" Oct 8 19:49:25.506710 containerd[1465]: 2024-10-08 19:49:25.337 [INFO][4924] ipam_plugin.go 270: Auto assigning IP ContainerID="080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f" HandleID="k8s-pod-network.080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f" Workload="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--g2cgf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ebea0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-1-0-2-870ec424ae", "pod":"calico-apiserver-9c8d476bf-g2cgf", "timestamp":"2024-10-08 19:49:25.313492416 +0000 UTC"}, Hostname:"ci-4081-1-0-2-870ec424ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:49:25.506710 containerd[1465]: 2024-10-08 19:49:25.337 [INFO][4924] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:25.506710 containerd[1465]: 2024-10-08 19:49:25.370 [INFO][4924] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:49:25.506710 containerd[1465]: 2024-10-08 19:49:25.370 [INFO][4924] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-2-870ec424ae' Oct 8 19:49:25.506710 containerd[1465]: 2024-10-08 19:49:25.429 [INFO][4924] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:25.506710 containerd[1465]: 2024-10-08 19:49:25.439 [INFO][4924] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:25.506710 containerd[1465]: 2024-10-08 19:49:25.448 [INFO][4924] ipam.go 489: Trying affinity for 192.168.10.0/26 host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:25.506710 containerd[1465]: 2024-10-08 19:49:25.452 [INFO][4924] ipam.go 155: Attempting to load block cidr=192.168.10.0/26 host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:25.506710 containerd[1465]: 2024-10-08 19:49:25.456 [INFO][4924] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.10.0/26 host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:25.506710 containerd[1465]: 2024-10-08 19:49:25.456 [INFO][4924] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.10.0/26 handle="k8s-pod-network.080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:25.506710 containerd[1465]: 2024-10-08 19:49:25.458 [INFO][4924] ipam.go 1685: Creating new handle: k8s-pod-network.080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f Oct 8 19:49:25.506710 containerd[1465]: 2024-10-08 19:49:25.466 [INFO][4924] ipam.go 1203: Writing block in order to claim IPs block=192.168.10.0/26 handle="k8s-pod-network.080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:25.506710 containerd[1465]: 2024-10-08 19:49:25.478 [INFO][4924] ipam.go 1216: Successfully claimed IPs: [192.168.10.6/26] block=192.168.10.0/26 
handle="k8s-pod-network.080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:25.506710 containerd[1465]: 2024-10-08 19:49:25.478 [INFO][4924] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.10.6/26] handle="k8s-pod-network.080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f" host="ci-4081-1-0-2-870ec424ae" Oct 8 19:49:25.506710 containerd[1465]: 2024-10-08 19:49:25.478 [INFO][4924] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:25.506710 containerd[1465]: 2024-10-08 19:49:25.478 [INFO][4924] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.10.6/26] IPv6=[] ContainerID="080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f" HandleID="k8s-pod-network.080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f" Workload="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--g2cgf-eth0" Oct 8 19:49:25.507277 containerd[1465]: 2024-10-08 19:49:25.481 [INFO][4910] k8s.go 386: Populated endpoint ContainerID="080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f" Namespace="calico-apiserver" Pod="calico-apiserver-9c8d476bf-g2cgf" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--g2cgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--g2cgf-eth0", GenerateName:"calico-apiserver-9c8d476bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf5af98a-0d79-4263-befc-0f0149dda748", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 49, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c8d476bf", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-2-870ec424ae", ContainerID:"", Pod:"calico-apiserver-9c8d476bf-g2cgf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b69bc42849", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:25.507277 containerd[1465]: 2024-10-08 19:49:25.482 [INFO][4910] k8s.go 387: Calico CNI using IPs: [192.168.10.6/32] ContainerID="080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f" Namespace="calico-apiserver" Pod="calico-apiserver-9c8d476bf-g2cgf" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--g2cgf-eth0" Oct 8 19:49:25.507277 containerd[1465]: 2024-10-08 19:49:25.482 [INFO][4910] dataplane_linux.go 68: Setting the host side veth name to cali4b69bc42849 ContainerID="080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f" Namespace="calico-apiserver" Pod="calico-apiserver-9c8d476bf-g2cgf" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--g2cgf-eth0" Oct 8 19:49:25.507277 containerd[1465]: 2024-10-08 19:49:25.489 [INFO][4910] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f" Namespace="calico-apiserver" Pod="calico-apiserver-9c8d476bf-g2cgf" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--g2cgf-eth0" Oct 8 19:49:25.507277 containerd[1465]: 2024-10-08 19:49:25.489 
[INFO][4910] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f" Namespace="calico-apiserver" Pod="calico-apiserver-9c8d476bf-g2cgf" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--g2cgf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--g2cgf-eth0", GenerateName:"calico-apiserver-9c8d476bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf5af98a-0d79-4263-befc-0f0149dda748", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 49, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9c8d476bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-2-870ec424ae", ContainerID:"080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f", Pod:"calico-apiserver-9c8d476bf-g2cgf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b69bc42849", MAC:"36:15:08:38:40:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:25.507277 containerd[1465]: 2024-10-08 19:49:25.503 [INFO][4910] k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f" Namespace="calico-apiserver" Pod="calico-apiserver-9c8d476bf-g2cgf" WorkloadEndpoint="ci--4081--1--0--2--870ec424ae-k8s-calico--apiserver--9c8d476bf--g2cgf-eth0" Oct 8 19:49:25.552149 containerd[1465]: time="2024-10-08T19:49:25.551730987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:49:25.552149 containerd[1465]: time="2024-10-08T19:49:25.551794027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:49:25.552149 containerd[1465]: time="2024-10-08T19:49:25.551805627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:49:25.552149 containerd[1465]: time="2024-10-08T19:49:25.551878948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:49:25.567983 containerd[1465]: time="2024-10-08T19:49:25.567622038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c8d476bf-2zsk5,Uid:878d9fe1-e190-4057-ab15-2104a631e92c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1\"" Oct 8 19:49:25.573305 containerd[1465]: time="2024-10-08T19:49:25.573039123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 19:49:25.583654 systemd[1]: Started cri-containerd-080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f.scope - libcontainer container 080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f. 
Oct 8 19:49:25.630893 containerd[1465]: time="2024-10-08T19:49:25.630853681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9c8d476bf-g2cgf,Uid:bf5af98a-0d79-4263-befc-0f0149dda748,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f\"" Oct 8 19:49:26.660807 systemd-networkd[1364]: califd4ab188d95: Gained IPv6LL Oct 8 19:49:27.048276 systemd-networkd[1364]: cali4b69bc42849: Gained IPv6LL Oct 8 19:49:27.682814 containerd[1465]: time="2024-10-08T19:49:27.682770177Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:27.684913 containerd[1465]: time="2024-10-08T19:49:27.684829474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=37849884" Oct 8 19:49:27.686633 containerd[1465]: time="2024-10-08T19:49:27.686566289Z" level=info msg="ImageCreate event name:\"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:27.689694 containerd[1465]: time="2024-10-08T19:49:27.689620074Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 2.116535391s" Oct 8 19:49:27.689694 containerd[1465]: time="2024-10-08T19:49:27.689662434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\"" Oct 8 19:49:27.690612 containerd[1465]: time="2024-10-08T19:49:27.690482321Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:27.692356 containerd[1465]: time="2024-10-08T19:49:27.692276496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 19:49:27.695615 containerd[1465]: time="2024-10-08T19:49:27.695291480Z" level=info msg="CreateContainer within sandbox \"d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 8 19:49:27.727105 containerd[1465]: time="2024-10-08T19:49:27.726482176Z" level=info msg="CreateContainer within sandbox \"d9ae7051bc1894091d10b8150fb97393ca507b4c26a37b3fec2a67f58966d5a1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"94221f1dbff4b07282a78c4ccfb803a3ab45831633b43ed88ae996b0328e3930\"" Oct 8 19:49:27.728168 containerd[1465]: time="2024-10-08T19:49:27.728137910Z" level=info msg="StartContainer for \"94221f1dbff4b07282a78c4ccfb803a3ab45831633b43ed88ae996b0328e3930\"" Oct 8 19:49:27.770180 systemd[1]: Started cri-containerd-94221f1dbff4b07282a78c4ccfb803a3ab45831633b43ed88ae996b0328e3930.scope - libcontainer container 94221f1dbff4b07282a78c4ccfb803a3ab45831633b43ed88ae996b0328e3930. 
Oct 8 19:49:27.843969 containerd[1465]: time="2024-10-08T19:49:27.843850140Z" level=info msg="StartContainer for \"94221f1dbff4b07282a78c4ccfb803a3ab45831633b43ed88ae996b0328e3930\" returns successfully" Oct 8 19:49:28.091421 containerd[1465]: time="2024-10-08T19:49:28.091366089Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:28.095059 containerd[1465]: time="2024-10-08T19:49:28.095012279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Oct 8 19:49:28.098274 containerd[1465]: time="2024-10-08T19:49:28.098120744Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 405.800248ms" Oct 8 19:49:28.098274 containerd[1465]: time="2024-10-08T19:49:28.098273825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\"" Oct 8 19:49:28.102854 containerd[1465]: time="2024-10-08T19:49:28.102803422Z" level=info msg="CreateContainer within sandbox \"080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 8 19:49:28.126722 containerd[1465]: time="2024-10-08T19:49:28.126587777Z" level=info msg="CreateContainer within sandbox \"080e1cff4f1418be4eb6befbcbd187e8403ec1865a850b0c1853e47129581e3f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fdd45be6e99dc69335b9df46d1b22a79a1c951c7044615754ec97e624b539876\"" Oct 8 19:49:28.128363 containerd[1465]: 
time="2024-10-08T19:49:28.128322431Z" level=info msg="StartContainer for \"fdd45be6e99dc69335b9df46d1b22a79a1c951c7044615754ec97e624b539876\"" Oct 8 19:49:28.159660 systemd[1]: Started cri-containerd-fdd45be6e99dc69335b9df46d1b22a79a1c951c7044615754ec97e624b539876.scope - libcontainer container fdd45be6e99dc69335b9df46d1b22a79a1c951c7044615754ec97e624b539876. Oct 8 19:49:28.215400 containerd[1465]: time="2024-10-08T19:49:28.215212262Z" level=info msg="StartContainer for \"fdd45be6e99dc69335b9df46d1b22a79a1c951c7044615754ec97e624b539876\" returns successfully" Oct 8 19:49:28.882450 kubelet[2668]: I1008 19:49:28.880810 2668 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:49:28.892782 kubelet[2668]: I1008 19:49:28.892017 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9c8d476bf-2zsk5" podStartSLOduration=3.770577167 podStartE2EDuration="5.892000878s" podCreationTimestamp="2024-10-08 19:49:23 +0000 UTC" firstStartedPulling="2024-10-08 19:49:25.570671343 +0000 UTC m=+50.175326367" lastFinishedPulling="2024-10-08 19:49:27.692095054 +0000 UTC m=+52.296750078" observedRunningTime="2024-10-08 19:49:27.896100769 +0000 UTC m=+52.500755833" watchObservedRunningTime="2024-10-08 19:49:28.892000878 +0000 UTC m=+53.496655902" Oct 8 19:49:28.892782 kubelet[2668]: I1008 19:49:28.892504 2668 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9c8d476bf-g2cgf" podStartSLOduration=3.425604422 podStartE2EDuration="5.892493162s" podCreationTimestamp="2024-10-08 19:49:23 +0000 UTC" firstStartedPulling="2024-10-08 19:49:25.632644456 +0000 UTC m=+50.237299480" lastFinishedPulling="2024-10-08 19:49:28.099533196 +0000 UTC m=+52.704188220" observedRunningTime="2024-10-08 19:49:28.892145959 +0000 UTC m=+53.496800983" watchObservedRunningTime="2024-10-08 19:49:28.892493162 +0000 UTC m=+53.497148146" Oct 8 19:49:29.885769 kubelet[2668]: I1008 
19:49:29.885710 2668 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:49:35.546892 containerd[1465]: time="2024-10-08T19:49:35.546833609Z" level=info msg="StopPodSandbox for \"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\"" Oct 8 19:49:35.645586 containerd[1465]: 2024-10-08 19:49:35.604 [WARNING][5179] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0", GenerateName:"calico-kube-controllers-69d49cbd7d-", Namespace:"calico-system", SelfLink:"", UID:"5c9902a2-1b86-4636-a819-11066fbd9eff", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69d49cbd7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-2-870ec424ae", ContainerID:"0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44", Pod:"calico-kube-controllers-69d49cbd7d-qzw68", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, 
InterfaceName:"cali06e600d3d6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:35.645586 containerd[1465]: 2024-10-08 19:49:35.604 [INFO][5179] k8s.go 608: Cleaning up netns ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Oct 8 19:49:35.645586 containerd[1465]: 2024-10-08 19:49:35.604 [INFO][5179] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" iface="eth0" netns="" Oct 8 19:49:35.645586 containerd[1465]: 2024-10-08 19:49:35.604 [INFO][5179] k8s.go 615: Releasing IP address(es) ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Oct 8 19:49:35.645586 containerd[1465]: 2024-10-08 19:49:35.604 [INFO][5179] utils.go 188: Calico CNI releasing IP address ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Oct 8 19:49:35.645586 containerd[1465]: 2024-10-08 19:49:35.629 [INFO][5187] ipam_plugin.go 417: Releasing address using handleID ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" HandleID="k8s-pod-network.f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Workload="ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0" Oct 8 19:49:35.645586 containerd[1465]: 2024-10-08 19:49:35.630 [INFO][5187] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:35.645586 containerd[1465]: 2024-10-08 19:49:35.630 [INFO][5187] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:49:35.645586 containerd[1465]: 2024-10-08 19:49:35.639 [WARNING][5187] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" HandleID="k8s-pod-network.f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Workload="ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0" Oct 8 19:49:35.645586 containerd[1465]: 2024-10-08 19:49:35.639 [INFO][5187] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" HandleID="k8s-pod-network.f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Workload="ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0" Oct 8 19:49:35.645586 containerd[1465]: 2024-10-08 19:49:35.641 [INFO][5187] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:35.645586 containerd[1465]: 2024-10-08 19:49:35.643 [INFO][5179] k8s.go 621: Teardown processing complete. ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Oct 8 19:49:35.646241 containerd[1465]: time="2024-10-08T19:49:35.645631959Z" level=info msg="TearDown network for sandbox \"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\" successfully" Oct 8 19:49:35.646241 containerd[1465]: time="2024-10-08T19:49:35.645671359Z" level=info msg="StopPodSandbox for \"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\" returns successfully" Oct 8 19:49:35.646331 containerd[1465]: time="2024-10-08T19:49:35.646249004Z" level=info msg="RemovePodSandbox for \"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\"" Oct 8 19:49:35.658531 containerd[1465]: time="2024-10-08T19:49:35.658439142Z" level=info msg="Forcibly stopping sandbox \"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\"" Oct 8 19:49:35.750301 containerd[1465]: 2024-10-08 19:49:35.708 [WARNING][5205] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0", GenerateName:"calico-kube-controllers-69d49cbd7d-", Namespace:"calico-system", SelfLink:"", UID:"5c9902a2-1b86-4636-a819-11066fbd9eff", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69d49cbd7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-2-870ec424ae", ContainerID:"0f18c9fad61a56030fade684e5aebbd0b2e77d52c3dcfe3662fb883bca8b6d44", Pod:"calico-kube-controllers-69d49cbd7d-qzw68", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali06e600d3d6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:35.750301 containerd[1465]: 2024-10-08 19:49:35.709 [INFO][5205] k8s.go 608: Cleaning up netns ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Oct 8 19:49:35.750301 containerd[1465]: 2024-10-08 19:49:35.709 [INFO][5205] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" iface="eth0" netns="" Oct 8 19:49:35.750301 containerd[1465]: 2024-10-08 19:49:35.709 [INFO][5205] k8s.go 615: Releasing IP address(es) ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Oct 8 19:49:35.750301 containerd[1465]: 2024-10-08 19:49:35.709 [INFO][5205] utils.go 188: Calico CNI releasing IP address ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Oct 8 19:49:35.750301 containerd[1465]: 2024-10-08 19:49:35.730 [INFO][5212] ipam_plugin.go 417: Releasing address using handleID ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" HandleID="k8s-pod-network.f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Workload="ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0" Oct 8 19:49:35.750301 containerd[1465]: 2024-10-08 19:49:35.731 [INFO][5212] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:35.750301 containerd[1465]: 2024-10-08 19:49:35.731 [INFO][5212] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:49:35.750301 containerd[1465]: 2024-10-08 19:49:35.742 [WARNING][5212] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" HandleID="k8s-pod-network.f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Workload="ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0" Oct 8 19:49:35.750301 containerd[1465]: 2024-10-08 19:49:35.742 [INFO][5212] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" HandleID="k8s-pod-network.f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Workload="ci--4081--1--0--2--870ec424ae-k8s-calico--kube--controllers--69d49cbd7d--qzw68-eth0" Oct 8 19:49:35.750301 containerd[1465]: 2024-10-08 19:49:35.747 [INFO][5212] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:35.750301 containerd[1465]: 2024-10-08 19:49:35.748 [INFO][5205] k8s.go 621: Teardown processing complete. ContainerID="f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8" Oct 8 19:49:35.750301 containerd[1465]: time="2024-10-08T19:49:35.750288156Z" level=info msg="TearDown network for sandbox \"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\" successfully" Oct 8 19:49:35.753918 containerd[1465]: time="2024-10-08T19:49:35.753873665Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:49:35.754027 containerd[1465]: time="2024-10-08T19:49:35.753952146Z" level=info msg="RemovePodSandbox \"f4c6a2c8ff095d09cfba8695a48c86b83ad45387ab6f97800af933a8381401e8\" returns successfully" Oct 8 19:49:35.754756 containerd[1465]: time="2024-10-08T19:49:35.754730032Z" level=info msg="StopPodSandbox for \"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\"" Oct 8 19:49:35.867607 containerd[1465]: 2024-10-08 19:49:35.820 [WARNING][5230] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cb1fd1e2-74d5-48f1-a9cd-15caf208dc34", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-2-870ec424ae", ContainerID:"b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88", Pod:"coredns-6f6b679f8f-lj4l2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4fbf733e840", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:35.867607 containerd[1465]: 2024-10-08 19:49:35.820 [INFO][5230] k8s.go 608: Cleaning up netns ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Oct 8 19:49:35.867607 containerd[1465]: 2024-10-08 19:49:35.820 [INFO][5230] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" iface="eth0" netns="" Oct 8 19:49:35.867607 containerd[1465]: 2024-10-08 19:49:35.820 [INFO][5230] k8s.go 615: Releasing IP address(es) ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Oct 8 19:49:35.867607 containerd[1465]: 2024-10-08 19:49:35.821 [INFO][5230] utils.go 188: Calico CNI releasing IP address ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Oct 8 19:49:35.867607 containerd[1465]: 2024-10-08 19:49:35.847 [INFO][5237] ipam_plugin.go 417: Releasing address using handleID ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" HandleID="k8s-pod-network.25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0" Oct 8 19:49:35.867607 containerd[1465]: 2024-10-08 19:49:35.847 [INFO][5237] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:35.867607 containerd[1465]: 2024-10-08 19:49:35.847 [INFO][5237] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:49:35.867607 containerd[1465]: 2024-10-08 19:49:35.860 [WARNING][5237] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" HandleID="k8s-pod-network.25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0" Oct 8 19:49:35.867607 containerd[1465]: 2024-10-08 19:49:35.860 [INFO][5237] ipam_plugin.go 445: Releasing address using workloadID ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" HandleID="k8s-pod-network.25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0" Oct 8 19:49:35.867607 containerd[1465]: 2024-10-08 19:49:35.863 [INFO][5237] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:35.867607 containerd[1465]: 2024-10-08 19:49:35.865 [INFO][5230] k8s.go 621: Teardown processing complete. 
ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Oct 8 19:49:35.869584 containerd[1465]: time="2024-10-08T19:49:35.867575855Z" level=info msg="TearDown network for sandbox \"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\" successfully" Oct 8 19:49:35.869584 containerd[1465]: time="2024-10-08T19:49:35.869360389Z" level=info msg="StopPodSandbox for \"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\" returns successfully" Oct 8 19:49:35.870861 containerd[1465]: time="2024-10-08T19:49:35.870694839Z" level=info msg="RemovePodSandbox for \"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\"" Oct 8 19:49:35.870861 containerd[1465]: time="2024-10-08T19:49:35.870736120Z" level=info msg="Forcibly stopping sandbox \"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\"" Oct 8 19:49:35.966061 containerd[1465]: 2024-10-08 19:49:35.922 [WARNING][5255] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"cb1fd1e2-74d5-48f1-a9cd-15caf208dc34", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-2-870ec424ae", ContainerID:"b5f83cc70c1b146da4adbf226a010a125bcccc189d589f8039370f5db861df88", Pod:"coredns-6f6b679f8f-lj4l2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4fbf733e840", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:35.966061 containerd[1465]: 2024-10-08 19:49:35.922 [INFO][5255] k8s.go 
608: Cleaning up netns ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Oct 8 19:49:35.966061 containerd[1465]: 2024-10-08 19:49:35.922 [INFO][5255] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" iface="eth0" netns="" Oct 8 19:49:35.966061 containerd[1465]: 2024-10-08 19:49:35.922 [INFO][5255] k8s.go 615: Releasing IP address(es) ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Oct 8 19:49:35.966061 containerd[1465]: 2024-10-08 19:49:35.922 [INFO][5255] utils.go 188: Calico CNI releasing IP address ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Oct 8 19:49:35.966061 containerd[1465]: 2024-10-08 19:49:35.947 [INFO][5261] ipam_plugin.go 417: Releasing address using handleID ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" HandleID="k8s-pod-network.25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0" Oct 8 19:49:35.966061 containerd[1465]: 2024-10-08 19:49:35.948 [INFO][5261] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:35.966061 containerd[1465]: 2024-10-08 19:49:35.948 [INFO][5261] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:49:35.966061 containerd[1465]: 2024-10-08 19:49:35.959 [WARNING][5261] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" HandleID="k8s-pod-network.25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0" Oct 8 19:49:35.966061 containerd[1465]: 2024-10-08 19:49:35.959 [INFO][5261] ipam_plugin.go 445: Releasing address using workloadID ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" HandleID="k8s-pod-network.25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--lj4l2-eth0" Oct 8 19:49:35.966061 containerd[1465]: 2024-10-08 19:49:35.961 [INFO][5261] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:35.966061 containerd[1465]: 2024-10-08 19:49:35.963 [INFO][5255] k8s.go 621: Teardown processing complete. ContainerID="25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17" Oct 8 19:49:35.966576 containerd[1465]: time="2024-10-08T19:49:35.966138923Z" level=info msg="TearDown network for sandbox \"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\" successfully" Oct 8 19:49:35.971558 containerd[1465]: time="2024-10-08T19:49:35.971491606Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:49:35.971680 containerd[1465]: time="2024-10-08T19:49:35.971594287Z" level=info msg="RemovePodSandbox \"25889146c0bc5db66648919c0fc10cbae5306303d07033627dcbedb725a47c17\" returns successfully" Oct 8 19:49:35.972447 containerd[1465]: time="2024-10-08T19:49:35.972203971Z" level=info msg="StopPodSandbox for \"d324e005fc1d133a767234c1fe741eda2f96bd2fd254d4de90563717d6d87633\"" Oct 8 19:49:35.972447 containerd[1465]: time="2024-10-08T19:49:35.972352613Z" level=info msg="TearDown network for sandbox \"d324e005fc1d133a767234c1fe741eda2f96bd2fd254d4de90563717d6d87633\" successfully" Oct 8 19:49:35.972447 containerd[1465]: time="2024-10-08T19:49:35.972367133Z" level=info msg="StopPodSandbox for \"d324e005fc1d133a767234c1fe741eda2f96bd2fd254d4de90563717d6d87633\" returns successfully" Oct 8 19:49:35.973172 containerd[1465]: time="2024-10-08T19:49:35.973131899Z" level=info msg="RemovePodSandbox for \"d324e005fc1d133a767234c1fe741eda2f96bd2fd254d4de90563717d6d87633\"" Oct 8 19:49:35.973371 containerd[1465]: time="2024-10-08T19:49:35.973305980Z" level=info msg="Forcibly stopping sandbox \"d324e005fc1d133a767234c1fe741eda2f96bd2fd254d4de90563717d6d87633\"" Oct 8 19:49:35.973489 containerd[1465]: time="2024-10-08T19:49:35.973439421Z" level=info msg="TearDown network for sandbox \"d324e005fc1d133a767234c1fe741eda2f96bd2fd254d4de90563717d6d87633\" successfully" Oct 8 19:49:35.977540 containerd[1465]: time="2024-10-08T19:49:35.977471694Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d324e005fc1d133a767234c1fe741eda2f96bd2fd254d4de90563717d6d87633\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:49:35.977690 containerd[1465]: time="2024-10-08T19:49:35.977549174Z" level=info msg="RemovePodSandbox \"d324e005fc1d133a767234c1fe741eda2f96bd2fd254d4de90563717d6d87633\" returns successfully" Oct 8 19:49:35.978576 containerd[1465]: time="2024-10-08T19:49:35.978543022Z" level=info msg="StopPodSandbox for \"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\"" Oct 8 19:49:36.066294 containerd[1465]: 2024-10-08 19:49:36.029 [WARNING][5279] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9faa311e-d6bb-4ee4-9110-b3120539788f", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-2-870ec424ae", ContainerID:"d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02", Pod:"csi-node-driver-4tsj5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.10.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8711e3e231b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:36.066294 containerd[1465]: 2024-10-08 19:49:36.029 [INFO][5279] k8s.go 608: Cleaning up netns ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Oct 8 19:49:36.066294 containerd[1465]: 2024-10-08 19:49:36.029 [INFO][5279] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" iface="eth0" netns="" Oct 8 19:49:36.066294 containerd[1465]: 2024-10-08 19:49:36.029 [INFO][5279] k8s.go 615: Releasing IP address(es) ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Oct 8 19:49:36.066294 containerd[1465]: 2024-10-08 19:49:36.029 [INFO][5279] utils.go 188: Calico CNI releasing IP address ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Oct 8 19:49:36.066294 containerd[1465]: 2024-10-08 19:49:36.050 [INFO][5285] ipam_plugin.go 417: Releasing address using handleID ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" HandleID="k8s-pod-network.8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Workload="ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0" Oct 8 19:49:36.066294 containerd[1465]: 2024-10-08 19:49:36.050 [INFO][5285] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:36.066294 containerd[1465]: 2024-10-08 19:49:36.050 [INFO][5285] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:49:36.066294 containerd[1465]: 2024-10-08 19:49:36.060 [WARNING][5285] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" HandleID="k8s-pod-network.8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Workload="ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0" Oct 8 19:49:36.066294 containerd[1465]: 2024-10-08 19:49:36.060 [INFO][5285] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" HandleID="k8s-pod-network.8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Workload="ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0" Oct 8 19:49:36.066294 containerd[1465]: 2024-10-08 19:49:36.062 [INFO][5285] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:36.066294 containerd[1465]: 2024-10-08 19:49:36.064 [INFO][5279] k8s.go 621: Teardown processing complete. ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Oct 8 19:49:36.067559 containerd[1465]: time="2024-10-08T19:49:36.066338443Z" level=info msg="TearDown network for sandbox \"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\" successfully" Oct 8 19:49:36.067559 containerd[1465]: time="2024-10-08T19:49:36.066380283Z" level=info msg="StopPodSandbox for \"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\" returns successfully" Oct 8 19:49:36.067559 containerd[1465]: time="2024-10-08T19:49:36.066982408Z" level=info msg="RemovePodSandbox for \"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\"" Oct 8 19:49:36.067559 containerd[1465]: time="2024-10-08T19:49:36.067011848Z" level=info msg="Forcibly stopping sandbox \"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\"" Oct 8 19:49:36.165363 containerd[1465]: 2024-10-08 19:49:36.121 [WARNING][5303] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9faa311e-d6bb-4ee4-9110-b3120539788f", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-2-870ec424ae", ContainerID:"d0b7633dd5aa836adabb0b0792e8aeee5a0ebfdb26996dc4213abfdbc5e40f02", Pod:"csi-node-driver-4tsj5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.10.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali8711e3e231b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:36.165363 containerd[1465]: 2024-10-08 19:49:36.121 [INFO][5303] k8s.go 608: Cleaning up netns ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Oct 8 19:49:36.165363 containerd[1465]: 2024-10-08 19:49:36.121 [INFO][5303] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" iface="eth0" netns="" Oct 8 19:49:36.165363 containerd[1465]: 2024-10-08 19:49:36.121 [INFO][5303] k8s.go 615: Releasing IP address(es) ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Oct 8 19:49:36.165363 containerd[1465]: 2024-10-08 19:49:36.121 [INFO][5303] utils.go 188: Calico CNI releasing IP address ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Oct 8 19:49:36.165363 containerd[1465]: 2024-10-08 19:49:36.146 [INFO][5309] ipam_plugin.go 417: Releasing address using handleID ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" HandleID="k8s-pod-network.8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Workload="ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0" Oct 8 19:49:36.165363 containerd[1465]: 2024-10-08 19:49:36.146 [INFO][5309] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:36.165363 containerd[1465]: 2024-10-08 19:49:36.146 [INFO][5309] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:49:36.165363 containerd[1465]: 2024-10-08 19:49:36.156 [WARNING][5309] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" HandleID="k8s-pod-network.8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Workload="ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0" Oct 8 19:49:36.165363 containerd[1465]: 2024-10-08 19:49:36.156 [INFO][5309] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" HandleID="k8s-pod-network.8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Workload="ci--4081--1--0--2--870ec424ae-k8s-csi--node--driver--4tsj5-eth0" Oct 8 19:49:36.165363 containerd[1465]: 2024-10-08 19:49:36.159 [INFO][5309] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:36.165363 containerd[1465]: 2024-10-08 19:49:36.161 [INFO][5303] k8s.go 621: Teardown processing complete. ContainerID="8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257" Oct 8 19:49:36.167539 containerd[1465]: time="2024-10-08T19:49:36.166510722Z" level=info msg="TearDown network for sandbox \"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\" successfully" Oct 8 19:49:36.170961 containerd[1465]: time="2024-10-08T19:49:36.170888477Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:49:36.171230 containerd[1465]: time="2024-10-08T19:49:36.171201159Z" level=info msg="RemovePodSandbox \"8fcf2b30025bcb094fec38af50098fc20ad28625d2d7be4f28331a5153508257\" returns successfully" Oct 8 19:49:36.172308 containerd[1465]: time="2024-10-08T19:49:36.171940885Z" level=info msg="StopPodSandbox for \"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\"" Oct 8 19:49:36.262306 containerd[1465]: 2024-10-08 19:49:36.220 [WARNING][5327] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ae763748-7e40-4f96-99c0-7ae5c334868c", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-2-870ec424ae", ContainerID:"e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e", Pod:"coredns-6f6b679f8f-46fsl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e5102db5b4", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:36.262306 containerd[1465]: 2024-10-08 19:49:36.220 [INFO][5327] k8s.go 608: Cleaning up netns ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Oct 8 19:49:36.262306 containerd[1465]: 2024-10-08 19:49:36.221 [INFO][5327] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" iface="eth0" netns="" Oct 8 19:49:36.262306 containerd[1465]: 2024-10-08 19:49:36.221 [INFO][5327] k8s.go 615: Releasing IP address(es) ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Oct 8 19:49:36.262306 containerd[1465]: 2024-10-08 19:49:36.221 [INFO][5327] utils.go 188: Calico CNI releasing IP address ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Oct 8 19:49:36.262306 containerd[1465]: 2024-10-08 19:49:36.245 [INFO][5333] ipam_plugin.go 417: Releasing address using handleID ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" HandleID="k8s-pod-network.b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0" Oct 8 19:49:36.262306 containerd[1465]: 2024-10-08 19:49:36.246 [INFO][5333] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:36.262306 containerd[1465]: 2024-10-08 19:49:36.246 [INFO][5333] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:49:36.262306 containerd[1465]: 2024-10-08 19:49:36.256 [WARNING][5333] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" HandleID="k8s-pod-network.b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0" Oct 8 19:49:36.262306 containerd[1465]: 2024-10-08 19:49:36.256 [INFO][5333] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" HandleID="k8s-pod-network.b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0" Oct 8 19:49:36.262306 containerd[1465]: 2024-10-08 19:49:36.258 [INFO][5333] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:36.262306 containerd[1465]: 2024-10-08 19:49:36.260 [INFO][5327] k8s.go 621: Teardown processing complete. 
ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Oct 8 19:49:36.263199 containerd[1465]: time="2024-10-08T19:49:36.262849530Z" level=info msg="TearDown network for sandbox \"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\" successfully" Oct 8 19:49:36.263199 containerd[1465]: time="2024-10-08T19:49:36.262880571Z" level=info msg="StopPodSandbox for \"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\" returns successfully" Oct 8 19:49:36.263949 containerd[1465]: time="2024-10-08T19:49:36.263571736Z" level=info msg="RemovePodSandbox for \"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\"" Oct 8 19:49:36.263949 containerd[1465]: time="2024-10-08T19:49:36.263602496Z" level=info msg="Forcibly stopping sandbox \"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\"" Oct 8 19:49:36.368569 containerd[1465]: 2024-10-08 19:49:36.328 [WARNING][5351] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"ae763748-7e40-4f96-99c0-7ae5c334868c", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-2-870ec424ae", ContainerID:"e6657d1817e1df27238e833b36053a58e52c858c20c05b259b6a0b00f00e8c8e", Pod:"coredns-6f6b679f8f-46fsl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9e5102db5b4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:36.368569 containerd[1465]: 2024-10-08 19:49:36.328 [INFO][5351] k8s.go 
608: Cleaning up netns ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Oct 8 19:49:36.368569 containerd[1465]: 2024-10-08 19:49:36.328 [INFO][5351] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" iface="eth0" netns="" Oct 8 19:49:36.368569 containerd[1465]: 2024-10-08 19:49:36.328 [INFO][5351] k8s.go 615: Releasing IP address(es) ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Oct 8 19:49:36.368569 containerd[1465]: 2024-10-08 19:49:36.328 [INFO][5351] utils.go 188: Calico CNI releasing IP address ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Oct 8 19:49:36.368569 containerd[1465]: 2024-10-08 19:49:36.352 [INFO][5357] ipam_plugin.go 417: Releasing address using handleID ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" HandleID="k8s-pod-network.b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0" Oct 8 19:49:36.368569 containerd[1465]: 2024-10-08 19:49:36.352 [INFO][5357] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:36.368569 containerd[1465]: 2024-10-08 19:49:36.352 [INFO][5357] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:49:36.368569 containerd[1465]: 2024-10-08 19:49:36.362 [WARNING][5357] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" HandleID="k8s-pod-network.b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0" Oct 8 19:49:36.368569 containerd[1465]: 2024-10-08 19:49:36.362 [INFO][5357] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" HandleID="k8s-pod-network.b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Workload="ci--4081--1--0--2--870ec424ae-k8s-coredns--6f6b679f8f--46fsl-eth0" Oct 8 19:49:36.368569 containerd[1465]: 2024-10-08 19:49:36.364 [INFO][5357] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:36.368569 containerd[1465]: 2024-10-08 19:49:36.366 [INFO][5351] k8s.go 621: Teardown processing complete. ContainerID="b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182" Oct 8 19:49:36.369283 containerd[1465]: time="2024-10-08T19:49:36.368680895Z" level=info msg="TearDown network for sandbox \"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\" successfully" Oct 8 19:49:36.372662 containerd[1465]: time="2024-10-08T19:49:36.372361324Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:49:36.372662 containerd[1465]: time="2024-10-08T19:49:36.372504365Z" level=info msg="RemovePodSandbox \"b98697c309aae104e60421aa0f6d1ffcd1f235cf2f73959452cb1a2855659182\" returns successfully" Oct 8 19:49:36.373789 containerd[1465]: time="2024-10-08T19:49:36.373557694Z" level=info msg="StopPodSandbox for \"da0fcb0d809d4601919e8dcbf5a16132f36323f448be2e41ca9467b442f68da1\"" Oct 8 19:49:36.374156 containerd[1465]: time="2024-10-08T19:49:36.373992857Z" level=info msg="TearDown network for sandbox \"da0fcb0d809d4601919e8dcbf5a16132f36323f448be2e41ca9467b442f68da1\" successfully" Oct 8 19:49:36.374156 containerd[1465]: time="2024-10-08T19:49:36.374030657Z" level=info msg="StopPodSandbox for \"da0fcb0d809d4601919e8dcbf5a16132f36323f448be2e41ca9467b442f68da1\" returns successfully" Oct 8 19:49:36.374789 containerd[1465]: time="2024-10-08T19:49:36.374753663Z" level=info msg="RemovePodSandbox for \"da0fcb0d809d4601919e8dcbf5a16132f36323f448be2e41ca9467b442f68da1\"" Oct 8 19:49:36.374878 containerd[1465]: time="2024-10-08T19:49:36.374796783Z" level=info msg="Forcibly stopping sandbox \"da0fcb0d809d4601919e8dcbf5a16132f36323f448be2e41ca9467b442f68da1\"" Oct 8 19:49:36.374878 containerd[1465]: time="2024-10-08T19:49:36.374865344Z" level=info msg="TearDown network for sandbox \"da0fcb0d809d4601919e8dcbf5a16132f36323f448be2e41ca9467b442f68da1\" successfully" Oct 8 19:49:36.378126 containerd[1465]: time="2024-10-08T19:49:36.378090090Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da0fcb0d809d4601919e8dcbf5a16132f36323f448be2e41ca9467b442f68da1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:49:36.378225 containerd[1465]: time="2024-10-08T19:49:36.378154290Z" level=info msg="RemovePodSandbox \"da0fcb0d809d4601919e8dcbf5a16132f36323f448be2e41ca9467b442f68da1\" returns successfully" Oct 8 19:49:39.191539 kubelet[2668]: I1008 19:49:39.190881 2668 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:49:41.531815 systemd[1]: run-containerd-runc-k8s.io-186efefc9ecefda2bd4c3a22c70fcb36148957010d80edfe4d55c42e80e2814f-runc.C4NOit.mount: Deactivated successfully. Oct 8 19:49:59.914853 kubelet[2668]: I1008 19:49:59.914458 2668 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:53:27.630808 systemd[1]: Started sshd@7-188.245.175.188:22-139.178.89.65:56262.service - OpenSSH per-connection server daemon (139.178.89.65:56262). Oct 8 19:53:28.628334 sshd[5951]: Accepted publickey for core from 139.178.89.65 port 56262 ssh2: RSA SHA256:FcMQ9ewYvQVD+MdYYKqDZrZLLKJM+ArOzyf29ubPns4 Oct 8 19:53:28.630937 sshd[5951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:28.638007 systemd-logind[1445]: New session 8 of user core. Oct 8 19:53:28.643638 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 19:53:29.429007 sshd[5951]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:29.433443 systemd[1]: sshd@7-188.245.175.188:22-139.178.89.65:56262.service: Deactivated successfully. Oct 8 19:53:29.436885 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 19:53:29.439721 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. Oct 8 19:53:29.441511 systemd-logind[1445]: Removed session 8. Oct 8 19:53:34.600240 systemd[1]: Started sshd@8-188.245.175.188:22-139.178.89.65:56274.service - OpenSSH per-connection server daemon (139.178.89.65:56274). 
Oct 8 19:53:35.564764 sshd[5998]: Accepted publickey for core from 139.178.89.65 port 56274 ssh2: RSA SHA256:FcMQ9ewYvQVD+MdYYKqDZrZLLKJM+ArOzyf29ubPns4 Oct 8 19:53:35.567809 sshd[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:35.577169 systemd-logind[1445]: New session 9 of user core. Oct 8 19:53:35.583722 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 19:53:36.306986 sshd[5998]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:36.311940 systemd[1]: sshd@8-188.245.175.188:22-139.178.89.65:56274.service: Deactivated successfully. Oct 8 19:53:36.316713 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 19:53:36.319892 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. Oct 8 19:53:36.321804 systemd-logind[1445]: Removed session 9. Oct 8 19:53:41.492251 systemd[1]: Started sshd@9-188.245.175.188:22-139.178.89.65:55830.service - OpenSSH per-connection server daemon (139.178.89.65:55830). Oct 8 19:53:42.492497 sshd[6023]: Accepted publickey for core from 139.178.89.65 port 55830 ssh2: RSA SHA256:FcMQ9ewYvQVD+MdYYKqDZrZLLKJM+ArOzyf29ubPns4 Oct 8 19:53:42.494664 sshd[6023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:42.500910 systemd-logind[1445]: New session 10 of user core. Oct 8 19:53:42.506737 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 8 19:53:43.276769 sshd[6023]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:43.280937 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit. Oct 8 19:53:43.282507 systemd[1]: sshd@9-188.245.175.188:22-139.178.89.65:55830.service: Deactivated successfully. Oct 8 19:53:43.285526 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 19:53:43.287484 systemd-logind[1445]: Removed session 10. 
Oct 8 19:53:43.458869 systemd[1]: Started sshd@10-188.245.175.188:22-139.178.89.65:55844.service - OpenSSH per-connection server daemon (139.178.89.65:55844). Oct 8 19:53:44.441485 sshd[6057]: Accepted publickey for core from 139.178.89.65 port 55844 ssh2: RSA SHA256:FcMQ9ewYvQVD+MdYYKqDZrZLLKJM+ArOzyf29ubPns4 Oct 8 19:53:44.443615 sshd[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:44.448680 systemd-logind[1445]: New session 11 of user core. Oct 8 19:53:44.464818 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 19:53:45.251713 sshd[6057]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:45.257554 systemd[1]: sshd@10-188.245.175.188:22-139.178.89.65:55844.service: Deactivated successfully. Oct 8 19:53:45.261472 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 19:53:45.263170 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. Oct 8 19:53:45.264095 systemd-logind[1445]: Removed session 11. Oct 8 19:53:45.442059 systemd[1]: Started sshd@11-188.245.175.188:22-139.178.89.65:51028.service - OpenSSH per-connection server daemon (139.178.89.65:51028). Oct 8 19:53:46.441251 sshd[6068]: Accepted publickey for core from 139.178.89.65 port 51028 ssh2: RSA SHA256:FcMQ9ewYvQVD+MdYYKqDZrZLLKJM+ArOzyf29ubPns4 Oct 8 19:53:46.443367 sshd[6068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:46.450745 systemd-logind[1445]: New session 12 of user core. Oct 8 19:53:46.456071 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 8 19:53:47.227865 sshd[6068]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:47.234653 systemd[1]: sshd@11-188.245.175.188:22-139.178.89.65:51028.service: Deactivated successfully. Oct 8 19:53:47.240867 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 19:53:47.244594 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. 
Oct 8 19:53:47.247079 systemd-logind[1445]: Removed session 12. Oct 8 19:53:52.397815 systemd[1]: Started sshd@12-188.245.175.188:22-139.178.89.65:51042.service - OpenSSH per-connection server daemon (139.178.89.65:51042). Oct 8 19:53:53.349522 sshd[6119]: Accepted publickey for core from 139.178.89.65 port 51042 ssh2: RSA SHA256:FcMQ9ewYvQVD+MdYYKqDZrZLLKJM+ArOzyf29ubPns4 Oct 8 19:53:53.351881 sshd[6119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:53.359740 systemd-logind[1445]: New session 13 of user core. Oct 8 19:53:53.365694 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 8 19:53:54.088910 sshd[6119]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:54.095877 systemd[1]: sshd@12-188.245.175.188:22-139.178.89.65:51042.service: Deactivated successfully. Oct 8 19:53:54.099331 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 19:53:54.100632 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. Oct 8 19:53:54.101680 systemd-logind[1445]: Removed session 13. Oct 8 19:53:54.259874 systemd[1]: Started sshd@13-188.245.175.188:22-139.178.89.65:51058.service - OpenSSH per-connection server daemon (139.178.89.65:51058). Oct 8 19:53:55.214165 sshd[6132]: Accepted publickey for core from 139.178.89.65 port 51058 ssh2: RSA SHA256:FcMQ9ewYvQVD+MdYYKqDZrZLLKJM+ArOzyf29ubPns4 Oct 8 19:53:55.216240 sshd[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:55.222176 systemd-logind[1445]: New session 14 of user core. Oct 8 19:53:55.227730 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 8 19:53:56.210069 sshd[6132]: pam_unix(sshd:session): session closed for user core Oct 8 19:53:56.215644 systemd[1]: sshd@13-188.245.175.188:22-139.178.89.65:51058.service: Deactivated successfully. Oct 8 19:53:56.219220 systemd[1]: session-14.scope: Deactivated successfully. 
Oct 8 19:53:56.220635 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. Oct 8 19:53:56.222117 systemd-logind[1445]: Removed session 14. Oct 8 19:53:56.392104 systemd[1]: Started sshd@14-188.245.175.188:22-139.178.89.65:48094.service - OpenSSH per-connection server daemon (139.178.89.65:48094). Oct 8 19:53:57.398043 sshd[6144]: Accepted publickey for core from 139.178.89.65 port 48094 ssh2: RSA SHA256:FcMQ9ewYvQVD+MdYYKqDZrZLLKJM+ArOzyf29ubPns4 Oct 8 19:53:57.399605 sshd[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:53:57.404601 systemd-logind[1445]: New session 15 of user core. Oct 8 19:53:57.410725 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 8 19:54:00.074541 sshd[6144]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:00.079004 systemd[1]: sshd@14-188.245.175.188:22-139.178.89.65:48094.service: Deactivated successfully. Oct 8 19:54:00.081352 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 19:54:00.084686 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit. Oct 8 19:54:00.086331 systemd-logind[1445]: Removed session 15. Oct 8 19:54:00.253883 systemd[1]: Started sshd@15-188.245.175.188:22-139.178.89.65:48096.service - OpenSSH per-connection server daemon (139.178.89.65:48096). Oct 8 19:54:01.245846 sshd[6167]: Accepted publickey for core from 139.178.89.65 port 48096 ssh2: RSA SHA256:FcMQ9ewYvQVD+MdYYKqDZrZLLKJM+ArOzyf29ubPns4 Oct 8 19:54:01.249799 sshd[6167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:54:01.256336 systemd-logind[1445]: New session 16 of user core. Oct 8 19:54:01.264896 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 8 19:54:02.159836 sshd[6167]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:02.167386 systemd[1]: sshd@15-188.245.175.188:22-139.178.89.65:48096.service: Deactivated successfully. 
Oct 8 19:54:02.170955 systemd[1]: session-16.scope: Deactivated successfully. Oct 8 19:54:02.171963 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. Oct 8 19:54:02.174032 systemd-logind[1445]: Removed session 16. Oct 8 19:54:02.337836 systemd[1]: Started sshd@16-188.245.175.188:22-139.178.89.65:48098.service - OpenSSH per-connection server daemon (139.178.89.65:48098). Oct 8 19:54:03.332194 sshd[6198]: Accepted publickey for core from 139.178.89.65 port 48098 ssh2: RSA SHA256:FcMQ9ewYvQVD+MdYYKqDZrZLLKJM+ArOzyf29ubPns4 Oct 8 19:54:03.334472 sshd[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:54:03.340881 systemd-logind[1445]: New session 17 of user core. Oct 8 19:54:03.344685 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 8 19:54:04.102705 sshd[6198]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:04.106793 systemd[1]: sshd@16-188.245.175.188:22-139.178.89.65:48098.service: Deactivated successfully. Oct 8 19:54:04.109894 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 19:54:04.113326 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit. Oct 8 19:54:04.116063 systemd-logind[1445]: Removed session 17. Oct 8 19:54:09.284036 systemd[1]: Started sshd@17-188.245.175.188:22-139.178.89.65:41968.service - OpenSSH per-connection server daemon (139.178.89.65:41968). Oct 8 19:54:10.279652 sshd[6218]: Accepted publickey for core from 139.178.89.65 port 41968 ssh2: RSA SHA256:FcMQ9ewYvQVD+MdYYKqDZrZLLKJM+ArOzyf29ubPns4 Oct 8 19:54:10.281823 sshd[6218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:54:10.288350 systemd-logind[1445]: New session 18 of user core. Oct 8 19:54:10.292697 systemd[1]: Started session-18.scope - Session 18 of User core. 
Oct 8 19:54:11.041850 sshd[6218]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:11.047050 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit. Oct 8 19:54:11.048047 systemd[1]: sshd@17-188.245.175.188:22-139.178.89.65:41968.service: Deactivated successfully. Oct 8 19:54:11.051083 systemd[1]: session-18.scope: Deactivated successfully. Oct 8 19:54:11.053485 systemd-logind[1445]: Removed session 18. Oct 8 19:54:16.210806 systemd[1]: Started sshd@18-188.245.175.188:22-139.178.89.65:41608.service - OpenSSH per-connection server daemon (139.178.89.65:41608). Oct 8 19:54:17.178207 sshd[6233]: Accepted publickey for core from 139.178.89.65 port 41608 ssh2: RSA SHA256:FcMQ9ewYvQVD+MdYYKqDZrZLLKJM+ArOzyf29ubPns4 Oct 8 19:54:17.180613 sshd[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:54:17.186499 systemd-logind[1445]: New session 19 of user core. Oct 8 19:54:17.194708 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 8 19:54:17.919790 sshd[6233]: pam_unix(sshd:session): session closed for user core Oct 8 19:54:17.925385 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit. Oct 8 19:54:17.925729 systemd[1]: sshd@18-188.245.175.188:22-139.178.89.65:41608.service: Deactivated successfully. Oct 8 19:54:17.928120 systemd[1]: session-19.scope: Deactivated successfully. Oct 8 19:54:17.930979 systemd-logind[1445]: Removed session 19. Oct 8 19:54:32.926781 systemd[1]: cri-containerd-1a02bb5fca37b2ae4730abdbc259ed3997e9234932996a0451f94d2bdc75aa73.scope: Deactivated successfully. Oct 8 19:54:32.927064 systemd[1]: cri-containerd-1a02bb5fca37b2ae4730abdbc259ed3997e9234932996a0451f94d2bdc75aa73.scope: Consumed 8.643s CPU time. 
Oct 8 19:54:32.952119 containerd[1465]: time="2024-10-08T19:54:32.951829566Z" level=info msg="shim disconnected" id=1a02bb5fca37b2ae4730abdbc259ed3997e9234932996a0451f94d2bdc75aa73 namespace=k8s.io Oct 8 19:54:32.952119 containerd[1465]: time="2024-10-08T19:54:32.951903367Z" level=warning msg="cleaning up after shim disconnected" id=1a02bb5fca37b2ae4730abdbc259ed3997e9234932996a0451f94d2bdc75aa73 namespace=k8s.io Oct 8 19:54:32.952119 containerd[1465]: time="2024-10-08T19:54:32.951916607Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:54:32.955270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a02bb5fca37b2ae4730abdbc259ed3997e9234932996a0451f94d2bdc75aa73-rootfs.mount: Deactivated successfully. Oct 8 19:54:33.079859 kubelet[2668]: E1008 19:54:33.079758 2668 controller.go:195] "Failed to update lease" err="Put \"https://188.245.175.188:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-2-870ec424ae?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 8 19:54:33.352317 kubelet[2668]: E1008 19:54:33.352148 2668 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:45536->10.0.0.2:2379: read: connection timed out" Oct 8 19:54:33.668348 systemd[1]: cri-containerd-c9c36c39e03b34bff84b0909e2e25aa58dd5b58f51abcfbe3c2afb04c68c8d76.scope: Deactivated successfully. Oct 8 19:54:33.668734 systemd[1]: cri-containerd-c9c36c39e03b34bff84b0909e2e25aa58dd5b58f51abcfbe3c2afb04c68c8d76.scope: Consumed 5.817s CPU time, 20.1M memory peak, 0B memory swap peak. Oct 8 19:54:33.695001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9c36c39e03b34bff84b0909e2e25aa58dd5b58f51abcfbe3c2afb04c68c8d76-rootfs.mount: Deactivated successfully. 
Oct 8 19:54:33.696452 containerd[1465]: time="2024-10-08T19:54:33.694547262Z" level=info msg="shim disconnected" id=c9c36c39e03b34bff84b0909e2e25aa58dd5b58f51abcfbe3c2afb04c68c8d76 namespace=k8s.io
Oct 8 19:54:33.696452 containerd[1465]: time="2024-10-08T19:54:33.696412636Z" level=warning msg="cleaning up after shim disconnected" id=c9c36c39e03b34bff84b0909e2e25aa58dd5b58f51abcfbe3c2afb04c68c8d76 namespace=k8s.io
Oct 8 19:54:33.696452 containerd[1465]: time="2024-10-08T19:54:33.696445516Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:54:33.707619 containerd[1465]: time="2024-10-08T19:54:33.707558602Z" level=warning msg="cleanup warnings time=\"2024-10-08T19:54:33Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Oct 8 19:54:33.726351 kubelet[2668]: I1008 19:54:33.726256 2668 scope.go:117] "RemoveContainer" containerID="c9c36c39e03b34bff84b0909e2e25aa58dd5b58f51abcfbe3c2afb04c68c8d76"
Oct 8 19:54:33.731493 containerd[1465]: time="2024-10-08T19:54:33.731364746Z" level=info msg="CreateContainer within sandbox \"fb07da4510f63f3c5bef77d3cce72294f0dcdc4ecfc1dea80bf4f1ba02bea8b3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Oct 8 19:54:33.732763 kubelet[2668]: I1008 19:54:33.732402 2668 scope.go:117] "RemoveContainer" containerID="1a02bb5fca37b2ae4730abdbc259ed3997e9234932996a0451f94d2bdc75aa73"
Oct 8 19:54:33.737344 containerd[1465]: time="2024-10-08T19:54:33.737300032Z" level=info msg="CreateContainer within sandbox \"d5858d2da00f297d7dad5135b91cdb2f0cfcf8fc5c24b7f075f3c36d215fe45e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Oct 8 19:54:33.754715 containerd[1465]: time="2024-10-08T19:54:33.754655806Z" level=info msg="CreateContainer within sandbox \"fb07da4510f63f3c5bef77d3cce72294f0dcdc4ecfc1dea80bf4f1ba02bea8b3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"342548c304bb661eaa24aef5b61eed4b705c5e005c9ad60099b49c7f8135d6cb\""
Oct 8 19:54:33.755230 containerd[1465]: time="2024-10-08T19:54:33.755201410Z" level=info msg="StartContainer for \"342548c304bb661eaa24aef5b61eed4b705c5e005c9ad60099b49c7f8135d6cb\""
Oct 8 19:54:33.755943 containerd[1465]: time="2024-10-08T19:54:33.755837735Z" level=info msg="CreateContainer within sandbox \"d5858d2da00f297d7dad5135b91cdb2f0cfcf8fc5c24b7f075f3c36d215fe45e\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"32e303ebd20dc32b5388b43a39b4dde923d37a666ba4dc0625cf128b0af84b68\""
Oct 8 19:54:33.756713 containerd[1465]: time="2024-10-08T19:54:33.756678301Z" level=info msg="StartContainer for \"32e303ebd20dc32b5388b43a39b4dde923d37a666ba4dc0625cf128b0af84b68\""
Oct 8 19:54:33.782652 systemd[1]: Started cri-containerd-32e303ebd20dc32b5388b43a39b4dde923d37a666ba4dc0625cf128b0af84b68.scope - libcontainer container 32e303ebd20dc32b5388b43a39b4dde923d37a666ba4dc0625cf128b0af84b68.
Oct 8 19:54:33.803655 systemd[1]: Started cri-containerd-342548c304bb661eaa24aef5b61eed4b705c5e005c9ad60099b49c7f8135d6cb.scope - libcontainer container 342548c304bb661eaa24aef5b61eed4b705c5e005c9ad60099b49c7f8135d6cb.
Oct 8 19:54:33.831693 containerd[1465]: time="2024-10-08T19:54:33.831640520Z" level=info msg="StartContainer for \"32e303ebd20dc32b5388b43a39b4dde923d37a666ba4dc0625cf128b0af84b68\" returns successfully"
Oct 8 19:54:33.855089 containerd[1465]: time="2024-10-08T19:54:33.855044461Z" level=info msg="StartContainer for \"342548c304bb661eaa24aef5b61eed4b705c5e005c9ad60099b49c7f8135d6cb\" returns successfully"
Oct 8 19:54:37.456458 kubelet[2668]: E1008 19:54:37.446683 2668 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:45354->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-1-0-2-870ec424ae.17fc9258c6f2fa5b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-1-0-2-870ec424ae,UID:c27f2a069e417c76bd5c270eab222d6e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-2-870ec424ae,},FirstTimestamp:2024-10-08 19:54:26.996017755 +0000 UTC m=+351.600672819,LastTimestamp:2024-10-08 19:54:26.996017755 +0000 UTC m=+351.600672819,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-2-870ec424ae,}"
Oct 8 19:54:38.951343 systemd[1]: cri-containerd-45fdf572a4f3c84f8584a7c5193920ca01a6b26b8da5a0ecce9c6dc77d98a217.scope: Deactivated successfully.
Oct 8 19:54:38.952564 systemd[1]: cri-containerd-45fdf572a4f3c84f8584a7c5193920ca01a6b26b8da5a0ecce9c6dc77d98a217.scope: Consumed 2.500s CPU time, 16.1M memory peak, 0B memory swap peak.
Oct 8 19:54:38.988909 containerd[1465]: time="2024-10-08T19:54:38.988685909Z" level=info msg="shim disconnected" id=45fdf572a4f3c84f8584a7c5193920ca01a6b26b8da5a0ecce9c6dc77d98a217 namespace=k8s.io
Oct 8 19:54:38.988909 containerd[1465]: time="2024-10-08T19:54:38.988744150Z" level=warning msg="cleaning up after shim disconnected" id=45fdf572a4f3c84f8584a7c5193920ca01a6b26b8da5a0ecce9c6dc77d98a217 namespace=k8s.io
Oct 8 19:54:38.988909 containerd[1465]: time="2024-10-08T19:54:38.988752310Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:54:38.993390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45fdf572a4f3c84f8584a7c5193920ca01a6b26b8da5a0ecce9c6dc77d98a217-rootfs.mount: Deactivated successfully.
Oct 8 19:54:38.997322 systemd[1]: cri-containerd-32e303ebd20dc32b5388b43a39b4dde923d37a666ba4dc0625cf128b0af84b68.scope: Deactivated successfully.
Oct 8 19:54:39.022209 containerd[1465]: time="2024-10-08T19:54:39.022149817Z" level=info msg="shim disconnected" id=32e303ebd20dc32b5388b43a39b4dde923d37a666ba4dc0625cf128b0af84b68 namespace=k8s.io
Oct 8 19:54:39.023478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32e303ebd20dc32b5388b43a39b4dde923d37a666ba4dc0625cf128b0af84b68-rootfs.mount: Deactivated successfully.
Oct 8 19:54:39.024863 containerd[1465]: time="2024-10-08T19:54:39.024650498Z" level=warning msg="cleaning up after shim disconnected" id=32e303ebd20dc32b5388b43a39b4dde923d37a666ba4dc0625cf128b0af84b68 namespace=k8s.io
Oct 8 19:54:39.024863 containerd[1465]: time="2024-10-08T19:54:39.024698899Z" level=info msg="cleaning up dead shim" namespace=k8s.io