Oct 8 19:40:30.952822 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 8 19:40:30.952849 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Tue Oct 8 18:22:02 -00 2024
Oct 8 19:40:30.952860 kernel: KASLR enabled
Oct 8 19:40:30.952866 kernel: efi: EFI v2.7 by EDK II
Oct 8 19:40:30.952872 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x1347a1018 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x13232ed18
Oct 8 19:40:30.952879 kernel: random: crng init done
Oct 8 19:40:30.952886 kernel: ACPI: Early table checksum verification disabled
Oct 8 19:40:30.952893 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS )
Oct 8 19:40:30.952899 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Oct 8 19:40:30.952906 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:40:30.952943 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:40:30.952951 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:40:30.952957 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:40:30.952964 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:40:30.952972 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:40:30.952981 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:40:30.952988 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:40:30.952995 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:40:30.953002 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 8 19:40:30.953009 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Oct 8 19:40:30.953017 kernel: NUMA: Failed to initialise from firmware
Oct 8 19:40:30.953023 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Oct 8 19:40:30.953030 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff]
Oct 8 19:40:30.953037 kernel: Zone ranges:
Oct 8 19:40:30.953044 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Oct 8 19:40:30.953051 kernel: DMA32 empty
Oct 8 19:40:30.953060 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Oct 8 19:40:30.953067 kernel: Movable zone start for each node
Oct 8 19:40:30.953073 kernel: Early memory node ranges
Oct 8 19:40:30.953080 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff]
Oct 8 19:40:30.953087 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff]
Oct 8 19:40:30.953094 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff]
Oct 8 19:40:30.953101 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff]
Oct 8 19:40:30.953108 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff]
Oct 8 19:40:30.953115 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Oct 8 19:40:30.953122 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Oct 8 19:40:30.953129 kernel: psci: probing for conduit method from ACPI.
Oct 8 19:40:30.953137 kernel: psci: PSCIv1.1 detected in firmware.
Oct 8 19:40:30.953144 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 8 19:40:30.953151 kernel: psci: Trusted OS migration not required
Oct 8 19:40:30.953161 kernel: psci: SMC Calling Convention v1.1
Oct 8 19:40:30.953169 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 8 19:40:30.953176 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Oct 8 19:40:30.953185 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Oct 8 19:40:30.953193 kernel: pcpu-alloc: [0] 0 [0] 1
Oct 8 19:40:30.953201 kernel: Detected PIPT I-cache on CPU0
Oct 8 19:40:30.953208 kernel: CPU features: detected: GIC system register CPU interface
Oct 8 19:40:30.953215 kernel: CPU features: detected: Hardware dirty bit management
Oct 8 19:40:30.953222 kernel: CPU features: detected: Spectre-v4
Oct 8 19:40:30.953230 kernel: CPU features: detected: Spectre-BHB
Oct 8 19:40:30.953237 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 8 19:40:30.953244 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 8 19:40:30.953252 kernel: CPU features: detected: ARM erratum 1418040
Oct 8 19:40:30.953259 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 8 19:40:30.953268 kernel: alternatives: applying boot alternatives
Oct 8 19:40:30.953277 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2
Oct 8 19:40:30.953284 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 19:40:30.953292 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 8 19:40:30.953299 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 19:40:30.953307 kernel: Fallback order for Node 0: 0
Oct 8 19:40:30.953314 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Oct 8 19:40:30.953321 kernel: Policy zone: Normal
Oct 8 19:40:30.953329 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 19:40:30.953336 kernel: software IO TLB: area num 2.
Oct 8 19:40:30.953343 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Oct 8 19:40:30.953353 kernel: Memory: 3881848K/4096000K available (10240K kernel code, 2184K rwdata, 8080K rodata, 39104K init, 897K bss, 214152K reserved, 0K cma-reserved)
Oct 8 19:40:30.953360 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 8 19:40:30.953368 kernel: trace event string verifier disabled
Oct 8 19:40:30.953375 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 19:40:30.953383 kernel: rcu: RCU event tracing is enabled.
Oct 8 19:40:30.953391 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 8 19:40:30.953398 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 19:40:30.953406 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 19:40:30.953413 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 19:40:30.953421 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 8 19:40:30.953428 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 8 19:40:30.953437 kernel: GICv3: 256 SPIs implemented
Oct 8 19:40:30.953444 kernel: GICv3: 0 Extended SPIs implemented
Oct 8 19:40:30.953452 kernel: Root IRQ handler: gic_handle_irq
Oct 8 19:40:30.953459 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 8 19:40:30.953466 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 8 19:40:30.953477 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 8 19:40:30.953485 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Oct 8 19:40:30.953492 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Oct 8 19:40:30.953500 kernel: GICv3: using LPI property table @0x00000001000e0000
Oct 8 19:40:30.953507 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Oct 8 19:40:30.953515 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 19:40:30.953524 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:40:30.953531 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 8 19:40:30.953539 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 8 19:40:30.953546 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 8 19:40:30.953554 kernel: Console: colour dummy device 80x25
Oct 8 19:40:30.953562 kernel: ACPI: Core revision 20230628
Oct 8 19:40:30.953570 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 8 19:40:30.953577 kernel: pid_max: default: 32768 minimum: 301
Oct 8 19:40:30.953585 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Oct 8 19:40:30.953592 kernel: SELinux: Initializing.
Oct 8 19:40:30.953602 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:40:30.953614 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:40:30.953624 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:40:30.953636 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:40:30.953646 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 19:40:30.953655 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 19:40:30.953662 kernel: Platform MSI: ITS@0x8080000 domain created
Oct 8 19:40:30.953670 kernel: PCI/MSI: ITS@0x8080000 domain created
Oct 8 19:40:30.953678 kernel: Remapping and enabling EFI services.
Oct 8 19:40:30.953687 kernel: smp: Bringing up secondary CPUs ...
Oct 8 19:40:30.953695 kernel: Detected PIPT I-cache on CPU1
Oct 8 19:40:30.953702 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 8 19:40:30.953710 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Oct 8 19:40:30.953718 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:40:30.953726 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 8 19:40:30.953734 kernel: smp: Brought up 1 node, 2 CPUs
Oct 8 19:40:30.953742 kernel: SMP: Total of 2 processors activated.
Oct 8 19:40:30.953754 kernel: CPU features: detected: 32-bit EL0 Support
Oct 8 19:40:30.953762 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 8 19:40:30.953771 kernel: CPU features: detected: Common not Private translations
Oct 8 19:40:30.953780 kernel: CPU features: detected: CRC32 instructions
Oct 8 19:40:30.953793 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 8 19:40:30.953803 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 8 19:40:30.953811 kernel: CPU features: detected: LSE atomic instructions
Oct 8 19:40:30.953819 kernel: CPU features: detected: Privileged Access Never
Oct 8 19:40:30.953828 kernel: CPU features: detected: RAS Extension Support
Oct 8 19:40:30.953836 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 8 19:40:30.953844 kernel: CPU: All CPU(s) started at EL1
Oct 8 19:40:30.953854 kernel: alternatives: applying system-wide alternatives
Oct 8 19:40:30.953863 kernel: devtmpfs: initialized
Oct 8 19:40:30.953871 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 19:40:30.953879 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 8 19:40:30.953888 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 19:40:30.953896 kernel: SMBIOS 3.0.0 present.
Oct 8 19:40:30.953904 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Oct 8 19:40:30.953927 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 19:40:30.953936 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 8 19:40:30.953944 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 8 19:40:30.953952 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 8 19:40:30.953960 kernel: audit: initializing netlink subsys (disabled)
Oct 8 19:40:30.953968 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1
Oct 8 19:40:30.953976 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 19:40:30.953984 kernel: cpuidle: using governor menu
Oct 8 19:40:30.953992 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 8 19:40:30.954002 kernel: ASID allocator initialised with 32768 entries
Oct 8 19:40:30.954010 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 19:40:30.954018 kernel: Serial: AMBA PL011 UART driver
Oct 8 19:40:30.954026 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 8 19:40:30.954034 kernel: Modules: 0 pages in range for non-PLT usage
Oct 8 19:40:30.954042 kernel: Modules: 509104 pages in range for PLT usage
Oct 8 19:40:30.954050 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 19:40:30.954058 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 19:40:30.954066 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 8 19:40:30.954076 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 8 19:40:30.954084 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 19:40:30.954093 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 19:40:30.954101 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 8 19:40:30.954109 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 8 19:40:30.954117 kernel: ACPI: Added _OSI(Module Device)
Oct 8 19:40:30.954125 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 19:40:30.954133 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 19:40:30.954141 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 19:40:30.954150 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 19:40:30.954158 kernel: ACPI: Interpreter enabled
Oct 8 19:40:30.954168 kernel: ACPI: Using GIC for interrupt routing
Oct 8 19:40:30.954176 kernel: ACPI: MCFG table detected, 1 entries
Oct 8 19:40:30.954185 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 8 19:40:30.954193 kernel: printk: console [ttyAMA0] enabled
Oct 8 19:40:30.954201 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 8 19:40:30.954369 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 19:40:30.954453 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 8 19:40:30.954525 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 8 19:40:30.954599 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 8 19:40:30.954689 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 8 19:40:30.954700 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 8 19:40:30.954708 kernel: PCI host bridge to bus 0000:00
Oct 8 19:40:30.954787 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 8 19:40:30.954853 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 8 19:40:30.954956 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 8 19:40:30.955027 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 8 19:40:30.955116 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Oct 8 19:40:30.955201 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Oct 8 19:40:30.955274 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Oct 8 19:40:30.955346 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Oct 8 19:40:30.955440 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Oct 8 19:40:30.955517 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Oct 8 19:40:30.955598 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Oct 8 19:40:30.955671 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Oct 8 19:40:30.955753 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Oct 8 19:40:30.955825 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Oct 8 19:40:30.955906 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Oct 8 19:40:30.955991 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Oct 8 19:40:30.956082 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Oct 8 19:40:30.956157 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Oct 8 19:40:30.956235 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Oct 8 19:40:30.956307 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Oct 8 19:40:30.956389 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Oct 8 19:40:30.956465 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Oct 8 19:40:30.956543 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Oct 8 19:40:30.956626 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Oct 8 19:40:30.956715 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Oct 8 19:40:30.956788 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Oct 8 19:40:30.956878 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Oct 8 19:40:30.957246 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207]
Oct 8 19:40:30.957344 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Oct 8 19:40:30.957419 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Oct 8 19:40:30.957492 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 19:40:30.958047 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Oct 8 19:40:30.958145 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Oct 8 19:40:30.958248 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Oct 8 19:40:30.958364 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Oct 8 19:40:30.958443 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Oct 8 19:40:30.958519 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Oct 8 19:40:30.958604 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Oct 8 19:40:30.958684 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Oct 8 19:40:30.958768 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Oct 8 19:40:30.958947 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Oct 8 19:40:30.959051 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Oct 8 19:40:30.959128 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Oct 8 19:40:30.959203 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Oct 8 19:40:30.959286 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Oct 8 19:40:30.959366 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Oct 8 19:40:30.959441 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Oct 8 19:40:30.959515 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Oct 8 19:40:30.959597 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Oct 8 19:40:30.959672 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Oct 8 19:40:30.959745 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Oct 8 19:40:30.959821 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Oct 8 19:40:30.959896 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Oct 8 19:40:30.961137 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Oct 8 19:40:30.961233 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Oct 8 19:40:30.961306 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Oct 8 19:40:30.961377 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Oct 8 19:40:30.961453 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Oct 8 19:40:30.961525 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Oct 8 19:40:30.961596 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Oct 8 19:40:30.961679 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Oct 8 19:40:30.961751 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Oct 8 19:40:30.961824 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Oct 8 19:40:30.961901 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Oct 8 19:40:30.963091 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Oct 8 19:40:30.963180 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Oct 8 19:40:30.963262 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Oct 8 19:40:30.963342 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Oct 8 19:40:30.963415 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Oct 8 19:40:30.963505 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Oct 8 19:40:30.963578 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Oct 8 19:40:30.963650 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Oct 8 19:40:30.963729 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Oct 8 19:40:30.963803 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Oct 8 19:40:30.963876 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Oct 8 19:40:30.964078 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Oct 8 19:40:30.964156 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Oct 8 19:40:30.964229 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Oct 8 19:40:30.964303 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Oct 8 19:40:30.964379 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Oct 8 19:40:30.964450 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Oct 8 19:40:30.964531 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Oct 8 19:40:30.964604 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Oct 8 19:40:30.964708 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Oct 8 19:40:30.964782 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Oct 8 19:40:30.964853 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Oct 8 19:40:30.966052 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Oct 8 19:40:30.966169 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Oct 8 19:40:30.966252 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Oct 8 19:40:30.966326 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Oct 8 19:40:30.966399 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Oct 8 19:40:30.966472 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Oct 8 19:40:30.966543 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Oct 8 19:40:30.966619 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Oct 8 19:40:30.966690 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Oct 8 19:40:30.966767 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Oct 8 19:40:30.966837 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Oct 8 19:40:30.966975 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Oct 8 19:40:30.967063 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Oct 8 19:40:30.967137 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Oct 8 19:40:30.967212 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Oct 8 19:40:30.967284 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Oct 8 19:40:30.967354 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Oct 8 19:40:30.967446 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Oct 8 19:40:30.967517 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Oct 8 19:40:30.967587 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Oct 8 19:40:30.967656 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Oct 8 19:40:30.967727 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Oct 8 19:40:30.967797 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Oct 8 19:40:30.967867 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Oct 8 19:40:30.969032 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Oct 8 19:40:30.969129 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Oct 8 19:40:30.969201 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Oct 8 19:40:30.969277 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Oct 8 19:40:30.969356 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Oct 8 19:40:30.969430 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 19:40:30.969503 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Oct 8 19:40:30.969574 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Oct 8 19:40:30.969644 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Oct 8 19:40:30.969719 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Oct 8 19:40:30.969789 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Oct 8 19:40:30.969867 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Oct 8 19:40:30.969963 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Oct 8 19:40:30.970044 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Oct 8 19:40:30.970118 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Oct 8 19:40:30.970192 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Oct 8 19:40:30.970275 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Oct 8 19:40:30.970353 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Oct 8 19:40:30.970429 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Oct 8 19:40:30.970503 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Oct 8 19:40:30.970577 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Oct 8 19:40:30.970654 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Oct 8 19:40:30.970736 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Oct 8 19:40:30.970812 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Oct 8 19:40:30.970886 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Oct 8 19:40:30.970973 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Oct 8 19:40:30.971049 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Oct 8 19:40:30.971131 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Oct 8 19:40:30.971210 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Oct 8 19:40:30.971287 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Oct 8 19:40:30.971364 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Oct 8 19:40:30.971438 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Oct 8 19:40:30.971524 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Oct 8 19:40:30.971602 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Oct 8 19:40:30.971677 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Oct 8 19:40:30.971753 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Oct 8 19:40:30.971826 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Oct 8 19:40:30.971901 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Oct 8 19:40:30.974809 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Oct 8 19:40:30.974901 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Oct 8 19:40:30.975015 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Oct 8 19:40:30.975093 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Oct 8 19:40:30.975166 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Oct 8 19:40:30.975239 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Oct 8 19:40:30.975310 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Oct 8 19:40:30.975391 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Oct 8 19:40:30.975467 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Oct 8 19:40:30.975542 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Oct 8 19:40:30.975613 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Oct 8 19:40:30.975690 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Oct 8 19:40:30.975761 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Oct 8 19:40:30.975832 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Oct 8 19:40:30.975903 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Oct 8 19:40:30.976003 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 8 19:40:30.976067 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 8 19:40:30.976130 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 8 19:40:30.976212 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Oct 8 19:40:30.976278 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Oct 8 19:40:30.976343 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Oct 8 19:40:30.976418 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Oct 8 19:40:30.976489 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Oct 8 19:40:30.976555 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Oct 8 19:40:30.976653 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Oct 8 19:40:30.976726 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Oct 8 19:40:30.976793 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Oct 8 19:40:30.976866 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Oct 8 19:40:30.978028 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Oct 8 19:40:30.978113 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Oct 8 19:40:30.978193 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Oct 8 19:40:30.978279 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Oct 8 19:40:30.978348 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Oct 8 19:40:30.978425 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Oct 8 19:40:30.978496 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Oct 8 19:40:30.978566 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Oct 8 19:40:30.978640 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Oct 8 19:40:30.978707 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Oct 8 19:40:30.978776 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Oct 8 19:40:30.978851 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Oct 8 19:40:30.979979 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Oct 8 19:40:30.980082 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Oct 8 19:40:30.980170 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Oct 8 19:40:30.980240 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Oct 8 19:40:30.980306 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Oct 8 19:40:30.980317 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 8 19:40:30.980332 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 8 19:40:30.980341 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 8 19:40:30.980350 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 8 19:40:30.980359 kernel: iommu: Default domain type: Translated
Oct 8 19:40:30.980367 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 8 19:40:30.980376 kernel: efivars: Registered efivars operations
Oct 8 19:40:30.980385 kernel: vgaarb: loaded
Oct 8 19:40:30.980393 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 8 19:40:30.980402 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 19:40:30.980413 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 19:40:30.980421 kernel: pnp: PnP ACPI init
Oct 8 19:40:30.980502 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 8 19:40:30.980516 kernel: pnp: PnP ACPI: found 1 devices
Oct 8 19:40:30.980524 kernel: NET: Registered PF_INET protocol family
Oct 8 19:40:30.980533 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 8 19:40:30.980542 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 8 19:40:30.980551 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 19:40:30.980561 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 19:40:30.980570 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 8 19:40:30.980579 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 8 19:40:30.980588 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:40:30.980596 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:40:30.980605 kernel: 
NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 8 19:40:30.980747 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Oct 8 19:40:30.980763 kernel: PCI: CLS 0 bytes, default 64 Oct 8 19:40:30.980771 kernel: kvm [1]: HYP mode not available Oct 8 19:40:30.980784 kernel: Initialise system trusted keyrings Oct 8 19:40:30.980793 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 8 19:40:30.980802 kernel: Key type asymmetric registered Oct 8 19:40:30.980810 kernel: Asymmetric key parser 'x509' registered Oct 8 19:40:30.980819 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 8 19:40:30.980827 kernel: io scheduler mq-deadline registered Oct 8 19:40:30.980836 kernel: io scheduler kyber registered Oct 8 19:40:30.980844 kernel: io scheduler bfq registered Oct 8 19:40:30.980853 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Oct 8 19:40:30.982036 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Oct 8 19:40:30.982138 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Oct 8 19:40:30.982222 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:40:30.982319 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Oct 8 19:40:30.982395 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Oct 8 19:40:30.982468 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:40:30.982739 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Oct 8 19:40:30.982825 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Oct 8 19:40:30.982901 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:40:30.983047 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Oct 8 19:40:30.983127 kernel: pcieport 
0000:00:02.3: AER: enabled with IRQ 53 Oct 8 19:40:30.983204 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:40:30.983287 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Oct 8 19:40:30.983366 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Oct 8 19:40:30.983440 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:40:30.983518 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Oct 8 19:40:30.983593 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Oct 8 19:40:30.983671 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:40:30.983751 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Oct 8 19:40:30.983827 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Oct 8 19:40:30.983902 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:40:30.983996 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Oct 8 19:40:30.984074 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Oct 8 19:40:30.984148 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:40:30.984164 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Oct 8 19:40:30.984239 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Oct 8 19:40:30.984315 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Oct 8 19:40:30.984392 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:40:30.984405 kernel: input: Power Button as 
/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 8 19:40:30.984414 kernel: ACPI: button: Power Button [PWRB] Oct 8 19:40:30.984423 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 8 19:40:30.984504 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002) Oct 8 19:40:30.984607 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Oct 8 19:40:30.984879 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Oct 8 19:40:30.984894 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 8 19:40:30.984903 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Oct 8 19:40:30.985016 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Oct 8 19:40:30.985029 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Oct 8 19:40:30.985038 kernel: thunder_xcv, ver 1.0 Oct 8 19:40:30.985045 kernel: thunder_bgx, ver 1.0 Oct 8 19:40:30.985058 kernel: nicpf, ver 1.0 Oct 8 19:40:30.985066 kernel: nicvf, ver 1.0 Oct 8 19:40:30.985145 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 8 19:40:30.985208 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-08T19:40:30 UTC (1728416430) Oct 8 19:40:30.985218 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 8 19:40:30.985226 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Oct 8 19:40:30.985234 kernel: watchdog: Delayed init of the lockup detector failed: -19 Oct 8 19:40:30.985242 kernel: watchdog: Hard watchdog permanently disabled Oct 8 19:40:30.985252 kernel: NET: Registered PF_INET6 protocol family Oct 8 19:40:30.985260 kernel: Segment Routing with IPv6 Oct 8 19:40:30.985267 kernel: In-situ OAM (IOAM) with IPv6 Oct 8 19:40:30.985275 kernel: NET: Registered PF_PACKET protocol family Oct 8 19:40:30.985283 kernel: Key type dns_resolver registered Oct 8 19:40:30.985291 kernel: registered taskstats version 1 Oct 8 19:40:30.985299 kernel: Loading compiled-in X.509 certificates Oct 8 19:40:30.985307 kernel: Loaded 
X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: e5b54c43c129014ce5ace0e8cd7b641a0fcb136e' Oct 8 19:40:30.985314 kernel: Key type .fscrypt registered Oct 8 19:40:30.985323 kernel: Key type fscrypt-provisioning registered Oct 8 19:40:30.985331 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 8 19:40:30.985339 kernel: ima: Allocated hash algorithm: sha1 Oct 8 19:40:30.985347 kernel: ima: No architecture policies found Oct 8 19:40:30.985354 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 8 19:40:30.985362 kernel: clk: Disabling unused clocks Oct 8 19:40:30.985370 kernel: Freeing unused kernel memory: 39104K Oct 8 19:40:30.985378 kernel: Run /init as init process Oct 8 19:40:30.985386 kernel: with arguments: Oct 8 19:40:30.985396 kernel: /init Oct 8 19:40:30.985403 kernel: with environment: Oct 8 19:40:30.985412 kernel: HOME=/ Oct 8 19:40:30.985419 kernel: TERM=linux Oct 8 19:40:30.985427 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 8 19:40:30.985436 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 19:40:30.985446 systemd[1]: Detected virtualization kvm. Oct 8 19:40:30.985456 systemd[1]: Detected architecture arm64. Oct 8 19:40:30.985464 systemd[1]: Running in initrd. Oct 8 19:40:30.985472 systemd[1]: No hostname configured, using default hostname. Oct 8 19:40:30.985480 systemd[1]: Hostname set to . Oct 8 19:40:30.985489 systemd[1]: Initializing machine ID from VM UUID. Oct 8 19:40:30.985497 systemd[1]: Queued start job for default target initrd.target. Oct 8 19:40:30.985506 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Oct 8 19:40:30.985514 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 19:40:30.985524 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 8 19:40:30.985533 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 19:40:30.985544 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 8 19:40:30.985552 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 8 19:40:30.985562 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 8 19:40:30.985571 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 8 19:40:30.985579 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 19:40:30.985589 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 19:40:30.985597 systemd[1]: Reached target paths.target - Path Units. Oct 8 19:40:30.985605 systemd[1]: Reached target slices.target - Slice Units. Oct 8 19:40:30.985614 systemd[1]: Reached target swap.target - Swaps. Oct 8 19:40:30.985622 systemd[1]: Reached target timers.target - Timer Units. Oct 8 19:40:30.985630 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 19:40:30.985638 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 19:40:30.985647 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 8 19:40:30.985656 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 8 19:40:30.985666 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 19:40:30.985674 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Oct 8 19:40:30.985683 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 19:40:30.985691 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 19:40:30.985700 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 8 19:40:30.985708 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 19:40:30.985717 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 8 19:40:30.985725 systemd[1]: Starting systemd-fsck-usr.service... Oct 8 19:40:30.985735 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 19:40:30.985744 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 19:40:30.985752 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:40:30.985760 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 8 19:40:30.985788 systemd-journald[236]: Collecting audit messages is disabled. Oct 8 19:40:30.985811 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 19:40:30.985820 systemd[1]: Finished systemd-fsck-usr.service. Oct 8 19:40:30.985829 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 8 19:40:30.985837 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 8 19:40:30.985847 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 8 19:40:30.985857 systemd-journald[236]: Journal started Oct 8 19:40:30.985876 systemd-journald[236]: Runtime Journal (/run/log/journal/c572822b6a7d41d594f4b7baef672757) is 8.0M, max 76.5M, 68.5M free. Oct 8 19:40:30.952189 systemd-modules-load[237]: Inserted module 'overlay' Oct 8 19:40:30.987138 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 8 19:40:30.990021 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 19:40:30.990648 systemd-modules-load[237]: Inserted module 'br_netfilter' Oct 8 19:40:30.991183 kernel: Bridge firewalling registered Oct 8 19:40:30.992304 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 19:40:30.999139 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 19:40:31.009320 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 19:40:31.012184 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 19:40:31.016242 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Oct 8 19:40:31.031088 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 19:40:31.037205 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 19:40:31.038099 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Oct 8 19:40:31.046219 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 19:40:31.047959 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:40:31.051362 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 8 19:40:31.078863 dracut-cmdline[272]: dracut-dracut-053 Oct 8 19:40:31.079220 systemd-resolved[270]: Positive Trust Anchors: Oct 8 19:40:31.079231 systemd-resolved[270]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 19:40:31.079262 systemd-resolved[270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Oct 8 19:40:31.084170 systemd-resolved[270]: Defaulting to hostname 'linux'. Oct 8 19:40:31.085686 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 19:40:31.087078 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 19:40:31.088785 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2 Oct 8 19:40:31.178982 kernel: SCSI subsystem initialized Oct 8 19:40:31.184954 kernel: Loading iSCSI transport class v2.0-870. Oct 8 19:40:31.192966 kernel: iscsi: registered transport (tcp) Oct 8 19:40:31.206027 kernel: iscsi: registered transport (qla4xxx) Oct 8 19:40:31.206095 kernel: QLogic iSCSI HBA Driver Oct 8 19:40:31.251129 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 8 19:40:31.256107 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 8 19:40:31.274584 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. 
Duplicate IMA measurements will not be recorded in the IMA log. Oct 8 19:40:31.274654 kernel: device-mapper: uevent: version 1.0.3 Oct 8 19:40:31.274670 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 8 19:40:31.334989 kernel: raid6: neonx8 gen() 15599 MB/s Oct 8 19:40:31.351990 kernel: raid6: neonx4 gen() 15498 MB/s Oct 8 19:40:31.368959 kernel: raid6: neonx2 gen() 13133 MB/s Oct 8 19:40:31.385992 kernel: raid6: neonx1 gen() 10376 MB/s Oct 8 19:40:31.402974 kernel: raid6: int64x8 gen() 6919 MB/s Oct 8 19:40:31.419975 kernel: raid6: int64x4 gen() 7237 MB/s Oct 8 19:40:31.437299 kernel: raid6: int64x2 gen() 6046 MB/s Oct 8 19:40:31.453999 kernel: raid6: int64x1 gen() 4895 MB/s Oct 8 19:40:31.454087 kernel: raid6: using algorithm neonx8 gen() 15599 MB/s Oct 8 19:40:31.471002 kernel: raid6: .... xor() 11745 MB/s, rmw enabled Oct 8 19:40:31.471093 kernel: raid6: using neon recovery algorithm Oct 8 19:40:31.476168 kernel: xor: measuring software checksum speed Oct 8 19:40:31.476243 kernel: 8regs : 19745 MB/sec Oct 8 19:40:31.476267 kernel: 32regs : 19679 MB/sec Oct 8 19:40:31.476289 kernel: arm64_neon : 26831 MB/sec Oct 8 19:40:31.476958 kernel: xor: using function: arm64_neon (26831 MB/sec) Oct 8 19:40:31.528964 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 8 19:40:31.547959 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 8 19:40:31.554182 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 19:40:31.585350 systemd-udevd[455]: Using default interface naming scheme 'v255'. Oct 8 19:40:31.588967 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 19:40:31.599484 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Oct 8 19:40:31.617520 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Oct 8 19:40:31.651204 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 19:40:31.657141 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 19:40:31.708134 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 19:40:31.715617 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 8 19:40:31.740661 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 8 19:40:31.742754 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 19:40:31.745084 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 19:40:31.746572 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 19:40:31.753155 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 8 19:40:31.778558 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 8 19:40:31.819366 kernel: scsi host0: Virtio SCSI HBA Oct 8 19:40:31.869009 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 8 19:40:31.871941 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Oct 8 19:40:31.875334 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 19:40:31.875517 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:40:31.879520 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 19:40:31.883536 kernel: ACPI: bus type USB registered Oct 8 19:40:31.883562 kernel: usbcore: registered new interface driver usbfs Oct 8 19:40:31.880707 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 19:40:31.880875 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Oct 8 19:40:31.886968 kernel: usbcore: registered new interface driver hub Oct 8 19:40:31.884074 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:40:31.888951 kernel: usbcore: registered new device driver usb Oct 8 19:40:31.890545 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:40:31.910131 kernel: sr 0:0:0:0: Power-on or device reset occurred Oct 8 19:40:31.910088 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:40:31.913298 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Oct 8 19:40:31.913475 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 8 19:40:31.914962 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Oct 8 19:40:31.920137 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 19:40:31.939740 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 8 19:40:31.939967 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Oct 8 19:40:31.942848 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Oct 8 19:40:31.945341 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 8 19:40:31.945538 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Oct 8 19:40:31.946929 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Oct 8 19:40:31.947951 kernel: hub 1-0:1.0: USB hub found Oct 8 19:40:31.948932 kernel: hub 1-0:1.0: 4 ports detected Oct 8 19:40:31.949938 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Oct 8 19:40:31.952093 kernel: hub 2-0:1.0: USB hub found Oct 8 19:40:31.952589 kernel: sd 0:0:0:1: Power-on or device reset occurred Oct 8 19:40:31.952733 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Oct 8 19:40:31.952817 kernel: sd 0:0:0:1: [sda] Write Protect is off Oct 8 19:40:31.952896 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Oct 8 19:40:31.954428 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Oct 8 19:40:31.954989 kernel: hub 2-0:1.0: 4 ports detected Oct 8 19:40:31.954529 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:40:31.960116 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 8 19:40:31.960142 kernel: GPT:17805311 != 80003071 Oct 8 19:40:31.960152 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 8 19:40:31.960162 kernel: GPT:17805311 != 80003071 Oct 8 19:40:31.960170 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 8 19:40:31.960179 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 19:40:31.960189 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Oct 8 19:40:32.004945 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (501) Oct 8 19:40:32.008621 kernel: BTRFS: device fsid a2a78d47-736b-4018-a518-3cfb16920575 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (510) Oct 8 19:40:32.007948 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Oct 8 19:40:32.019658 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Oct 8 19:40:32.033977 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Oct 8 19:40:32.043278 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. 
Oct 8 19:40:32.043942 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Oct 8 19:40:32.051112 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 8 19:40:32.059293 disk-uuid[572]: Primary Header is updated. Oct 8 19:40:32.059293 disk-uuid[572]: Secondary Entries is updated. Oct 8 19:40:32.059293 disk-uuid[572]: Secondary Header is updated. Oct 8 19:40:32.068699 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 19:40:32.073959 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 19:40:32.078941 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 19:40:32.190013 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Oct 8 19:40:32.333195 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Oct 8 19:40:32.333272 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Oct 8 19:40:32.333565 kernel: usbcore: registered new interface driver usbhid Oct 8 19:40:32.333588 kernel: usbhid: USB HID core driver Oct 8 19:40:32.432054 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Oct 8 19:40:32.560965 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Oct 8 19:40:32.613961 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Oct 8 19:40:33.082495 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 19:40:33.085498 disk-uuid[573]: The operation has completed successfully. Oct 8 19:40:33.140901 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 8 19:40:33.141037 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Oct 8 19:40:33.158755 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 8 19:40:33.162820 sh[590]: Success Oct 8 19:40:33.176962 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 8 19:40:33.230366 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 8 19:40:33.238065 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 8 19:40:33.244391 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 8 19:40:33.261924 kernel: BTRFS info (device dm-0): first mount of filesystem a2a78d47-736b-4018-a518-3cfb16920575 Oct 8 19:40:33.261980 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:40:33.261992 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 8 19:40:33.262504 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 8 19:40:33.264198 kernel: BTRFS info (device dm-0): using free space tree Oct 8 19:40:33.270949 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 8 19:40:33.273730 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 8 19:40:33.274451 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 8 19:40:33.284235 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 8 19:40:33.287285 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Oct 8 19:40:33.303947 kernel: BTRFS info (device sda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:40:33.304026 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:40:33.304039 kernel: BTRFS info (device sda6): using free space tree Oct 8 19:40:33.309045 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 8 19:40:33.309116 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 19:40:33.320070 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 8 19:40:33.321935 kernel: BTRFS info (device sda6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:40:33.330301 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 8 19:40:33.339928 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 8 19:40:33.450950 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 19:40:33.461201 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 19:40:33.474053 ignition[696]: Ignition 2.18.0 Oct 8 19:40:33.474068 ignition[696]: Stage: fetch-offline Oct 8 19:40:33.474127 ignition[696]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:40:33.474136 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 19:40:33.474236 ignition[696]: parsed url from cmdline: "" Oct 8 19:40:33.478015 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Oct 8 19:40:33.474239 ignition[696]: no config URL provided Oct 8 19:40:33.474244 ignition[696]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 19:40:33.474252 ignition[696]: no config at "/usr/lib/ignition/user.ign" Oct 8 19:40:33.474257 ignition[696]: failed to fetch config: resource requires networking Oct 8 19:40:33.474439 ignition[696]: Ignition finished successfully Oct 8 19:40:33.486537 systemd-networkd[776]: lo: Link UP Oct 8 19:40:33.486550 systemd-networkd[776]: lo: Gained carrier Oct 8 19:40:33.488161 systemd-networkd[776]: Enumeration completed Oct 8 19:40:33.488506 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 19:40:33.489465 systemd[1]: Reached target network.target - Network. Oct 8 19:40:33.489714 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:40:33.489717 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 19:40:33.491037 systemd-networkd[776]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:40:33.491041 systemd-networkd[776]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 19:40:33.492219 systemd-networkd[776]: eth0: Link UP Oct 8 19:40:33.492222 systemd-networkd[776]: eth0: Gained carrier Oct 8 19:40:33.492231 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:40:33.496123 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Oct 8 19:40:33.497589 systemd-networkd[776]: eth1: Link UP Oct 8 19:40:33.497592 systemd-networkd[776]: eth1: Gained carrier Oct 8 19:40:33.497601 systemd-networkd[776]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Oct 8 19:40:33.510527 ignition[780]: Ignition 2.18.0 Oct 8 19:40:33.510537 ignition[780]: Stage: fetch Oct 8 19:40:33.510721 ignition[780]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:40:33.510733 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 19:40:33.510821 ignition[780]: parsed url from cmdline: "" Oct 8 19:40:33.510824 ignition[780]: no config URL provided Oct 8 19:40:33.510829 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 19:40:33.510837 ignition[780]: no config at "/usr/lib/ignition/user.ign" Oct 8 19:40:33.510857 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Oct 8 19:40:33.511723 ignition[780]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Oct 8 19:40:33.533043 systemd-networkd[776]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 8 19:40:33.626035 systemd-networkd[776]: eth0: DHCPv4 address 188.245.170.239/32, gateway 172.31.1.1 acquired from 172.31.1.1 Oct 8 19:40:33.712767 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Oct 8 19:40:33.720034 ignition[780]: GET result: OK Oct 8 19:40:33.720192 ignition[780]: parsing config with SHA512: 5aba695699fd35ee855c884086e6cee10add419a9eeb4c3eb1591a7504a0383468f629efff9b56c8a4b9d4430a70612706a29504aedc353728f611bd4492367a Oct 8 19:40:33.727203 unknown[780]: fetched base config from "system" Oct 8 19:40:33.727219 unknown[780]: fetched base config from "system" Oct 8 19:40:33.728419 ignition[780]: fetch: fetch complete Oct 8 19:40:33.727229 unknown[780]: fetched user config from "hetzner" Oct 8 19:40:33.728430 ignition[780]: fetch: fetch passed Oct 8 19:40:33.729152 ignition[780]: Ignition finished successfully Oct 8 19:40:33.731907 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 8 19:40:33.737255 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 8 19:40:33.751460 ignition[788]: Ignition 2.18.0
Oct 8 19:40:33.751475 ignition[788]: Stage: kargs
Oct 8 19:40:33.751663 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:40:33.751672 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 19:40:33.752703 ignition[788]: kargs: kargs passed
Oct 8 19:40:33.752763 ignition[788]: Ignition finished successfully
Oct 8 19:40:33.754178 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 19:40:33.762089 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 19:40:33.774574 ignition[795]: Ignition 2.18.0
Oct 8 19:40:33.774586 ignition[795]: Stage: disks
Oct 8 19:40:33.774795 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:40:33.774808 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 19:40:33.775908 ignition[795]: disks: disks passed
Oct 8 19:40:33.778189 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 8 19:40:33.775983 ignition[795]: Ignition finished successfully
Oct 8 19:40:33.778931 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 19:40:33.783022 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 19:40:33.783747 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:40:33.784868 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:40:33.785945 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:40:33.791097 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 8 19:40:33.812080 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Oct 8 19:40:33.815233 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 8 19:40:33.824992 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 8 19:40:33.879948 kernel: EXT4-fs (sda9): mounted filesystem fbf53fb2-c32f-44fa-a235-3100e56d8882 r/w with ordered data mode. Quota mode: none.
Oct 8 19:40:33.880004 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 8 19:40:33.881210 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:40:33.888045 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:40:33.891047 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 8 19:40:33.895294 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 8 19:40:33.895937 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 8 19:40:33.895971 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:40:33.907000 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (812)
Oct 8 19:40:33.910211 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 8 19:40:33.913043 kernel: BTRFS info (device sda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:40:33.913067 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:40:33.913078 kernel: BTRFS info (device sda6): using free space tree
Oct 8 19:40:33.917872 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 8 19:40:33.917957 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 8 19:40:33.918153 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 8 19:40:33.928646 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:40:33.976079 coreos-metadata[814]: Oct 08 19:40:33.975 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Oct 8 19:40:33.977772 coreos-metadata[814]: Oct 08 19:40:33.977 INFO Fetch successful
Oct 8 19:40:33.979134 coreos-metadata[814]: Oct 08 19:40:33.979 INFO wrote hostname ci-3975-2-2-0-004c89fa14 to /sysroot/etc/hostname
Oct 8 19:40:33.982670 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 8 19:40:33.989104 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Oct 8 19:40:33.994021 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Oct 8 19:40:33.998963 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Oct 8 19:40:34.003604 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 8 19:40:34.107098 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 8 19:40:34.113042 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 8 19:40:34.115097 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 8 19:40:34.129256 kernel: BTRFS info (device sda6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:40:34.152103 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 8 19:40:34.155102 ignition[929]: INFO : Ignition 2.18.0
Oct 8 19:40:34.155759 ignition[929]: INFO : Stage: mount
Oct 8 19:40:34.156367 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:40:34.157987 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 19:40:34.157987 ignition[929]: INFO : mount: mount passed
Oct 8 19:40:34.157987 ignition[929]: INFO : Ignition finished successfully
Oct 8 19:40:34.160235 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 8 19:40:34.165087 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 8 19:40:34.261974 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 8 19:40:34.269205 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:40:34.290975 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (941)
Oct 8 19:40:34.293652 kernel: BTRFS info (device sda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:40:34.293723 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:40:34.293741 kernel: BTRFS info (device sda6): using free space tree
Oct 8 19:40:34.297156 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 8 19:40:34.297228 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 8 19:40:34.300155 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:40:34.324071 ignition[958]: INFO : Ignition 2.18.0
Oct 8 19:40:34.324071 ignition[958]: INFO : Stage: files
Oct 8 19:40:34.326763 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:40:34.326763 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 19:40:34.326763 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Oct 8 19:40:34.330630 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 8 19:40:34.330630 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 8 19:40:34.332814 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 8 19:40:34.332814 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 8 19:40:34.335020 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 8 19:40:34.333891 unknown[958]: wrote ssh authorized keys file for user: core
Oct 8 19:40:34.336951 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 19:40:34.336951 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Oct 8 19:40:34.587479 systemd-networkd[776]: eth1: Gained IPv6LL
Oct 8 19:40:34.632313 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 8 19:40:35.395577 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 19:40:35.397621 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 8 19:40:35.397621 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 8 19:40:35.397621 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:40:35.397621 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:40:35.397621 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:40:35.397621 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:40:35.397621 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:40:35.397621 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:40:35.397621 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:40:35.397621 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:40:35.397621 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 8 19:40:35.397621 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 8 19:40:35.397621 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 8 19:40:35.397621 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Oct 8 19:40:35.483533 systemd-networkd[776]: eth0: Gained IPv6LL
Oct 8 19:40:36.036696 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 8 19:40:36.331747 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 8 19:40:36.331747 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 8 19:40:36.334193 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:40:36.335189 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:40:36.335189 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 8 19:40:36.335189 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 8 19:40:36.335189 ignition[958]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Oct 8 19:40:36.335189 ignition[958]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Oct 8 19:40:36.335189 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 8 19:40:36.335189 ignition[958]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Oct 8 19:40:36.335189 ignition[958]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Oct 8 19:40:36.335189 ignition[958]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:40:36.335189 ignition[958]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:40:36.335189 ignition[958]: INFO : files: files passed
Oct 8 19:40:36.335189 ignition[958]: INFO : Ignition finished successfully
Oct 8 19:40:36.337320 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 8 19:40:36.347250 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 8 19:40:36.350112 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 8 19:40:36.360601 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 8 19:40:36.362184 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 8 19:40:36.372335 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:40:36.372335 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:40:36.374994 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:40:36.377586 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:40:36.378608 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 8 19:40:36.385178 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 8 19:40:36.412701 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 8 19:40:36.413010 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 8 19:40:36.415756 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 8 19:40:36.417191 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 8 19:40:36.418820 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 8 19:40:36.425567 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 8 19:40:36.442569 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:40:36.453127 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 8 19:40:36.464620 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:40:36.465610 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:40:36.467041 systemd[1]: Stopped target timers.target - Timer Units.
Oct 8 19:40:36.468630 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 8 19:40:36.468796 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:40:36.470419 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 8 19:40:36.471076 systemd[1]: Stopped target basic.target - Basic System.
Oct 8 19:40:36.472458 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 8 19:40:36.473724 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:40:36.474796 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 8 19:40:36.475979 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 8 19:40:36.477249 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:40:36.478338 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 8 19:40:36.479282 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 8 19:40:36.480286 systemd[1]: Stopped target swap.target - Swaps.
Oct 8 19:40:36.481093 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 8 19:40:36.481264 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:40:36.482380 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:40:36.483538 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:40:36.484517 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 8 19:40:36.484705 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:40:36.485616 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 8 19:40:36.485782 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:40:36.487125 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 8 19:40:36.487288 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:40:36.488434 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 8 19:40:36.488589 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 8 19:40:36.489369 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 8 19:40:36.489520 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 8 19:40:36.501524 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 8 19:40:36.502069 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 8 19:40:36.502243 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:40:36.506222 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 8 19:40:36.506734 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 8 19:40:36.506909 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:40:36.507941 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 8 19:40:36.508087 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:40:36.521552 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 8 19:40:36.522328 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 8 19:40:36.525833 ignition[1011]: INFO : Ignition 2.18.0
Oct 8 19:40:36.525833 ignition[1011]: INFO : Stage: umount
Oct 8 19:40:36.526793 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:40:36.526793 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 19:40:36.527892 ignition[1011]: INFO : umount: umount passed
Oct 8 19:40:36.527892 ignition[1011]: INFO : Ignition finished successfully
Oct 8 19:40:36.529891 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 8 19:40:36.530065 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 8 19:40:36.532541 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 8 19:40:36.532628 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 8 19:40:36.534321 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 8 19:40:36.534381 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 8 19:40:36.541687 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 8 19:40:36.542356 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 8 19:40:36.543563 systemd[1]: Stopped target network.target - Network.
Oct 8 19:40:36.544781 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 8 19:40:36.544862 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:40:36.546227 systemd[1]: Stopped target paths.target - Path Units.
Oct 8 19:40:36.547694 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 8 19:40:36.551025 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:40:36.553961 systemd[1]: Stopped target slices.target - Slice Units.
Oct 8 19:40:36.555546 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 8 19:40:36.557316 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 8 19:40:36.557397 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:40:36.559027 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 8 19:40:36.559088 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:40:36.561577 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 8 19:40:36.561673 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 8 19:40:36.562780 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 8 19:40:36.562842 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 8 19:40:36.564317 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 8 19:40:36.565630 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 8 19:40:36.569747 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 8 19:40:36.569945 systemd-networkd[776]: eth0: DHCPv6 lease lost
Oct 8 19:40:36.571351 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 8 19:40:36.571468 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 8 19:40:36.575850 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 8 19:40:36.575995 systemd-networkd[776]: eth1: DHCPv6 lease lost
Oct 8 19:40:36.576034 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 19:40:36.579299 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 8 19:40:36.579430 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 19:40:36.581329 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 8 19:40:36.581397 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:40:36.589083 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 8 19:40:36.591950 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 8 19:40:36.592029 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:40:36.593143 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 19:40:36.593192 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:40:36.594390 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 8 19:40:36.594432 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:40:36.595199 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:40:36.610803 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 19:40:36.611991 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 8 19:40:36.614784 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 19:40:36.615974 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:40:36.617570 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 19:40:36.617633 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:40:36.618995 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 19:40:36.619042 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:40:36.620461 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 19:40:36.620530 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:40:36.622257 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 19:40:36.622323 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:40:36.624705 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:40:36.624763 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:40:36.637195 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 19:40:36.637872 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 19:40:36.637959 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:40:36.644204 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:40:36.644286 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:40:36.650043 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 19:40:36.650151 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 19:40:36.661571 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 19:40:36.661766 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 19:40:36.664341 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 19:40:36.665108 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 19:40:36.665181 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 19:40:36.673172 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 19:40:36.682483 systemd[1]: Switching root.
Oct 8 19:40:36.705826 systemd-journald[236]: Journal stopped
Oct 8 19:40:37.584183 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Oct 8 19:40:37.584249 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 19:40:37.584262 kernel: SELinux: policy capability open_perms=1
Oct 8 19:40:37.584272 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 19:40:37.584284 kernel: SELinux: policy capability always_check_network=0
Oct 8 19:40:37.584296 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 19:40:37.584308 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 19:40:37.584317 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 19:40:37.584326 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 19:40:37.584336 kernel: audit: type=1403 audit(1728416436.896:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 19:40:37.584346 systemd[1]: Successfully loaded SELinux policy in 33.596ms.
Oct 8 19:40:37.584368 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.528ms.
Oct 8 19:40:37.584380 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:40:37.584391 systemd[1]: Detected virtualization kvm.
Oct 8 19:40:37.584401 systemd[1]: Detected architecture arm64.
Oct 8 19:40:37.584411 systemd[1]: Detected first boot.
Oct 8 19:40:37.584425 systemd[1]: Hostname set to .
Oct 8 19:40:37.584435 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:40:37.584445 zram_generator::config[1055]: No configuration found.
Oct 8 19:40:37.584456 systemd[1]: Populated /etc with preset unit settings.
Oct 8 19:40:37.584468 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 8 19:40:37.584480 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 8 19:40:37.584491 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:40:37.584502 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 19:40:37.584512 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 19:40:37.584525 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 19:40:37.584550 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 19:40:37.584565 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 19:40:37.584575 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 19:40:37.584587 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 19:40:37.584598 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 19:40:37.584608 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:40:37.584618 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:40:37.584628 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 19:40:37.584639 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 19:40:37.584649 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 19:40:37.584659 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:40:37.584671 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Oct 8 19:40:37.584681 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:40:37.584691 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 8 19:40:37.584701 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 8 19:40:37.584711 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:40:37.584721 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 19:40:37.584733 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:40:37.584744 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:40:37.584754 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:40:37.584764 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:40:37.584774 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 19:40:37.584785 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 19:40:37.584795 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:40:37.584806 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:40:37.584816 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:40:37.584826 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 19:40:37.584838 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 19:40:37.584848 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 19:40:37.584858 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 19:40:37.584868 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 19:40:37.584878 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 19:40:37.584888 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 19:40:37.584899 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 19:40:37.584910 systemd[1]: Reached target machines.target - Containers.
Oct 8 19:40:37.584949 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 19:40:37.584962 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:40:37.584973 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:40:37.584985 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 19:40:37.584997 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:40:37.585009 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:40:37.585021 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:40:37.585031 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 19:40:37.585042 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:40:37.585052 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 19:40:37.585062 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 8 19:40:37.585072 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 8 19:40:37.585082 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 8 19:40:37.585092 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 8 19:40:37.585104 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:40:37.585114 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:40:37.585125 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 19:40:37.585135 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 19:40:37.585145 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:40:37.585156 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 8 19:40:37.585166 systemd[1]: Stopped verity-setup.service.
Oct 8 19:40:37.585178 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 19:40:37.585188 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 19:40:37.585200 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 19:40:37.585235 systemd-journald[1117]: Collecting audit messages is disabled.
Oct 8 19:40:37.585258 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 19:40:37.585268 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 19:40:37.585281 systemd-journald[1117]: Journal started
Oct 8 19:40:37.585303 systemd-journald[1117]: Runtime Journal (/run/log/journal/c572822b6a7d41d594f4b7baef672757) is 8.0M, max 76.5M, 68.5M free.
Oct 8 19:40:37.364846 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 19:40:37.392967 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Oct 8 19:40:37.393452 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 8 19:40:37.588117 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:40:37.588762 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 19:40:37.594953 kernel: ACPI: bus type drm_connector registered
Oct 8 19:40:37.597220 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:40:37.598305 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 19:40:37.598997 kernel: fuse: init (API version 7.39)
Oct 8 19:40:37.598951 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 19:40:37.599879 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:40:37.600191 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:40:37.601672 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:40:37.601795 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:40:37.604449 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:40:37.604962 kernel: loop: module loaded
Oct 8 19:40:37.605121 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:40:37.609256 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:40:37.609381 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:40:37.610268 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 19:40:37.610384 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 19:40:37.612158 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 19:40:37.631282 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 19:40:37.636225 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:40:37.638097 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 19:40:37.639896 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 19:40:37.646207 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 19:40:37.654099 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 19:40:37.657026 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 19:40:37.657077 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:40:37.658739 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 19:40:37.662127 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 19:40:37.670990 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 19:40:37.671692 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:40:37.674265 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 19:40:37.681139 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 19:40:37.681776 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:40:37.684294 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 19:40:37.686648 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:40:37.690159 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:40:37.694173 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 19:40:37.701223 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 19:40:37.709596 systemd-journald[1117]: Time spent on flushing to /var/log/journal/c572822b6a7d41d594f4b7baef672757 is 28.892ms for 1121 entries.
Oct 8 19:40:37.709596 systemd-journald[1117]: System Journal (/var/log/journal/c572822b6a7d41d594f4b7baef672757) is 8.0M, max 584.8M, 576.8M free.
Oct 8 19:40:37.746719 systemd-journald[1117]: Received client request to flush runtime journal.
Oct 8 19:40:37.746767 kernel: loop0: detected capacity change from 0 to 113672
Oct 8 19:40:37.746783 kernel: block loop0: the capability attribute has been deprecated.
Oct 8 19:40:37.715355 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 19:40:37.716138 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 19:40:37.717040 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 19:40:37.750394 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 19:40:37.757204 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 19:40:37.758892 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 19:40:37.772234 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 19:40:37.776961 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 19:40:37.780339 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:40:37.791117 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 19:40:37.797964 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:40:37.808010 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 19:40:37.810443 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 19:40:37.824229 kernel: loop1: detected capacity change from 0 to 59688
Oct 8 19:40:37.825326 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 8 19:40:37.835064 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 19:40:37.842581 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:40:37.858964 kernel: loop2: detected capacity change from 0 to 189592
Oct 8 19:40:37.873742 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Oct 8 19:40:37.874132 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Oct 8 19:40:37.881393 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:40:37.908942 kernel: loop3: detected capacity change from 0 to 8
Oct 8 19:40:37.931973 kernel: loop4: detected capacity change from 0 to 113672
Oct 8 19:40:37.951146 kernel: loop5: detected capacity change from 0 to 59688
Oct 8 19:40:37.967162 kernel: loop6: detected capacity change from 0 to 189592
Oct 8 19:40:37.988942 kernel: loop7: detected capacity change from 0 to 8
Oct 8 19:40:37.990375 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Oct 8 19:40:37.994126 (sd-merge)[1194]: Merged extensions into '/usr'.
Oct 8 19:40:38.002020 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 19:40:38.002047 systemd[1]: Reloading...
Oct 8 19:40:38.123972 zram_generator::config[1218]: No configuration found.
Oct 8 19:40:38.227860 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 19:40:38.268506 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:40:38.316297 systemd[1]: Reloading finished in 313 ms.
Oct 8 19:40:38.338726 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 19:40:38.342662 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 19:40:38.356454 systemd[1]: Starting ensure-sysext.service...
Oct 8 19:40:38.360119 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 8 19:40:38.378128 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)...
Oct 8 19:40:38.378153 systemd[1]: Reloading...
Oct 8 19:40:38.400876 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 19:40:38.401205 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 19:40:38.402342 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 19:40:38.403305 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Oct 8 19:40:38.403514 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Oct 8 19:40:38.409043 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:40:38.409410 systemd-tmpfiles[1256]: Skipping /boot
Oct 8 19:40:38.429882 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:40:38.429899 systemd-tmpfiles[1256]: Skipping /boot
Oct 8 19:40:38.475705 zram_generator::config[1281]: No configuration found.
Oct 8 19:40:38.576790 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:40:38.623603 systemd[1]: Reloading finished in 245 ms.
Oct 8 19:40:38.643234 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 19:40:38.649622 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 19:40:38.661196 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:40:38.664124 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 19:40:38.670521 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 19:40:38.678132 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:40:38.682171 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:40:38.688115 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 19:40:38.697185 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:40:38.711911 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:40:38.715199 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:40:38.719677 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:40:38.720386 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:40:38.725005 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 19:40:38.735347 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 19:40:38.745198 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 19:40:38.746545 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:40:38.747986 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:40:38.749259 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:40:38.750986 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:40:38.766205 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:40:38.766551 systemd-udevd[1330]: Using default interface naming scheme 'v255'.
Oct 8 19:40:38.775276 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:40:38.779271 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:40:38.781720 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:40:38.783956 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 19:40:38.787464 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:40:38.787601 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:40:38.789061 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 19:40:38.790791 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:40:38.791071 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:40:38.794325 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:40:38.794718 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:40:38.803648 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 19:40:38.811768 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:40:38.819182 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:40:38.822786 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:40:38.826217 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:40:38.831229 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:40:38.832702 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:40:38.832830 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 19:40:38.834523 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:40:38.838978 systemd[1]: Finished ensure-sysext.service.
Oct 8 19:40:38.841094 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:40:38.841232 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:40:38.858241 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:40:38.860822 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 8 19:40:38.873100 augenrules[1368]: No rules
Oct 8 19:40:38.878719 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 19:40:38.881482 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:40:38.884778 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:40:38.887299 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:40:38.889849 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:40:38.891291 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:40:38.901764 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:40:38.911889 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:40:38.912698 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:40:38.915585 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:40:38.985378 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 8 19:40:38.986343 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 19:40:38.989708 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Oct 8 19:40:39.013704 systemd-resolved[1327]: Positive Trust Anchors:
Oct 8 19:40:39.022693 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:40:39.022849 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 8 19:40:39.035635 systemd-resolved[1327]: Using system hostname 'ci-3975-2-2-0-004c89fa14'.
Oct 8 19:40:39.050691 systemd-networkd[1366]: lo: Link UP
Oct 8 19:40:39.050702 systemd-networkd[1366]: lo: Gained carrier
Oct 8 19:40:39.053222 systemd-networkd[1366]: Enumeration completed
Oct 8 19:40:39.053361 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:40:39.054298 systemd-networkd[1366]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:40:39.054302 systemd-networkd[1366]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:40:39.055281 systemd-networkd[1366]: eth1: Link UP
Oct 8 19:40:39.055291 systemd-networkd[1366]: eth1: Gained carrier
Oct 8 19:40:39.055305 systemd-networkd[1366]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:40:39.062567 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1388)
Oct 8 19:40:39.069245 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 19:40:39.074414 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:40:39.075650 systemd[1]: Reached target network.target - Network.
Oct 8 19:40:39.076427 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:40:39.110027 systemd-networkd[1366]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:40:39.110627 systemd-timesyncd[1369]: Network configuration changed, trying to establish connection.
Oct 8 19:40:39.130297 systemd-networkd[1366]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:40:39.131322 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:40:39.131415 systemd-networkd[1366]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:40:39.133563 systemd-networkd[1366]: eth0: Link UP
Oct 8 19:40:39.133583 systemd-timesyncd[1369]: Network configuration changed, trying to establish connection.
Oct 8 19:40:39.133789 systemd-networkd[1366]: eth0: Gained carrier
Oct 8 19:40:39.133887 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:40:39.138229 systemd-timesyncd[1369]: Network configuration changed, trying to establish connection.
Oct 8 19:40:39.150957 kernel: mousedev: PS/2 mouse device common for all mice
Oct 8 19:40:39.179013 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1386)
Oct 8 19:40:39.233028 systemd-networkd[1366]: eth0: DHCPv4 address 188.245.170.239/32, gateway 172.31.1.1 acquired from 172.31.1.1
Oct 8 19:40:39.234676 systemd-timesyncd[1369]: Network configuration changed, trying to establish connection.
Oct 8 19:40:39.235293 systemd-timesyncd[1369]: Network configuration changed, trying to establish connection.
Oct 8 19:40:39.258068 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:40:39.261577 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Oct 8 19:40:39.267194 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Oct 8 19:40:39.267290 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 8 19:40:39.267313 kernel: [drm] features: -context_init
Oct 8 19:40:39.267330 kernel: [drm] number of scanouts: 1
Oct 8 19:40:39.267346 kernel: [drm] number of cap sets: 0
Oct 8 19:40:39.270943 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Oct 8 19:40:39.273297 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 19:40:39.275010 kernel: Console: switching to colour frame buffer device 160x50
Oct 8 19:40:39.282345 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 8 19:40:39.297187 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:40:39.301010 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:40:39.312185 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:40:39.313309 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 19:40:39.368230 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:40:39.419481 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 19:40:39.426157 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 19:40:39.453956 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:40:39.484253 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 19:40:39.486145 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:40:39.487486 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:40:39.489073 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 19:40:39.490727 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 19:40:39.492496 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 19:40:39.493575 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 19:40:39.494477 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 19:40:39.495257 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 19:40:39.495371 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:40:39.495933 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:40:39.497961 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 8 19:40:39.500064 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 8 19:40:39.506197 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 8 19:40:39.508979 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 19:40:39.510350 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 8 19:40:39.511190 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:40:39.511804 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:40:39.512654 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:40:39.512693 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:40:39.522601 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 8 19:40:39.525118 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 8 19:40:39.527552 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:40:39.529142 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 8 19:40:39.531004 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 8 19:40:39.544253 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 8 19:40:39.544988 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 8 19:40:39.546711 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 8 19:40:39.552284 jq[1447]: false
Oct 8 19:40:39.553006 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 8 19:40:39.556252 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 8 19:40:39.562148 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 8 19:40:39.568132 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 8 19:40:39.569624 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 8 19:40:39.571339 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 8 19:40:39.572054 systemd[1]: Starting update-engine.service - Update Engine...
Oct 8 19:40:39.577829 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 8 19:40:39.581045 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 8 19:40:39.592435 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 8 19:40:39.592627 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 8 19:40:39.592901 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 8 19:40:39.593055 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 8 19:40:39.616004 dbus-daemon[1446]: [system] SELinux support is enabled
Oct 8 19:40:39.620340 jq[1456]: true
Oct 8 19:40:39.620655 extend-filesystems[1448]: Found loop4
Oct 8 19:40:39.620655 extend-filesystems[1448]: Found loop5
Oct 8 19:40:39.620655 extend-filesystems[1448]: Found loop6
Oct 8 19:40:39.620655 extend-filesystems[1448]: Found loop7
Oct 8 19:40:39.620655 extend-filesystems[1448]: Found sda
Oct 8 19:40:39.620655 extend-filesystems[1448]: Found sda1
Oct 8 19:40:39.620655 extend-filesystems[1448]: Found sda2
Oct 8 19:40:39.620655 extend-filesystems[1448]: Found sda3
Oct 8 19:40:39.620655 extend-filesystems[1448]: Found usr
Oct 8 19:40:39.620655 extend-filesystems[1448]: Found sda4
Oct 8 19:40:39.620655 extend-filesystems[1448]: Found sda6
Oct 8 19:40:39.620655 extend-filesystems[1448]: Found sda7
Oct 8 19:40:39.620655 extend-filesystems[1448]: Found sda9
Oct 8 19:40:39.620655 extend-filesystems[1448]: Checking size of /dev/sda9
Oct 8 19:40:39.627706 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 8 19:40:39.648475 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 8 19:40:39.648553 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 8 19:40:39.652114 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 8 19:40:39.652147 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 8 19:40:39.653193 extend-filesystems[1448]: Resized partition /dev/sda9
Oct 8 19:40:39.663762 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 8 19:40:39.676060 jq[1471]: true
Oct 8 19:40:39.679716 update_engine[1455]: I1008 19:40:39.679514 1455 main.cc:92] Flatcar Update Engine starting
Oct 8 19:40:39.689502 extend-filesystems[1479]: resize2fs 1.47.0 (5-Feb-2023)
Oct 8 19:40:39.690722 systemd[1]: Started update-engine.service - Update Engine.
Oct 8 19:40:39.693660 update_engine[1455]: I1008 19:40:39.693447 1455 update_check_scheduler.cc:74] Next update check in 11m7s
Oct 8 19:40:39.697159 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 8 19:40:39.704030 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Oct 8 19:40:39.713691 coreos-metadata[1445]: Oct 08 19:40:39.713 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Oct 8 19:40:39.715788 coreos-metadata[1445]: Oct 08 19:40:39.715 INFO Fetch successful
Oct 8 19:40:39.717773 coreos-metadata[1445]: Oct 08 19:40:39.717 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Oct 8 19:40:39.718872 coreos-metadata[1445]: Oct 08 19:40:39.718 INFO Fetch successful
Oct 8 19:40:39.730546 tar[1466]: linux-arm64/helm
Oct 8 19:40:39.741358 systemd[1]: motdgen.service: Deactivated successfully.
Oct 8 19:40:39.741845 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 8 19:40:39.860970 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1384) Oct 8 19:40:39.894871 systemd-logind[1454]: New seat seat0. Oct 8 19:40:39.906291 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (Power Button) Oct 8 19:40:39.906308 systemd-logind[1454]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Oct 8 19:40:39.916258 systemd[1]: Started systemd-logind.service - User Login Management. Oct 8 19:40:39.925793 bash[1514]: Updated "/home/core/.ssh/authorized_keys" Oct 8 19:40:39.933231 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Oct 8 19:40:39.932442 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 8 19:40:39.937306 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 8 19:40:39.940592 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 8 19:40:39.948412 systemd[1]: Starting sshkeys.service... Oct 8 19:40:39.963010 containerd[1473]: time="2024-10-08T19:40:39.962158360Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Oct 8 19:40:39.978032 extend-filesystems[1479]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 8 19:40:39.978032 extend-filesystems[1479]: old_desc_blocks = 1, new_desc_blocks = 5 Oct 8 19:40:39.978032 extend-filesystems[1479]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Oct 8 19:40:39.990030 extend-filesystems[1448]: Resized filesystem in /dev/sda9 Oct 8 19:40:39.990030 extend-filesystems[1448]: Found sr0 Oct 8 19:40:39.981247 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 8 19:40:39.981452 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
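As a sanity check on the resize recorded above, the reported final size of 9393147 blocks at 4 KiB per block converts to roughly 35.8 GiB. A quick sketch of that arithmetic (the numbers are taken directly from the resize2fs output):

```python
# Sanity-check the online resize reported in the log:
# resize2fs grew /dev/sda9 from 1617920 to 9393147 blocks of 4 KiB each.
blocks = 9_393_147
block_size = 4096  # "(4k) blocks", per the resize2fs output above

size_bytes = blocks * block_size
size_gib = size_bytes / 2**30
print(f"{size_gib:.1f} GiB")  # ~35.8 GiB
```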
Oct 8 19:40:39.998551 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 8 19:40:40.009254 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 8 19:40:40.050709 coreos-metadata[1524]: Oct 08 19:40:40.050 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Oct 8 19:40:40.053935 coreos-metadata[1524]: Oct 08 19:40:40.053 INFO Fetch successful Oct 8 19:40:40.057238 unknown[1524]: wrote ssh authorized keys file for user: core Oct 8 19:40:40.063178 containerd[1473]: time="2024-10-08T19:40:40.063108320Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 8 19:40:40.063178 containerd[1473]: time="2024-10-08T19:40:40.063162520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:40:40.066562 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 19:40:40.072180 containerd[1473]: time="2024-10-08T19:40:40.068010120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:40:40.072180 containerd[1473]: time="2024-10-08T19:40:40.068058160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:40:40.072180 containerd[1473]: time="2024-10-08T19:40:40.068295680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:40:40.072180 containerd[1473]: time="2024-10-08T19:40:40.068317120Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 8 19:40:40.072180 containerd[1473]: time="2024-10-08T19:40:40.068389240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 19:40:40.072180 containerd[1473]: time="2024-10-08T19:40:40.068432600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:40:40.072180 containerd[1473]: time="2024-10-08T19:40:40.068444280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 19:40:40.072180 containerd[1473]: time="2024-10-08T19:40:40.068500480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:40:40.072180 containerd[1473]: time="2024-10-08T19:40:40.068729000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 8 19:40:40.072180 containerd[1473]: time="2024-10-08T19:40:40.068747880Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 8 19:40:40.072180 containerd[1473]: time="2024-10-08T19:40:40.068757960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:40:40.072390 containerd[1473]: time="2024-10-08T19:40:40.068852000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:40:40.072390 containerd[1473]: time="2024-10-08T19:40:40.068866000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 8 19:40:40.072390 containerd[1473]: time="2024-10-08T19:40:40.069409040Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 8 19:40:40.072390 containerd[1473]: time="2024-10-08T19:40:40.069431440Z" level=info msg="metadata content store policy set" policy=shared Oct 8 19:40:40.080663 containerd[1473]: time="2024-10-08T19:40:40.078519800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 8 19:40:40.080663 containerd[1473]: time="2024-10-08T19:40:40.078564000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 8 19:40:40.080663 containerd[1473]: time="2024-10-08T19:40:40.078580520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 8 19:40:40.080663 containerd[1473]: time="2024-10-08T19:40:40.078616720Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 8 19:40:40.080663 containerd[1473]: time="2024-10-08T19:40:40.078632080Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 8 19:40:40.080663 containerd[1473]: time="2024-10-08T19:40:40.078643720Z" level=info msg="NRI interface is disabled by configuration." Oct 8 19:40:40.080663 containerd[1473]: time="2024-10-08T19:40:40.078655960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Oct 8 19:40:40.080663 containerd[1473]: time="2024-10-08T19:40:40.078890160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 8 19:40:40.080663 containerd[1473]: time="2024-10-08T19:40:40.078932040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 8 19:40:40.080663 containerd[1473]: time="2024-10-08T19:40:40.078951200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 8 19:40:40.080663 containerd[1473]: time="2024-10-08T19:40:40.078966360Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 8 19:40:40.080663 containerd[1473]: time="2024-10-08T19:40:40.078981680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 19:40:40.080663 containerd[1473]: time="2024-10-08T19:40:40.079000160Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 8 19:40:40.080663 containerd[1473]: time="2024-10-08T19:40:40.079013720Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 8 19:40:40.081047 containerd[1473]: time="2024-10-08T19:40:40.079027760Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 8 19:40:40.081047 containerd[1473]: time="2024-10-08T19:40:40.079044000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 8 19:40:40.081047 containerd[1473]: time="2024-10-08T19:40:40.079057880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Oct 8 19:40:40.081047 containerd[1473]: time="2024-10-08T19:40:40.079070120Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 8 19:40:40.081047 containerd[1473]: time="2024-10-08T19:40:40.079084680Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 8 19:40:40.081047 containerd[1473]: time="2024-10-08T19:40:40.079189080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 8 19:40:40.081047 containerd[1473]: time="2024-10-08T19:40:40.079545920Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 8 19:40:40.081047 containerd[1473]: time="2024-10-08T19:40:40.079574960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 19:40:40.081047 containerd[1473]: time="2024-10-08T19:40:40.079588800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 19:40:40.081047 containerd[1473]: time="2024-10-08T19:40:40.079613160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 8 19:40:40.081047 containerd[1473]: time="2024-10-08T19:40:40.079741080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 8 19:40:40.081047 containerd[1473]: time="2024-10-08T19:40:40.079756480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 19:40:40.081047 containerd[1473]: time="2024-10-08T19:40:40.079773960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 8 19:40:40.081047 containerd[1473]: time="2024-10-08T19:40:40.079785560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Oct 8 19:40:40.081287 containerd[1473]: time="2024-10-08T19:40:40.079798200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 8 19:40:40.081287 containerd[1473]: time="2024-10-08T19:40:40.079812040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 8 19:40:40.081287 containerd[1473]: time="2024-10-08T19:40:40.079824520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 8 19:40:40.081287 containerd[1473]: time="2024-10-08T19:40:40.079836160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 8 19:40:40.081287 containerd[1473]: time="2024-10-08T19:40:40.079849120Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 8 19:40:40.081287 containerd[1473]: time="2024-10-08T19:40:40.080038480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 8 19:40:40.081287 containerd[1473]: time="2024-10-08T19:40:40.080058280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 8 19:40:40.081287 containerd[1473]: time="2024-10-08T19:40:40.080072680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 8 19:40:40.081287 containerd[1473]: time="2024-10-08T19:40:40.080086320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 8 19:40:40.081287 containerd[1473]: time="2024-10-08T19:40:40.080103720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 8 19:40:40.081287 containerd[1473]: time="2024-10-08T19:40:40.080119400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Oct 8 19:40:40.081287 containerd[1473]: time="2024-10-08T19:40:40.080131400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 19:40:40.081287 containerd[1473]: time="2024-10-08T19:40:40.080143560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 8 19:40:40.086813 containerd[1473]: time="2024-10-08T19:40:40.086350000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s 
EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 19:40:40.086813 containerd[1473]: time="2024-10-08T19:40:40.086790480Z" level=info msg="Connect containerd service" Oct 8 19:40:40.087083 containerd[1473]: time="2024-10-08T19:40:40.086868520Z" level=info msg="using legacy CRI server" Oct 8 19:40:40.087083 containerd[1473]: time="2024-10-08T19:40:40.086878320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 19:40:40.088189 containerd[1473]: time="2024-10-08T19:40:40.088152480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 19:40:40.090681 update-ssh-keys[1532]: Updated "/home/core/.ssh/authorized_keys" Oct 8 19:40:40.092604 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
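The CRI configuration dumped by containerd above (Snapshotter:overlayfs, default runtime runc with SystemdCgroup:true) corresponds to a containerd `config.toml` fragment along these lines. This is an illustrative sketch reconstructed from the logged values, not the file actually present on this host:

```toml
# Illustrative fragment matching the options visible in the CRI config dump above.
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"
  default_runtime_name = "runc"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```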
Oct 8 19:40:40.093680 containerd[1473]: time="2024-10-08T19:40:40.093356320Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 19:40:40.094071 containerd[1473]: time="2024-10-08T19:40:40.093996240Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 8 19:40:40.094120 containerd[1473]: time="2024-10-08T19:40:40.094089960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 19:40:40.094120 containerd[1473]: time="2024-10-08T19:40:40.094105200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 19:40:40.095357 containerd[1473]: time="2024-10-08T19:40:40.094021040Z" level=info msg="Start subscribing containerd event" Oct 8 19:40:40.095357 containerd[1473]: time="2024-10-08T19:40:40.094392960Z" level=info msg="Start recovering state" Oct 8 19:40:40.095357 containerd[1473]: time="2024-10-08T19:40:40.094475680Z" level=info msg="Start event monitor" Oct 8 19:40:40.095357 containerd[1473]: time="2024-10-08T19:40:40.094488560Z" level=info msg="Start snapshots syncer" Oct 8 19:40:40.095357 containerd[1473]: time="2024-10-08T19:40:40.094498400Z" level=info msg="Start cni network conf syncer for default" Oct 8 19:40:40.095357 containerd[1473]: time="2024-10-08T19:40:40.094505560Z" level=info msg="Start streaming server" Oct 8 19:40:40.098153 containerd[1473]: time="2024-10-08T19:40:40.096394680Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 19:40:40.099099 containerd[1473]: time="2024-10-08T19:40:40.098992440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 19:40:40.100479 containerd[1473]: time="2024-10-08T19:40:40.100448680Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 19:40:40.100625 containerd[1473]: time="2024-10-08T19:40:40.100609240Z" level=info msg="containerd successfully booted in 0.148056s" Oct 8 19:40:40.100849 systemd[1]: Finished sshkeys.service. Oct 8 19:40:40.104607 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 19:40:40.347129 systemd-networkd[1366]: eth1: Gained IPv6LL Oct 8 19:40:40.348020 systemd-timesyncd[1369]: Network configuration changed, trying to establish connection. Oct 8 19:40:40.348374 systemd-networkd[1366]: eth0: Gained IPv6LL Oct 8 19:40:40.348684 systemd-timesyncd[1369]: Network configuration changed, trying to establish connection. Oct 8 19:40:40.356297 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 8 19:40:40.360275 systemd[1]: Reached target network-online.target - Network is Online. Oct 8 19:40:40.374700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:40:40.378219 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 8 19:40:40.430423 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 8 19:40:40.448025 tar[1466]: linux-arm64/LICENSE Oct 8 19:40:40.448121 tar[1466]: linux-arm64/README.md Oct 8 19:40:40.466163 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 19:40:40.666126 sshd_keygen[1481]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 19:40:40.709962 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 8 19:40:40.716621 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Oct 8 19:40:40.723519 systemd[1]: issuegen.service: Deactivated successfully. Oct 8 19:40:40.723746 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 8 19:40:40.728308 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 8 19:40:40.749606 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 8 19:40:40.760098 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 8 19:40:40.763236 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 8 19:40:40.765457 systemd[1]: Reached target getty.target - Login Prompts. Oct 8 19:40:41.220630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:40:41.221903 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 19:40:41.227545 systemd[1]: Startup finished in 805ms (kernel) + 6.174s (initrd) + 4.363s (userspace) = 11.343s. Oct 8 19:40:41.234660 (kubelet)[1573]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:40:41.896856 kubelet[1573]: E1008 19:40:41.896785 1573 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:40:41.900881 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:40:41.901052 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:40:52.151443 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 8 19:40:52.161227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:40:52.368743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:40:52.386523 (kubelet)[1592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:40:52.435675 kubelet[1592]: E1008 19:40:52.435517 1592 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:40:52.439404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:40:52.439835 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:41:02.690594 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 8 19:41:02.699272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:41:02.803624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:41:02.808936 (kubelet)[1607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:41:02.858607 kubelet[1607]: E1008 19:41:02.858541 1607 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:41:02.860792 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:41:02.861070 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:41:10.603590 systemd-timesyncd[1369]: Contacted time server 136.243.7.20:123 (2.flatcar.pool.ntp.org). Oct 8 19:41:10.603667 systemd-timesyncd[1369]: Initial clock synchronization to Tue 2024-10-08 19:41:10.210758 UTC. 
Oct 8 19:41:13.111821 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 8 19:41:13.121351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:41:13.231251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:41:13.235363 (kubelet)[1623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:41:13.282676 kubelet[1623]: E1008 19:41:13.282563 1623 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:41:13.285403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:41:13.285571 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:41:23.433576 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 8 19:41:23.441217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:41:23.537257 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:41:23.542484 (kubelet)[1638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:41:23.583405 kubelet[1638]: E1008 19:41:23.583283 1638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:41:23.586103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:41:23.586288 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:41:24.929010 update_engine[1455]: I1008 19:41:24.928521 1455 update_attempter.cc:509] Updating boot flags... Oct 8 19:41:24.991998 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1654) Oct 8 19:41:25.051850 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1650) Oct 8 19:41:33.684010 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Oct 8 19:41:33.693317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:41:33.795526 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:41:33.806291 (kubelet)[1671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:41:33.850116 kubelet[1671]: E1008 19:41:33.850071 1671 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:41:33.853344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:41:33.853487 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:41:43.933836 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Oct 8 19:41:43.946719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:41:44.065634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:41:44.073168 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:41:44.123453 kubelet[1687]: E1008 19:41:44.123379 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:41:44.126184 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:41:44.126367 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:41:54.184067 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Oct 8 19:41:54.190499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 8 19:41:54.322136 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:41:54.335474 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:41:54.393052 kubelet[1702]: E1008 19:41:54.392980 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:41:54.395721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:41:54.396180 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:42:04.433556 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Oct 8 19:42:04.444182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:42:04.576558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:42:04.582650 (kubelet)[1718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:42:04.624371 kubelet[1718]: E1008 19:42:04.624258 1718 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:42:04.627294 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:42:04.627504 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:42:14.684113 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. 
Oct 8 19:42:14.697375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:42:14.821791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:42:14.836457 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:42:14.892572 kubelet[1733]: E1008 19:42:14.892491 1733 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:42:14.895484 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:42:14.895665 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:42:24.934144 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Oct 8 19:42:24.943218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:42:25.093072 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:42:25.103507 (kubelet)[1748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:42:25.146052 kubelet[1748]: E1008 19:42:25.145975 1748 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:42:25.149221 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:42:25.149377 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
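The kubelet crash loop above repeats because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is normally written by `kubeadm init` or `kubeadm join`, so the loop persists until one of those runs. For illustration only, a minimal KubeletConfiguration of the kind kubeadm would place there looks like this (the exact contents depend on the cluster):

```yaml
# Hypothetical minimal /var/lib/kubelet/config.yaml -- normally generated
# by kubeadm, shown here only to illustrate what the kubelet is missing.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```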
Oct 8 19:42:30.409619 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 19:42:30.420528 systemd[1]: Started sshd@0-188.245.170.239:22-139.178.89.65:38768.service - OpenSSH per-connection server daemon (139.178.89.65:38768). Oct 8 19:42:31.403743 sshd[1756]: Accepted publickey for core from 139.178.89.65 port 38768 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:42:31.408447 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:42:31.420736 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 19:42:31.428299 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 19:42:31.431171 systemd-logind[1454]: New session 1 of user core. Oct 8 19:42:31.439646 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 8 19:42:31.446287 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 19:42:31.451164 (systemd)[1760]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:42:31.564431 systemd[1760]: Queued start job for default target default.target. Oct 8 19:42:31.575575 systemd[1760]: Created slice app.slice - User Application Slice. Oct 8 19:42:31.575621 systemd[1760]: Reached target paths.target - Paths. Oct 8 19:42:31.575643 systemd[1760]: Reached target timers.target - Timers. Oct 8 19:42:31.577763 systemd[1760]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 19:42:31.594605 systemd[1760]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 19:42:31.594739 systemd[1760]: Reached target sockets.target - Sockets. Oct 8 19:42:31.594757 systemd[1760]: Reached target basic.target - Basic System. Oct 8 19:42:31.594807 systemd[1760]: Reached target default.target - Main User Target. Oct 8 19:42:31.594842 systemd[1760]: Startup finished in 136ms. 
Oct 8 19:42:31.594973 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 19:42:31.611714 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 19:42:32.316291 systemd[1]: Started sshd@1-188.245.170.239:22-139.178.89.65:38774.service - OpenSSH per-connection server daemon (139.178.89.65:38774). Oct 8 19:42:33.311780 sshd[1771]: Accepted publickey for core from 139.178.89.65 port 38774 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:42:33.314032 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:42:33.320974 systemd-logind[1454]: New session 2 of user core. Oct 8 19:42:33.326294 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 19:42:34.003909 sshd[1771]: pam_unix(sshd:session): session closed for user core Oct 8 19:42:34.011135 systemd-logind[1454]: Session 2 logged out. Waiting for processes to exit. Oct 8 19:42:34.012464 systemd[1]: sshd@1-188.245.170.239:22-139.178.89.65:38774.service: Deactivated successfully. Oct 8 19:42:34.014911 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 19:42:34.015879 systemd-logind[1454]: Removed session 2. Oct 8 19:42:34.176293 systemd[1]: Started sshd@2-188.245.170.239:22-139.178.89.65:38780.service - OpenSSH per-connection server daemon (139.178.89.65:38780). Oct 8 19:42:35.150526 sshd[1778]: Accepted publickey for core from 139.178.89.65 port 38780 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:42:35.152410 sshd[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:42:35.153316 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Oct 8 19:42:35.162372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:42:35.167154 systemd-logind[1454]: New session 3 of user core. Oct 8 19:42:35.172016 systemd[1]: Started session-3.scope - Session 3 of User core. 
Oct 8 19:42:35.293429 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:42:35.307504 (kubelet)[1789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:42:35.365450 kubelet[1789]: E1008 19:42:35.365399 1789 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:42:35.369344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:42:35.369535 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:42:35.823281 sshd[1778]: pam_unix(sshd:session): session closed for user core Oct 8 19:42:35.828852 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit. Oct 8 19:42:35.829087 systemd[1]: sshd@2-188.245.170.239:22-139.178.89.65:38780.service: Deactivated successfully. Oct 8 19:42:35.831548 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 19:42:35.833624 systemd-logind[1454]: Removed session 3. Oct 8 19:42:36.008353 systemd[1]: Started sshd@3-188.245.170.239:22-139.178.89.65:42246.service - OpenSSH per-connection server daemon (139.178.89.65:42246). Oct 8 19:42:36.985010 sshd[1800]: Accepted publickey for core from 139.178.89.65 port 42246 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:42:36.987953 sshd[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:42:36.994688 systemd-logind[1454]: New session 4 of user core. Oct 8 19:42:37.009282 systemd[1]: Started session-4.scope - Session 4 of User core. 
Oct 8 19:42:37.664770 sshd[1800]: pam_unix(sshd:session): session closed for user core Oct 8 19:42:37.669465 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit. Oct 8 19:42:37.669655 systemd[1]: sshd@3-188.245.170.239:22-139.178.89.65:42246.service: Deactivated successfully. Oct 8 19:42:37.672617 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 19:42:37.676852 systemd-logind[1454]: Removed session 4. Oct 8 19:42:37.844395 systemd[1]: Started sshd@4-188.245.170.239:22-139.178.89.65:42248.service - OpenSSH per-connection server daemon (139.178.89.65:42248). Oct 8 19:42:38.836338 sshd[1807]: Accepted publickey for core from 139.178.89.65 port 42248 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:42:38.838483 sshd[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:42:38.844980 systemd-logind[1454]: New session 5 of user core. Oct 8 19:42:38.853155 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 8 19:42:39.378805 sudo[1810]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 19:42:39.379123 sudo[1810]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 8 19:42:39.397044 sudo[1810]: pam_unix(sudo:session): session closed for user root Oct 8 19:42:39.560173 sshd[1807]: pam_unix(sshd:session): session closed for user core Oct 8 19:42:39.565810 systemd[1]: sshd@4-188.245.170.239:22-139.178.89.65:42248.service: Deactivated successfully. Oct 8 19:42:39.567999 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 19:42:39.568907 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit. Oct 8 19:42:39.571159 systemd-logind[1454]: Removed session 5. Oct 8 19:42:39.728076 systemd[1]: Started sshd@5-188.245.170.239:22-139.178.89.65:42258.service - OpenSSH per-connection server daemon (139.178.89.65:42258). 
Oct 8 19:42:40.729993 sshd[1815]: Accepted publickey for core from 139.178.89.65 port 42258 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:42:40.732216 sshd[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:42:40.738301 systemd-logind[1454]: New session 6 of user core. Oct 8 19:42:40.746154 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 8 19:42:41.253867 sudo[1819]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 19:42:41.254710 sudo[1819]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 8 19:42:41.259331 sudo[1819]: pam_unix(sudo:session): session closed for user root Oct 8 19:42:41.265873 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 19:42:41.266602 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 8 19:42:41.286363 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 19:42:41.288471 auditctl[1822]: No rules Oct 8 19:42:41.288848 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 19:42:41.289068 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 19:42:41.292963 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 19:42:41.341039 augenrules[1840]: No rules Oct 8 19:42:41.342581 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 19:42:41.345156 sudo[1818]: pam_unix(sudo:session): session closed for user root Oct 8 19:42:41.504981 sshd[1815]: pam_unix(sshd:session): session closed for user core Oct 8 19:42:41.511024 systemd[1]: sshd@5-188.245.170.239:22-139.178.89.65:42258.service: Deactivated successfully. Oct 8 19:42:41.514577 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 19:42:41.517030 systemd-logind[1454]: Session 6 logged out. 
Waiting for processes to exit. Oct 8 19:42:41.518137 systemd-logind[1454]: Removed session 6. Oct 8 19:42:41.685447 systemd[1]: Started sshd@6-188.245.170.239:22-139.178.89.65:42262.service - OpenSSH per-connection server daemon (139.178.89.65:42262). Oct 8 19:42:42.672553 sshd[1848]: Accepted publickey for core from 139.178.89.65 port 42262 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:42:42.675340 sshd[1848]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:42:42.682120 systemd-logind[1454]: New session 7 of user core. Oct 8 19:42:42.691160 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 19:42:43.199689 sudo[1851]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 19:42:43.200171 sudo[1851]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 8 19:42:43.332200 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 8 19:42:43.334934 (dockerd)[1860]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 19:42:43.591964 dockerd[1860]: time="2024-10-08T19:42:43.589788523Z" level=info msg="Starting up" Oct 8 19:42:43.633885 dockerd[1860]: time="2024-10-08T19:42:43.633818056Z" level=info msg="Loading containers: start." Oct 8 19:42:43.752941 kernel: Initializing XFRM netlink socket Oct 8 19:42:43.823047 systemd-networkd[1366]: docker0: Link UP Oct 8 19:42:43.845381 dockerd[1860]: time="2024-10-08T19:42:43.845253102Z" level=info msg="Loading containers: done." 
Oct 8 19:42:43.919393 dockerd[1860]: time="2024-10-08T19:42:43.919347619Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 19:42:43.919607 dockerd[1860]: time="2024-10-08T19:42:43.919567748Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Oct 8 19:42:43.919724 dockerd[1860]: time="2024-10-08T19:42:43.919703073Z" level=info msg="Daemon has completed initialization" Oct 8 19:42:43.954743 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 8 19:42:43.955197 dockerd[1860]: time="2024-10-08T19:42:43.955029864Z" level=info msg="API listen on /run/docker.sock" Oct 8 19:42:44.619908 containerd[1473]: time="2024-10-08T19:42:44.619508771Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\"" Oct 8 19:42:45.294711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3733841765.mount: Deactivated successfully. Oct 8 19:42:45.433425 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Oct 8 19:42:45.444703 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:42:45.576190 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:42:45.578621 (kubelet)[2016]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:42:45.627028 kubelet[2016]: E1008 19:42:45.626906 2016 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:42:45.629721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:42:45.629987 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:42:46.406754 containerd[1473]: time="2024-10-08T19:42:46.406698366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:46.407940 containerd[1473]: time="2024-10-08T19:42:46.407894692Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.0: active requests=0, bytes read=25691613" Oct 8 19:42:46.408806 containerd[1473]: time="2024-10-08T19:42:46.408733044Z" level=info msg="ImageCreate event name:\"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:46.411768 containerd[1473]: time="2024-10-08T19:42:46.411707438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:46.413298 containerd[1473]: time="2024-10-08T19:42:46.413076370Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.0\" with image id \"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.0\", repo digest 
\"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\", size \"25688321\" in 1.793517638s" Oct 8 19:42:46.413298 containerd[1473]: time="2024-10-08T19:42:46.413122972Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\" returns image reference \"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388\"" Oct 8 19:42:46.414171 containerd[1473]: time="2024-10-08T19:42:46.413955324Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\"" Oct 8 19:42:48.314062 containerd[1473]: time="2024-10-08T19:42:48.313966643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:48.315689 containerd[1473]: time="2024-10-08T19:42:48.315641474Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.0: active requests=0, bytes read=22460106" Oct 8 19:42:48.316484 containerd[1473]: time="2024-10-08T19:42:48.316392030Z" level=info msg="ImageCreate event name:\"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:48.321395 containerd[1473]: time="2024-10-08T19:42:48.321224405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:48.325206 containerd[1473]: time="2024-10-08T19:42:48.325133025Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.0\" with image id \"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.0\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\", size \"23947353\" in 
1.911104539s" Oct 8 19:42:48.325325 containerd[1473]: time="2024-10-08T19:42:48.325222424Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\" returns image reference \"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd\"" Oct 8 19:42:48.325821 containerd[1473]: time="2024-10-08T19:42:48.325789981Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\"" Oct 8 19:42:50.109944 containerd[1473]: time="2024-10-08T19:42:50.109882416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:50.111002 containerd[1473]: time="2024-10-08T19:42:50.110955933Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.0: active requests=0, bytes read=17018578" Oct 8 19:42:50.112032 containerd[1473]: time="2024-10-08T19:42:50.112002049Z" level=info msg="ImageCreate event name:\"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:50.114971 containerd[1473]: time="2024-10-08T19:42:50.114904880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:50.118954 containerd[1473]: time="2024-10-08T19:42:50.118004711Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.0\" with image id \"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.0\", repo digest \"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\", size \"18505843\" in 1.79217309s" Oct 8 19:42:50.118954 containerd[1473]: time="2024-10-08T19:42:50.118057911Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\" returns image reference 
\"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb\"" Oct 8 19:42:50.120845 containerd[1473]: time="2024-10-08T19:42:50.120766062Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\"" Oct 8 19:42:51.459406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4227931427.mount: Deactivated successfully. Oct 8 19:42:51.923478 containerd[1473]: time="2024-10-08T19:42:51.923347814Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:51.924977 containerd[1473]: time="2024-10-08T19:42:51.924902731Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.0: active requests=0, bytes read=26753341" Oct 8 19:42:51.926276 containerd[1473]: time="2024-10-08T19:42:51.926177889Z" level=info msg="ImageCreate event name:\"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:51.928711 containerd[1473]: time="2024-10-08T19:42:51.928524684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:51.929628 containerd[1473]: time="2024-10-08T19:42:51.929223842Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.0\" with image id \"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.0\", repo digest \"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\", size \"26752334\" in 1.80840658s" Oct 8 19:42:51.929628 containerd[1473]: time="2024-10-08T19:42:51.929265082Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\" returns image reference \"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89\"" Oct 8 19:42:51.929831 
containerd[1473]: time="2024-10-08T19:42:51.929750201Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 19:42:52.607462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount67300793.mount: Deactivated successfully. Oct 8 19:42:53.339606 containerd[1473]: time="2024-10-08T19:42:53.339538848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:53.341512 containerd[1473]: time="2024-10-08T19:42:53.341468288Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Oct 8 19:42:53.342621 containerd[1473]: time="2024-10-08T19:42:53.342575888Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:53.345408 containerd[1473]: time="2024-10-08T19:42:53.345342927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:53.346708 containerd[1473]: time="2024-10-08T19:42:53.346547527Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.416767566s" Oct 8 19:42:53.346708 containerd[1473]: time="2024-10-08T19:42:53.346585527Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Oct 8 19:42:53.347259 containerd[1473]: time="2024-10-08T19:42:53.347132727Z" level=info 
msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 8 19:42:54.019744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2016530767.mount: Deactivated successfully. Oct 8 19:42:54.030043 containerd[1473]: time="2024-10-08T19:42:54.029977450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:54.030991 containerd[1473]: time="2024-10-08T19:42:54.030941490Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Oct 8 19:42:54.031896 containerd[1473]: time="2024-10-08T19:42:54.031839051Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:54.034207 containerd[1473]: time="2024-10-08T19:42:54.034170693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:54.035473 containerd[1473]: time="2024-10-08T19:42:54.034865533Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 687.500886ms" Oct 8 19:42:54.035473 containerd[1473]: time="2024-10-08T19:42:54.034903093Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 8 19:42:54.035763 containerd[1473]: time="2024-10-08T19:42:54.035738774Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Oct 8 19:42:54.641056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3271573687.mount: Deactivated 
successfully. Oct 8 19:42:55.683636 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Oct 8 19:42:55.693238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:42:55.806383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:42:55.820326 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:42:55.862260 kubelet[2180]: E1008 19:42:55.862175 2180 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:42:55.864603 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:42:55.864755 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 8 19:42:57.730338 containerd[1473]: time="2024-10-08T19:42:57.730268139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:57.734614 containerd[1473]: time="2024-10-08T19:42:57.734553153Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=65868242" Oct 8 19:42:57.737072 containerd[1473]: time="2024-10-08T19:42:57.736996121Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:57.755245 containerd[1473]: time="2024-10-08T19:42:57.755158102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:57.758040 containerd[1473]: time="2024-10-08T19:42:57.757587390Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.721661136s" Oct 8 19:42:57.758040 containerd[1473]: time="2024-10-08T19:42:57.757647111Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Oct 8 19:43:01.103463 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:43:01.113271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:43:01.153214 systemd[1]: Reloading requested from client PID 2219 ('systemctl') (unit session-7.scope)... Oct 8 19:43:01.153398 systemd[1]: Reloading... 
Oct 8 19:43:01.285963 zram_generator::config[2265]: No configuration found. Oct 8 19:43:01.370036 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:43:01.437827 systemd[1]: Reloading finished in 284 ms. Oct 8 19:43:01.501381 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 8 19:43:01.501490 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 8 19:43:01.502090 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:43:01.507291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:43:01.626498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:43:01.641415 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:43:01.695006 kubelet[2305]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:43:01.695006 kubelet[2305]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:43:01.695006 kubelet[2305]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 8 19:43:01.697951 kubelet[2305]: I1008 19:43:01.695876 2305 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:43:03.089407 kubelet[2305]: I1008 19:43:03.089332 2305 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 8 19:43:03.089407 kubelet[2305]: I1008 19:43:03.089372 2305 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:43:03.089835 kubelet[2305]: I1008 19:43:03.089671 2305 server.go:929] "Client rotation is on, will bootstrap in background" Oct 8 19:43:03.137164 kubelet[2305]: I1008 19:43:03.136868 2305 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:43:03.139699 kubelet[2305]: E1008 19:43:03.138749 2305 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://188.245.170.239:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 188.245.170.239:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:43:03.148957 kubelet[2305]: E1008 19:43:03.148821 2305 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 8 19:43:03.148957 kubelet[2305]: I1008 19:43:03.148860 2305 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 8 19:43:03.157893 kubelet[2305]: I1008 19:43:03.157020 2305 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 19:43:03.157893 kubelet[2305]: I1008 19:43:03.157475 2305 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 8 19:43:03.157893 kubelet[2305]: I1008 19:43:03.157612 2305 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:43:03.158155 kubelet[2305]: I1008 19:43:03.157638 2305 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3975-2-2-0-004c89fa14","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMa
nagerPolicyOptions":null,"CgroupVersion":2} Oct 8 19:43:03.158155 kubelet[2305]: I1008 19:43:03.158116 2305 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:43:03.158155 kubelet[2305]: I1008 19:43:03.158128 2305 container_manager_linux.go:300] "Creating device plugin manager" Oct 8 19:43:03.158357 kubelet[2305]: I1008 19:43:03.158283 2305 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:43:03.163162 kubelet[2305]: I1008 19:43:03.163095 2305 kubelet.go:408] "Attempting to sync node with API server" Oct 8 19:43:03.163162 kubelet[2305]: I1008 19:43:03.163160 2305 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:43:03.163335 kubelet[2305]: I1008 19:43:03.163192 2305 kubelet.go:314] "Adding apiserver pod source" Oct 8 19:43:03.163335 kubelet[2305]: I1008 19:43:03.163203 2305 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:43:03.168698 kubelet[2305]: W1008 19:43:03.168395 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.170.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-0-004c89fa14&limit=500&resourceVersion=0": dial tcp 188.245.170.239:6443: connect: connection refused Oct 8 19:43:03.170059 kubelet[2305]: E1008 19:43:03.168873 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://188.245.170.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-0-004c89fa14&limit=500&resourceVersion=0\": dial tcp 188.245.170.239:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:43:03.170059 kubelet[2305]: W1008 19:43:03.169613 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.170.239:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 188.245.170.239:6443: 
connect: connection refused Oct 8 19:43:03.170059 kubelet[2305]: E1008 19:43:03.169662 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://188.245.170.239:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.170.239:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:43:03.170059 kubelet[2305]: I1008 19:43:03.169824 2305 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 8 19:43:03.175415 kubelet[2305]: I1008 19:43:03.175366 2305 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:43:03.176953 kubelet[2305]: W1008 19:43:03.176900 2305 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 8 19:43:03.178268 kubelet[2305]: I1008 19:43:03.178233 2305 server.go:1269] "Started kubelet" Oct 8 19:43:03.181429 kubelet[2305]: I1008 19:43:03.181384 2305 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:43:03.182847 kubelet[2305]: I1008 19:43:03.182821 2305 server.go:460] "Adding debug handlers to kubelet server" Oct 8 19:43:03.183095 kubelet[2305]: I1008 19:43:03.183032 2305 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:43:03.183396 kubelet[2305]: I1008 19:43:03.183370 2305 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:43:03.184892 kubelet[2305]: E1008 19:43:03.183524 2305 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.170.239:6443/api/v1/namespaces/default/events\": dial tcp 188.245.170.239:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ci-3975-2-2-0-004c89fa14.17fc91b9903b39e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975-2-2-0-004c89fa14,UID:ci-3975-2-2-0-004c89fa14,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975-2-2-0-004c89fa14,},FirstTimestamp:2024-10-08 19:43:03.178205666 +0000 UTC m=+1.532155332,LastTimestamp:2024-10-08 19:43:03.178205666 +0000 UTC m=+1.532155332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975-2-2-0-004c89fa14,}" Oct 8 19:43:03.188211 kubelet[2305]: E1008 19:43:03.188185 2305 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:43:03.188605 kubelet[2305]: I1008 19:43:03.188581 2305 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:43:03.189660 kubelet[2305]: I1008 19:43:03.188872 2305 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 8 19:43:03.190994 kubelet[2305]: I1008 19:43:03.190977 2305 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 8 19:43:03.191693 kubelet[2305]: I1008 19:43:03.191673 2305 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 8 19:43:03.191826 kubelet[2305]: I1008 19:43:03.191814 2305 reconciler.go:26] "Reconciler: start to sync state" Oct 8 19:43:03.192376 kubelet[2305]: W1008 19:43:03.192336 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.170.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.170.239:6443: connect: connection refused Oct 8 19:43:03.192538 kubelet[2305]: E1008 
19:43:03.192514 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://188.245.170.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.170.239:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:43:03.193217 kubelet[2305]: I1008 19:43:03.193195 2305 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:43:03.193384 kubelet[2305]: I1008 19:43:03.193366 2305 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:43:03.194673 kubelet[2305]: E1008 19:43:03.194636 2305 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3975-2-2-0-004c89fa14\" not found" Oct 8 19:43:03.195139 kubelet[2305]: I1008 19:43:03.195121 2305 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:43:03.211564 kubelet[2305]: I1008 19:43:03.211461 2305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:43:03.213244 kubelet[2305]: I1008 19:43:03.213177 2305 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 19:43:03.213244 kubelet[2305]: I1008 19:43:03.213220 2305 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:43:03.213244 kubelet[2305]: I1008 19:43:03.213244 2305 kubelet.go:2321] "Starting kubelet main sync loop" Oct 8 19:43:03.213448 kubelet[2305]: E1008 19:43:03.213297 2305 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:43:03.221079 kubelet[2305]: E1008 19:43:03.221011 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.170.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-0-004c89fa14?timeout=10s\": dial tcp 188.245.170.239:6443: connect: connection refused" interval="200ms" Oct 8 19:43:03.221486 kubelet[2305]: W1008 19:43:03.221296 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.170.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.170.239:6443: connect: connection refused Oct 8 19:43:03.221486 kubelet[2305]: E1008 19:43:03.221365 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://188.245.170.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.170.239:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:43:03.229242 kubelet[2305]: I1008 19:43:03.229217 2305 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:43:03.229242 kubelet[2305]: I1008 19:43:03.229237 2305 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:43:03.229385 kubelet[2305]: I1008 19:43:03.229257 2305 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:43:03.231977 kubelet[2305]: I1008 19:43:03.231910 2305 policy_none.go:49] 
"None policy: Start" Oct 8 19:43:03.232819 kubelet[2305]: I1008 19:43:03.232693 2305 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:43:03.232965 kubelet[2305]: I1008 19:43:03.232838 2305 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:43:03.243386 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 8 19:43:03.253053 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 8 19:43:03.256130 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 8 19:43:03.265036 kubelet[2305]: I1008 19:43:03.264085 2305 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:43:03.265036 kubelet[2305]: I1008 19:43:03.264282 2305 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 8 19:43:03.265036 kubelet[2305]: I1008 19:43:03.264314 2305 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 19:43:03.265036 kubelet[2305]: I1008 19:43:03.264883 2305 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:43:03.266883 kubelet[2305]: E1008 19:43:03.266803 2305 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975-2-2-0-004c89fa14\" not found" Oct 8 19:43:03.328609 systemd[1]: Created slice kubepods-burstable-pod161c4de14102ba432cdc2ed989d78c3d.slice - libcontainer container kubepods-burstable-pod161c4de14102ba432cdc2ed989d78c3d.slice. Oct 8 19:43:03.346017 systemd[1]: Created slice kubepods-burstable-pod0f2377dde8a3ff5b6d034bc10174c79c.slice - libcontainer container kubepods-burstable-pod0f2377dde8a3ff5b6d034bc10174c79c.slice. 
Oct 8 19:43:03.360674 systemd[1]: Created slice kubepods-burstable-pod6ed01128afb4dca86862a2e64f324ade.slice - libcontainer container kubepods-burstable-pod6ed01128afb4dca86862a2e64f324ade.slice. Oct 8 19:43:03.367184 kubelet[2305]: I1008 19:43:03.367150 2305 kubelet_node_status.go:72] "Attempting to register node" node="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:03.367836 kubelet[2305]: E1008 19:43:03.367799 2305 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://188.245.170.239:6443/api/v1/nodes\": dial tcp 188.245.170.239:6443: connect: connection refused" node="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:03.393292 kubelet[2305]: I1008 19:43:03.393202 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/161c4de14102ba432cdc2ed989d78c3d-flexvolume-dir\") pod \"kube-controller-manager-ci-3975-2-2-0-004c89fa14\" (UID: \"161c4de14102ba432cdc2ed989d78c3d\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-0-004c89fa14" Oct 8 19:43:03.393292 kubelet[2305]: I1008 19:43:03.393257 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/161c4de14102ba432cdc2ed989d78c3d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975-2-2-0-004c89fa14\" (UID: \"161c4de14102ba432cdc2ed989d78c3d\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-0-004c89fa14" Oct 8 19:43:03.393292 kubelet[2305]: I1008 19:43:03.393291 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ed01128afb4dca86862a2e64f324ade-ca-certs\") pod \"kube-apiserver-ci-3975-2-2-0-004c89fa14\" (UID: \"6ed01128afb4dca86862a2e64f324ade\") " pod="kube-system/kube-apiserver-ci-3975-2-2-0-004c89fa14" Oct 8 19:43:03.393625 kubelet[2305]: I1008 19:43:03.393318 
2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/161c4de14102ba432cdc2ed989d78c3d-ca-certs\") pod \"kube-controller-manager-ci-3975-2-2-0-004c89fa14\" (UID: \"161c4de14102ba432cdc2ed989d78c3d\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-0-004c89fa14" Oct 8 19:43:03.393625 kubelet[2305]: I1008 19:43:03.393341 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/161c4de14102ba432cdc2ed989d78c3d-k8s-certs\") pod \"kube-controller-manager-ci-3975-2-2-0-004c89fa14\" (UID: \"161c4de14102ba432cdc2ed989d78c3d\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-0-004c89fa14" Oct 8 19:43:03.393625 kubelet[2305]: I1008 19:43:03.393366 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/161c4de14102ba432cdc2ed989d78c3d-kubeconfig\") pod \"kube-controller-manager-ci-3975-2-2-0-004c89fa14\" (UID: \"161c4de14102ba432cdc2ed989d78c3d\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-0-004c89fa14" Oct 8 19:43:03.393625 kubelet[2305]: I1008 19:43:03.393390 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f2377dde8a3ff5b6d034bc10174c79c-kubeconfig\") pod \"kube-scheduler-ci-3975-2-2-0-004c89fa14\" (UID: \"0f2377dde8a3ff5b6d034bc10174c79c\") " pod="kube-system/kube-scheduler-ci-3975-2-2-0-004c89fa14" Oct 8 19:43:03.393625 kubelet[2305]: I1008 19:43:03.393412 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ed01128afb4dca86862a2e64f324ade-k8s-certs\") pod \"kube-apiserver-ci-3975-2-2-0-004c89fa14\" (UID: \"6ed01128afb4dca86862a2e64f324ade\") " 
pod="kube-system/kube-apiserver-ci-3975-2-2-0-004c89fa14" Oct 8 19:43:03.393815 kubelet[2305]: I1008 19:43:03.393434 2305 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ed01128afb4dca86862a2e64f324ade-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975-2-2-0-004c89fa14\" (UID: \"6ed01128afb4dca86862a2e64f324ade\") " pod="kube-system/kube-apiserver-ci-3975-2-2-0-004c89fa14" Oct 8 19:43:03.421742 kubelet[2305]: E1008 19:43:03.421649 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.170.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-0-004c89fa14?timeout=10s\": dial tcp 188.245.170.239:6443: connect: connection refused" interval="400ms" Oct 8 19:43:03.570531 kubelet[2305]: I1008 19:43:03.570469 2305 kubelet_node_status.go:72] "Attempting to register node" node="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:03.571111 kubelet[2305]: E1008 19:43:03.571077 2305 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://188.245.170.239:6443/api/v1/nodes\": dial tcp 188.245.170.239:6443: connect: connection refused" node="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:03.646506 containerd[1473]: time="2024-10-08T19:43:03.646322106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975-2-2-0-004c89fa14,Uid:161c4de14102ba432cdc2ed989d78c3d,Namespace:kube-system,Attempt:0,}" Oct 8 19:43:03.658289 containerd[1473]: time="2024-10-08T19:43:03.658229599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975-2-2-0-004c89fa14,Uid:0f2377dde8a3ff5b6d034bc10174c79c,Namespace:kube-system,Attempt:0,}" Oct 8 19:43:03.664885 containerd[1473]: time="2024-10-08T19:43:03.664517889Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-3975-2-2-0-004c89fa14,Uid:6ed01128afb4dca86862a2e64f324ade,Namespace:kube-system,Attempt:0,}" Oct 8 19:43:03.822575 kubelet[2305]: E1008 19:43:03.822520 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.170.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-0-004c89fa14?timeout=10s\": dial tcp 188.245.170.239:6443: connect: connection refused" interval="800ms" Oct 8 19:43:03.974595 kubelet[2305]: I1008 19:43:03.974510 2305 kubelet_node_status.go:72] "Attempting to register node" node="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:03.975575 kubelet[2305]: E1008 19:43:03.975421 2305 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://188.245.170.239:6443/api/v1/nodes\": dial tcp 188.245.170.239:6443: connect: connection refused" node="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:04.055088 kubelet[2305]: W1008 19:43:04.054973 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.170.239:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 188.245.170.239:6443: connect: connection refused Oct 8 19:43:04.055264 kubelet[2305]: E1008 19:43:04.055107 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://188.245.170.239:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.170.239:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:43:04.150897 kubelet[2305]: W1008 19:43:04.150797 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.170.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.170.239:6443: connect: connection refused Oct 8 19:43:04.150897 
kubelet[2305]: E1008 19:43:04.150876 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://188.245.170.239:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.170.239:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:43:04.215011 kubelet[2305]: W1008 19:43:04.214523 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.170.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-0-004c89fa14&limit=500&resourceVersion=0": dial tcp 188.245.170.239:6443: connect: connection refused Oct 8 19:43:04.215011 kubelet[2305]: E1008 19:43:04.214949 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://188.245.170.239:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-0-004c89fa14&limit=500&resourceVersion=0\": dial tcp 188.245.170.239:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:43:04.227457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3329946069.mount: Deactivated successfully. 
Oct 8 19:43:04.233231 containerd[1473]: time="2024-10-08T19:43:04.232972592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:43:04.234943 containerd[1473]: time="2024-10-08T19:43:04.234882688Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:43:04.235886 containerd[1473]: time="2024-10-08T19:43:04.235849416Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:43:04.237558 containerd[1473]: time="2024-10-08T19:43:04.237487310Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:43:04.238591 containerd[1473]: time="2024-10-08T19:43:04.238558159Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:43:04.239191 containerd[1473]: time="2024-10-08T19:43:04.239156605Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Oct 8 19:43:04.240040 containerd[1473]: time="2024-10-08T19:43:04.240002532Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:43:04.242491 containerd[1473]: time="2024-10-08T19:43:04.242405392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:43:04.244448 
containerd[1473]: time="2024-10-08T19:43:04.244405049Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 597.380418ms" Oct 8 19:43:04.248186 containerd[1473]: time="2024-10-08T19:43:04.247961880Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 583.322831ms" Oct 8 19:43:04.249937 containerd[1473]: time="2024-10-08T19:43:04.249865376Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.511776ms" Oct 8 19:43:04.424905 containerd[1473]: time="2024-10-08T19:43:04.424705787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:43:04.424905 containerd[1473]: time="2024-10-08T19:43:04.424864829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:04.424905 containerd[1473]: time="2024-10-08T19:43:04.424904029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:43:04.425091 containerd[1473]: time="2024-10-08T19:43:04.424961629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:04.427966 containerd[1473]: time="2024-10-08T19:43:04.427124688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:43:04.427966 containerd[1473]: time="2024-10-08T19:43:04.427340850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:04.427966 containerd[1473]: time="2024-10-08T19:43:04.427488691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:43:04.427966 containerd[1473]: time="2024-10-08T19:43:04.427531211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:04.429119 containerd[1473]: time="2024-10-08T19:43:04.428961023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:43:04.429222 containerd[1473]: time="2024-10-08T19:43:04.429141185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:04.429649 containerd[1473]: time="2024-10-08T19:43:04.429238386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:43:04.429649 containerd[1473]: time="2024-10-08T19:43:04.429256826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:04.452088 systemd[1]: Started cri-containerd-152291e65af2ac50c7bf5c1a6de88f272407fc43a8f134fe72da3a14deac0034.scope - libcontainer container 152291e65af2ac50c7bf5c1a6de88f272407fc43a8f134fe72da3a14deac0034. 
Oct 8 19:43:04.456818 systemd[1]: Started cri-containerd-1abbd0e511f102a2195285741cd564758172bac5234ef0ab91786be364bb2625.scope - libcontainer container 1abbd0e511f102a2195285741cd564758172bac5234ef0ab91786be364bb2625. Oct 8 19:43:04.461192 systemd[1]: Started cri-containerd-00a5af567b009195cc8042a5841696b10bd17cecf6896a5efa32529fc326342c.scope - libcontainer container 00a5af567b009195cc8042a5841696b10bd17cecf6896a5efa32529fc326342c. Oct 8 19:43:04.508474 containerd[1473]: time="2024-10-08T19:43:04.508184419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975-2-2-0-004c89fa14,Uid:6ed01128afb4dca86862a2e64f324ade,Namespace:kube-system,Attempt:0,} returns sandbox id \"152291e65af2ac50c7bf5c1a6de88f272407fc43a8f134fe72da3a14deac0034\"" Oct 8 19:43:04.514778 containerd[1473]: time="2024-10-08T19:43:04.514552194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975-2-2-0-004c89fa14,Uid:161c4de14102ba432cdc2ed989d78c3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1abbd0e511f102a2195285741cd564758172bac5234ef0ab91786be364bb2625\"" Oct 8 19:43:04.514778 containerd[1473]: time="2024-10-08T19:43:04.514618834Z" level=info msg="CreateContainer within sandbox \"152291e65af2ac50c7bf5c1a6de88f272407fc43a8f134fe72da3a14deac0034\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 19:43:04.523653 containerd[1473]: time="2024-10-08T19:43:04.523601751Z" level=info msg="CreateContainer within sandbox \"1abbd0e511f102a2195285741cd564758172bac5234ef0ab91786be364bb2625\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 19:43:04.537315 containerd[1473]: time="2024-10-08T19:43:04.537266867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975-2-2-0-004c89fa14,Uid:0f2377dde8a3ff5b6d034bc10174c79c,Namespace:kube-system,Attempt:0,} returns sandbox id \"00a5af567b009195cc8042a5841696b10bd17cecf6896a5efa32529fc326342c\"" Oct 8 
19:43:04.540956 containerd[1473]: time="2024-10-08T19:43:04.540903898Z" level=info msg="CreateContainer within sandbox \"00a5af567b009195cc8042a5841696b10bd17cecf6896a5efa32529fc326342c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 8 19:43:04.543634 containerd[1473]: time="2024-10-08T19:43:04.543549281Z" level=info msg="CreateContainer within sandbox \"152291e65af2ac50c7bf5c1a6de88f272407fc43a8f134fe72da3a14deac0034\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cba2353d8dd35c7e5fb7d36b29267f227c9331bad5845a0137870959bf8d9cce\""
Oct 8 19:43:04.557781 containerd[1473]: time="2024-10-08T19:43:04.557713482Z" level=info msg="CreateContainer within sandbox \"00a5af567b009195cc8042a5841696b10bd17cecf6896a5efa32529fc326342c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f4940165701e37561013662e256395689350a0ee6714734769ad521fdee5836d\""
Oct 8 19:43:04.558747 containerd[1473]: time="2024-10-08T19:43:04.558466048Z" level=info msg="StartContainer for \"cba2353d8dd35c7e5fb7d36b29267f227c9331bad5845a0137870959bf8d9cce\""
Oct 8 19:43:04.559975 containerd[1473]: time="2024-10-08T19:43:04.559823660Z" level=info msg="StartContainer for \"f4940165701e37561013662e256395689350a0ee6714734769ad521fdee5836d\""
Oct 8 19:43:04.562779 containerd[1473]: time="2024-10-08T19:43:04.562393802Z" level=info msg="CreateContainer within sandbox \"1abbd0e511f102a2195285741cd564758172bac5234ef0ab91786be364bb2625\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5b0e88c62db15ea05e3efe8212ba8a92cd2bde242889ecf04cc55dd82812d614\""
Oct 8 19:43:04.563852 containerd[1473]: time="2024-10-08T19:43:04.563809454Z" level=info msg="StartContainer for \"5b0e88c62db15ea05e3efe8212ba8a92cd2bde242889ecf04cc55dd82812d614\""
Oct 8 19:43:04.597386 systemd[1]: Started cri-containerd-5b0e88c62db15ea05e3efe8212ba8a92cd2bde242889ecf04cc55dd82812d614.scope - libcontainer container 5b0e88c62db15ea05e3efe8212ba8a92cd2bde242889ecf04cc55dd82812d614.
Oct 8 19:43:04.602002 systemd[1]: Started cri-containerd-cba2353d8dd35c7e5fb7d36b29267f227c9331bad5845a0137870959bf8d9cce.scope - libcontainer container cba2353d8dd35c7e5fb7d36b29267f227c9331bad5845a0137870959bf8d9cce.
Oct 8 19:43:04.619178 systemd[1]: Started cri-containerd-f4940165701e37561013662e256395689350a0ee6714734769ad521fdee5836d.scope - libcontainer container f4940165701e37561013662e256395689350a0ee6714734769ad521fdee5836d.
Oct 8 19:43:04.624779 kubelet[2305]: E1008 19:43:04.623800 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.170.239:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-0-004c89fa14?timeout=10s\": dial tcp 188.245.170.239:6443: connect: connection refused" interval="1.6s"
Oct 8 19:43:04.630288 kubelet[2305]: W1008 19:43:04.629534 2305 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.170.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.170.239:6443: connect: connection refused
Oct 8 19:43:04.630288 kubelet[2305]: E1008 19:43:04.629608 2305 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://188.245.170.239:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.170.239:6443: connect: connection refused" logger="UnhandledError"
Oct 8 19:43:04.665738 containerd[1473]: time="2024-10-08T19:43:04.665698083Z" level=info msg="StartContainer for \"cba2353d8dd35c7e5fb7d36b29267f227c9331bad5845a0137870959bf8d9cce\" returns successfully"
Oct 8 19:43:04.682782 containerd[1473]: time="2024-10-08T19:43:04.677097700Z" level=info msg="StartContainer for \"5b0e88c62db15ea05e3efe8212ba8a92cd2bde242889ecf04cc55dd82812d614\" returns successfully"
Oct 8 19:43:04.698733 containerd[1473]: time="2024-10-08T19:43:04.696719467Z" level=info msg="StartContainer for \"f4940165701e37561013662e256395689350a0ee6714734769ad521fdee5836d\" returns successfully"
Oct 8 19:43:04.778372 kubelet[2305]: I1008 19:43:04.777793 2305 kubelet_node_status.go:72] "Attempting to register node" node="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:04.778372 kubelet[2305]: E1008 19:43:04.778233 2305 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://188.245.170.239:6443/api/v1/nodes\": dial tcp 188.245.170.239:6443: connect: connection refused" node="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:06.380400 kubelet[2305]: I1008 19:43:06.380362 2305 kubelet_node_status.go:72] "Attempting to register node" node="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:06.846315 kubelet[2305]: E1008 19:43:06.846270 2305 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975-2-2-0-004c89fa14\" not found" node="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:06.943715 kubelet[2305]: I1008 19:43:06.943669 2305 kubelet_node_status.go:75] "Successfully registered node" node="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:07.172231 kubelet[2305]: I1008 19:43:07.172062 2305 apiserver.go:52] "Watching apiserver"
Oct 8 19:43:07.191975 kubelet[2305]: I1008 19:43:07.191898 2305 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 8 19:43:09.112254 systemd[1]: Reloading requested from client PID 2578 ('systemctl') (unit session-7.scope)...
Oct 8 19:43:09.112322 systemd[1]: Reloading...
Oct 8 19:43:09.228329 zram_generator::config[2623]: No configuration found.
Oct 8 19:43:09.316207 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:43:09.396383 systemd[1]: Reloading finished in 283 ms.
Oct 8 19:43:09.440983 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:43:09.454357 systemd[1]: kubelet.service: Deactivated successfully.
Oct 8 19:43:09.454671 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:43:09.454807 systemd[1]: kubelet.service: Consumed 1.934s CPU time, 118.3M memory peak, 0B memory swap peak.
Oct 8 19:43:09.464258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:43:09.580022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:43:09.584795 (kubelet)[2660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 19:43:09.640128 kubelet[2660]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:43:09.640128 kubelet[2660]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 19:43:09.640128 kubelet[2660]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:43:09.642203 kubelet[2660]: I1008 19:43:09.642001 2660 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 19:43:09.653574 kubelet[2660]: I1008 19:43:09.652647 2660 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Oct 8 19:43:09.653574 kubelet[2660]: I1008 19:43:09.652684 2660 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 19:43:09.653574 kubelet[2660]: I1008 19:43:09.652996 2660 server.go:929] "Client rotation is on, will bootstrap in background"
Oct 8 19:43:09.656058 kubelet[2660]: I1008 19:43:09.655982 2660 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 8 19:43:09.659144 kubelet[2660]: I1008 19:43:09.659099 2660 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 19:43:09.663988 kubelet[2660]: E1008 19:43:09.663782 2660 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Oct 8 19:43:09.663988 kubelet[2660]: I1008 19:43:09.663837 2660 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Oct 8 19:43:09.666422 kubelet[2660]: I1008 19:43:09.666363 2660 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 19:43:09.666579 kubelet[2660]: I1008 19:43:09.666526 2660 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 8 19:43:09.666695 kubelet[2660]: I1008 19:43:09.666651 2660 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 19:43:09.666895 kubelet[2660]: I1008 19:43:09.666684 2660 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3975-2-2-0-004c89fa14","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 8 19:43:09.666895 kubelet[2660]: I1008 19:43:09.666880 2660 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 19:43:09.666895 kubelet[2660]: I1008 19:43:09.666888 2660 container_manager_linux.go:300] "Creating device plugin manager"
Oct 8 19:43:09.667124 kubelet[2660]: I1008 19:43:09.666970 2660 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:43:09.667124 kubelet[2660]: I1008 19:43:09.667090 2660 kubelet.go:408] "Attempting to sync node with API server"
Oct 8 19:43:09.667124 kubelet[2660]: I1008 19:43:09.667103 2660 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 19:43:09.667694 kubelet[2660]: I1008 19:43:09.667123 2660 kubelet.go:314] "Adding apiserver pod source"
Oct 8 19:43:09.667694 kubelet[2660]: I1008 19:43:09.667604 2660 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 19:43:09.670586 kubelet[2660]: I1008 19:43:09.670545 2660 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Oct 8 19:43:09.671811 kubelet[2660]: I1008 19:43:09.671785 2660 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 19:43:09.674320 kubelet[2660]: I1008 19:43:09.674294 2660 server.go:1269] "Started kubelet"
Oct 8 19:43:09.685980 kubelet[2660]: I1008 19:43:09.685127 2660 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 19:43:09.687399 kubelet[2660]: I1008 19:43:09.687355 2660 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 19:43:09.695941 kubelet[2660]: I1008 19:43:09.693543 2660 server.go:460] "Adding debug handlers to kubelet server"
Oct 8 19:43:09.695941 kubelet[2660]: I1008 19:43:09.694533 2660 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 19:43:09.695941 kubelet[2660]: I1008 19:43:09.694805 2660 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 19:43:09.695941 kubelet[2660]: E1008 19:43:09.690776 2660 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3975-2-2-0-004c89fa14\" not found"
Oct 8 19:43:09.695941 kubelet[2660]: I1008 19:43:09.688501 2660 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 8 19:43:09.695941 kubelet[2660]: I1008 19:43:09.690360 2660 volume_manager.go:289] "Starting Kubelet Volume Manager"
Oct 8 19:43:09.695941 kubelet[2660]: I1008 19:43:09.690373 2660 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 8 19:43:09.695941 kubelet[2660]: I1008 19:43:09.695404 2660 reconciler.go:26] "Reconciler: start to sync state"
Oct 8 19:43:09.696576 kubelet[2660]: I1008 19:43:09.696545 2660 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 19:43:09.705081 kubelet[2660]: I1008 19:43:09.704319 2660 factory.go:221] Registration of the containerd container factory successfully
Oct 8 19:43:09.705284 kubelet[2660]: I1008 19:43:09.705268 2660 factory.go:221] Registration of the systemd container factory successfully
Oct 8 19:43:09.714386 kubelet[2660]: I1008 19:43:09.713095 2660 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 19:43:09.714667 kubelet[2660]: I1008 19:43:09.714615 2660 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 19:43:09.714667 kubelet[2660]: I1008 19:43:09.714665 2660 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 19:43:09.714745 kubelet[2660]: I1008 19:43:09.714690 2660 kubelet.go:2321] "Starting kubelet main sync loop"
Oct 8 19:43:09.714786 kubelet[2660]: E1008 19:43:09.714759 2660 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 19:43:09.774313 kubelet[2660]: I1008 19:43:09.774235 2660 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 19:43:09.774313 kubelet[2660]: I1008 19:43:09.774306 2660 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 19:43:09.774473 kubelet[2660]: I1008 19:43:09.774341 2660 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:43:09.774514 kubelet[2660]: I1008 19:43:09.774499 2660 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 8 19:43:09.774544 kubelet[2660]: I1008 19:43:09.774514 2660 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 8 19:43:09.774544 kubelet[2660]: I1008 19:43:09.774533 2660 policy_none.go:49] "None policy: Start"
Oct 8 19:43:09.775285 kubelet[2660]: I1008 19:43:09.775258 2660 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 8 19:43:09.775330 kubelet[2660]: I1008 19:43:09.775296 2660 state_mem.go:35] "Initializing new in-memory state store"
Oct 8 19:43:09.775486 kubelet[2660]: I1008 19:43:09.775472 2660 state_mem.go:75] "Updated machine memory state"
Oct 8 19:43:09.779965 kubelet[2660]: I1008 19:43:09.779488 2660 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 8 19:43:09.779965 kubelet[2660]: I1008 19:43:09.779685 2660 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 8 19:43:09.779965 kubelet[2660]: I1008 19:43:09.779704 2660 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 8 19:43:09.780148 kubelet[2660]: I1008 19:43:09.780019 2660 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 8 19:43:09.885615 kubelet[2660]: I1008 19:43:09.885503 2660 kubelet_node_status.go:72] "Attempting to register node" node="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:09.895058 kubelet[2660]: I1008 19:43:09.895030 2660 kubelet_node_status.go:111] "Node was previously registered" node="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:09.895504 kubelet[2660]: I1008 19:43:09.895491 2660 kubelet_node_status.go:75] "Successfully registered node" node="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:09.895969 kubelet[2660]: I1008 19:43:09.895878 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f2377dde8a3ff5b6d034bc10174c79c-kubeconfig\") pod \"kube-scheduler-ci-3975-2-2-0-004c89fa14\" (UID: \"0f2377dde8a3ff5b6d034bc10174c79c\") " pod="kube-system/kube-scheduler-ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:09.895969 kubelet[2660]: I1008 19:43:09.895905 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ed01128afb4dca86862a2e64f324ade-ca-certs\") pod \"kube-apiserver-ci-3975-2-2-0-004c89fa14\" (UID: \"6ed01128afb4dca86862a2e64f324ade\") " pod="kube-system/kube-apiserver-ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:09.895969 kubelet[2660]: I1008 19:43:09.895942 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ed01128afb4dca86862a2e64f324ade-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975-2-2-0-004c89fa14\" (UID: \"6ed01128afb4dca86862a2e64f324ade\") " pod="kube-system/kube-apiserver-ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:09.896261 kubelet[2660]: I1008 19:43:09.896185 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/161c4de14102ba432cdc2ed989d78c3d-kubeconfig\") pod \"kube-controller-manager-ci-3975-2-2-0-004c89fa14\" (UID: \"161c4de14102ba432cdc2ed989d78c3d\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:09.896261 kubelet[2660]: I1008 19:43:09.896215 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/161c4de14102ba432cdc2ed989d78c3d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975-2-2-0-004c89fa14\" (UID: \"161c4de14102ba432cdc2ed989d78c3d\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:09.896261 kubelet[2660]: I1008 19:43:09.896233 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ed01128afb4dca86862a2e64f324ade-k8s-certs\") pod \"kube-apiserver-ci-3975-2-2-0-004c89fa14\" (UID: \"6ed01128afb4dca86862a2e64f324ade\") " pod="kube-system/kube-apiserver-ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:09.896547 kubelet[2660]: I1008 19:43:09.896442 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/161c4de14102ba432cdc2ed989d78c3d-ca-certs\") pod \"kube-controller-manager-ci-3975-2-2-0-004c89fa14\" (UID: \"161c4de14102ba432cdc2ed989d78c3d\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:09.896547 kubelet[2660]: I1008 19:43:09.896467 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/161c4de14102ba432cdc2ed989d78c3d-flexvolume-dir\") pod \"kube-controller-manager-ci-3975-2-2-0-004c89fa14\" (UID: \"161c4de14102ba432cdc2ed989d78c3d\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:09.896547 kubelet[2660]: I1008 19:43:09.896482 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/161c4de14102ba432cdc2ed989d78c3d-k8s-certs\") pod \"kube-controller-manager-ci-3975-2-2-0-004c89fa14\" (UID: \"161c4de14102ba432cdc2ed989d78c3d\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:10.668556 kubelet[2660]: I1008 19:43:10.668504 2660 apiserver.go:52] "Watching apiserver"
Oct 8 19:43:10.695637 kubelet[2660]: I1008 19:43:10.695556 2660 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 8 19:43:10.765959 kubelet[2660]: E1008 19:43:10.763740 2660 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975-2-2-0-004c89fa14\" already exists" pod="kube-system/kube-apiserver-ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:10.765959 kubelet[2660]: E1008 19:43:10.763791 2660 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3975-2-2-0-004c89fa14\" already exists" pod="kube-system/kube-scheduler-ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:10.792083 kubelet[2660]: I1008 19:43:10.791905 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975-2-2-0-004c89fa14" podStartSLOduration=1.791883963 podStartE2EDuration="1.791883963s" podCreationTimestamp="2024-10-08 19:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:43:10.779664135 +0000 UTC m=+1.191594348" watchObservedRunningTime="2024-10-08 19:43:10.791883963 +0000 UTC m=+1.203814136"
Oct 8 19:43:10.805070 kubelet[2660]: I1008 19:43:10.804849 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975-2-2-0-004c89fa14" podStartSLOduration=1.80482788 podStartE2EDuration="1.80482788s" podCreationTimestamp="2024-10-08 19:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:43:10.803652626 +0000 UTC m=+1.215582879" watchObservedRunningTime="2024-10-08 19:43:10.80482788 +0000 UTC m=+1.216758053"
Oct 8 19:43:10.805070 kubelet[2660]: I1008 19:43:10.804962 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975-2-2-0-004c89fa14" podStartSLOduration=1.804957682 podStartE2EDuration="1.804957682s" podCreationTimestamp="2024-10-08 19:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:43:10.792533771 +0000 UTC m=+1.204463984" watchObservedRunningTime="2024-10-08 19:43:10.804957682 +0000 UTC m=+1.216887855"
Oct 8 19:43:15.130477 kubelet[2660]: I1008 19:43:15.130052 2660 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 8 19:43:15.130906 containerd[1473]: time="2024-10-08T19:43:15.130396327Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 8 19:43:15.132003 kubelet[2660]: I1008 19:43:15.131835 2660 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 8 19:43:15.260055 sudo[1851]: pam_unix(sudo:session): session closed for user root
Oct 8 19:43:15.421483 sshd[1848]: pam_unix(sshd:session): session closed for user core
Oct 8 19:43:15.426445 systemd[1]: sshd@6-188.245.170.239:22-139.178.89.65:42262.service: Deactivated successfully.
Oct 8 19:43:15.429459 systemd[1]: session-7.scope: Deactivated successfully.
Oct 8 19:43:15.429724 systemd[1]: session-7.scope: Consumed 5.331s CPU time, 102.7M memory peak, 0B memory swap peak.
Oct 8 19:43:15.431494 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit.
Oct 8 19:43:15.433209 systemd-logind[1454]: Removed session 7.
Oct 8 19:43:16.039879 systemd[1]: Created slice kubepods-besteffort-podb64d76d5_e367_46cb_b227_dc1b6c13a297.slice - libcontainer container kubepods-besteffort-podb64d76d5_e367_46cb_b227_dc1b6c13a297.slice.
Oct 8 19:43:16.042287 kubelet[2660]: I1008 19:43:16.041479 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b64d76d5-e367-46cb-b227-dc1b6c13a297-kube-proxy\") pod \"kube-proxy-j44wx\" (UID: \"b64d76d5-e367-46cb-b227-dc1b6c13a297\") " pod="kube-system/kube-proxy-j44wx"
Oct 8 19:43:16.042287 kubelet[2660]: I1008 19:43:16.041522 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b64d76d5-e367-46cb-b227-dc1b6c13a297-xtables-lock\") pod \"kube-proxy-j44wx\" (UID: \"b64d76d5-e367-46cb-b227-dc1b6c13a297\") " pod="kube-system/kube-proxy-j44wx"
Oct 8 19:43:16.042287 kubelet[2660]: I1008 19:43:16.042016 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqg57\" (UniqueName: \"kubernetes.io/projected/b64d76d5-e367-46cb-b227-dc1b6c13a297-kube-api-access-vqg57\") pod \"kube-proxy-j44wx\" (UID: \"b64d76d5-e367-46cb-b227-dc1b6c13a297\") " pod="kube-system/kube-proxy-j44wx"
Oct 8 19:43:16.042287 kubelet[2660]: I1008 19:43:16.042205 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b64d76d5-e367-46cb-b227-dc1b6c13a297-lib-modules\") pod \"kube-proxy-j44wx\" (UID: \"b64d76d5-e367-46cb-b227-dc1b6c13a297\") " pod="kube-system/kube-proxy-j44wx"
Oct 8 19:43:16.251230 systemd[1]: Created slice kubepods-besteffort-podfcee5f5f_ce4d_4202_a82e_ba091b6947d3.slice - libcontainer container kubepods-besteffort-podfcee5f5f_ce4d_4202_a82e_ba091b6947d3.slice.
Oct 8 19:43:16.344839 kubelet[2660]: I1008 19:43:16.344623 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fcee5f5f-ce4d-4202-a82e-ba091b6947d3-var-lib-calico\") pod \"tigera-operator-55748b469f-gl9vq\" (UID: \"fcee5f5f-ce4d-4202-a82e-ba091b6947d3\") " pod="tigera-operator/tigera-operator-55748b469f-gl9vq"
Oct 8 19:43:16.344839 kubelet[2660]: I1008 19:43:16.344717 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7jbf\" (UniqueName: \"kubernetes.io/projected/fcee5f5f-ce4d-4202-a82e-ba091b6947d3-kube-api-access-g7jbf\") pod \"tigera-operator-55748b469f-gl9vq\" (UID: \"fcee5f5f-ce4d-4202-a82e-ba091b6947d3\") " pod="tigera-operator/tigera-operator-55748b469f-gl9vq"
Oct 8 19:43:16.358391 containerd[1473]: time="2024-10-08T19:43:16.358300722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j44wx,Uid:b64d76d5-e367-46cb-b227-dc1b6c13a297,Namespace:kube-system,Attempt:0,}"
Oct 8 19:43:16.387464 containerd[1473]: time="2024-10-08T19:43:16.387198559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:43:16.387464 containerd[1473]: time="2024-10-08T19:43:16.387342681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:43:16.387464 containerd[1473]: time="2024-10-08T19:43:16.387392042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:43:16.387464 containerd[1473]: time="2024-10-08T19:43:16.387424562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:43:16.415305 systemd[1]: Started cri-containerd-5bf86f2d1db4de5b3d09e63c4c6e834466635d2720d815346a7eb3e1aeae6164.scope - libcontainer container 5bf86f2d1db4de5b3d09e63c4c6e834466635d2720d815346a7eb3e1aeae6164.
Oct 8 19:43:16.442017 containerd[1473]: time="2024-10-08T19:43:16.441940186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j44wx,Uid:b64d76d5-e367-46cb-b227-dc1b6c13a297,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bf86f2d1db4de5b3d09e63c4c6e834466635d2720d815346a7eb3e1aeae6164\""
Oct 8 19:43:16.447377 containerd[1473]: time="2024-10-08T19:43:16.447234426Z" level=info msg="CreateContainer within sandbox \"5bf86f2d1db4de5b3d09e63c4c6e834466635d2720d815346a7eb3e1aeae6164\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 8 19:43:16.470215 containerd[1473]: time="2024-10-08T19:43:16.470159572Z" level=info msg="CreateContainer within sandbox \"5bf86f2d1db4de5b3d09e63c4c6e834466635d2720d815346a7eb3e1aeae6164\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6accf2fe8be480b74ea1cd84024fbed7a8402524ee615e44ffb2d9c779216b6f\""
Oct 8 19:43:16.471333 containerd[1473]: time="2024-10-08T19:43:16.471181428Z" level=info msg="StartContainer for \"6accf2fe8be480b74ea1cd84024fbed7a8402524ee615e44ffb2d9c779216b6f\""
Oct 8 19:43:16.502249 systemd[1]: Started cri-containerd-6accf2fe8be480b74ea1cd84024fbed7a8402524ee615e44ffb2d9c779216b6f.scope - libcontainer container 6accf2fe8be480b74ea1cd84024fbed7a8402524ee615e44ffb2d9c779216b6f.
Oct 8 19:43:16.540837 containerd[1473]: time="2024-10-08T19:43:16.539712063Z" level=info msg="StartContainer for \"6accf2fe8be480b74ea1cd84024fbed7a8402524ee615e44ffb2d9c779216b6f\" returns successfully"
Oct 8 19:43:16.557811 containerd[1473]: time="2024-10-08T19:43:16.557376770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-gl9vq,Uid:fcee5f5f-ce4d-4202-a82e-ba091b6947d3,Namespace:tigera-operator,Attempt:0,}"
Oct 8 19:43:16.589526 containerd[1473]: time="2024-10-08T19:43:16.588960488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:43:16.589773 containerd[1473]: time="2024-10-08T19:43:16.589706099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:43:16.589773 containerd[1473]: time="2024-10-08T19:43:16.589731299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:43:16.590002 containerd[1473]: time="2024-10-08T19:43:16.589742939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:43:16.610150 systemd[1]: Started cri-containerd-fa30be7bb6c4189504bdd648064c91fa31bb61de085a6a9954833f331ccf9773.scope - libcontainer container fa30be7bb6c4189504bdd648064c91fa31bb61de085a6a9954833f331ccf9773.
Oct 8 19:43:16.652093 containerd[1473]: time="2024-10-08T19:43:16.650462177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-gl9vq,Uid:fcee5f5f-ce4d-4202-a82e-ba091b6947d3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fa30be7bb6c4189504bdd648064c91fa31bb61de085a6a9954833f331ccf9773\""
Oct 8 19:43:16.652093 containerd[1473]: time="2024-10-08T19:43:16.652012560Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Oct 8 19:43:17.045236 kubelet[2660]: I1008 19:43:17.045155 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j44wx" podStartSLOduration=1.045135 podStartE2EDuration="1.045135s" podCreationTimestamp="2024-10-08 19:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:43:16.784011435 +0000 UTC m=+7.195941608" watchObservedRunningTime="2024-10-08 19:43:17.045135 +0000 UTC m=+7.457065173"
Oct 8 19:43:18.304798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount842481341.mount: Deactivated successfully.
Oct 8 19:43:18.617152 containerd[1473]: time="2024-10-08T19:43:18.617000394Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:18.618159 containerd[1473]: time="2024-10-08T19:43:18.618122691Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485903"
Oct 8 19:43:18.619141 containerd[1473]: time="2024-10-08T19:43:18.619086227Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:18.621487 containerd[1473]: time="2024-10-08T19:43:18.621433344Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:18.622692 containerd[1473]: time="2024-10-08T19:43:18.622201917Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 1.970143836s"
Oct 8 19:43:18.622692 containerd[1473]: time="2024-10-08T19:43:18.622236237Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\""
Oct 8 19:43:18.624872 containerd[1473]: time="2024-10-08T19:43:18.624830679Z" level=info msg="CreateContainer within sandbox \"fa30be7bb6c4189504bdd648064c91fa31bb61de085a6a9954833f331ccf9773\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Oct 8 19:43:18.648013 containerd[1473]: time="2024-10-08T19:43:18.647962888Z" level=info msg="CreateContainer within sandbox \"fa30be7bb6c4189504bdd648064c91fa31bb61de085a6a9954833f331ccf9773\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b6183d9571bbed224e68ab7228b599a46c3db63fcb3020d8a6b9bd6853232d3c\""
Oct 8 19:43:18.648826 containerd[1473]: time="2024-10-08T19:43:18.648801582Z" level=info msg="StartContainer for \"b6183d9571bbed224e68ab7228b599a46c3db63fcb3020d8a6b9bd6853232d3c\""
Oct 8 19:43:18.672094 systemd[1]: Started cri-containerd-b6183d9571bbed224e68ab7228b599a46c3db63fcb3020d8a6b9bd6853232d3c.scope - libcontainer container b6183d9571bbed224e68ab7228b599a46c3db63fcb3020d8a6b9bd6853232d3c.
Oct 8 19:43:18.698318 containerd[1473]: time="2024-10-08T19:43:18.698259012Z" level=info msg="StartContainer for \"b6183d9571bbed224e68ab7228b599a46c3db63fcb3020d8a6b9bd6853232d3c\" returns successfully"
Oct 8 19:43:20.330728 kubelet[2660]: I1008 19:43:20.330497 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-55748b469f-gl9vq" podStartSLOduration=2.358950729 podStartE2EDuration="4.330479026s" podCreationTimestamp="2024-10-08 19:43:16 +0000 UTC" firstStartedPulling="2024-10-08 19:43:16.651585194 +0000 UTC m=+7.063515367" lastFinishedPulling="2024-10-08 19:43:18.623113531 +0000 UTC m=+9.035043664" observedRunningTime="2024-10-08 19:43:18.790449206 +0000 UTC m=+9.202379419" watchObservedRunningTime="2024-10-08 19:43:20.330479026 +0000 UTC m=+10.742409199"
Oct 8 19:43:22.786205 systemd[1]: Created slice kubepods-besteffort-podbd1b601b_6fc4_4afd_aaaf_25189f0d81e9.slice - libcontainer container kubepods-besteffort-podbd1b601b_6fc4_4afd_aaaf_25189f0d81e9.slice.
Oct 8 19:43:22.886023 systemd[1]: Created slice kubepods-besteffort-pod4e11d726_78c3_46dc_bc24_0f457aa9c8ec.slice - libcontainer container kubepods-besteffort-pod4e11d726_78c3_46dc_bc24_0f457aa9c8ec.slice.
Oct 8 19:43:22.888416 kubelet[2660]: I1008 19:43:22.887675 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd1b601b-6fc4-4afd-aaaf-25189f0d81e9-tigera-ca-bundle\") pod \"calico-typha-76d85db5f9-kl6k2\" (UID: \"bd1b601b-6fc4-4afd-aaaf-25189f0d81e9\") " pod="calico-system/calico-typha-76d85db5f9-kl6k2" Oct 8 19:43:22.888416 kubelet[2660]: I1008 19:43:22.887720 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bd1b601b-6fc4-4afd-aaaf-25189f0d81e9-typha-certs\") pod \"calico-typha-76d85db5f9-kl6k2\" (UID: \"bd1b601b-6fc4-4afd-aaaf-25189f0d81e9\") " pod="calico-system/calico-typha-76d85db5f9-kl6k2" Oct 8 19:43:22.888416 kubelet[2660]: I1008 19:43:22.887737 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d74l\" (UniqueName: \"kubernetes.io/projected/bd1b601b-6fc4-4afd-aaaf-25189f0d81e9-kube-api-access-6d74l\") pod \"calico-typha-76d85db5f9-kl6k2\" (UID: \"bd1b601b-6fc4-4afd-aaaf-25189f0d81e9\") " pod="calico-system/calico-typha-76d85db5f9-kl6k2" Oct 8 19:43:22.990132 kubelet[2660]: I1008 19:43:22.988531 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e11d726-78c3-46dc-bc24-0f457aa9c8ec-tigera-ca-bundle\") pod \"calico-node-cdlfc\" (UID: \"4e11d726-78c3-46dc-bc24-0f457aa9c8ec\") " pod="calico-system/calico-node-cdlfc" Oct 8 19:43:22.990508 kubelet[2660]: I1008 19:43:22.990483 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e11d726-78c3-46dc-bc24-0f457aa9c8ec-lib-modules\") pod \"calico-node-cdlfc\" (UID: \"4e11d726-78c3-46dc-bc24-0f457aa9c8ec\") " 
pod="calico-system/calico-node-cdlfc" Oct 8 19:43:22.990618 kubelet[2660]: I1008 19:43:22.990605 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4e11d726-78c3-46dc-bc24-0f457aa9c8ec-var-run-calico\") pod \"calico-node-cdlfc\" (UID: \"4e11d726-78c3-46dc-bc24-0f457aa9c8ec\") " pod="calico-system/calico-node-cdlfc" Oct 8 19:43:22.990746 kubelet[2660]: I1008 19:43:22.990683 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4e11d726-78c3-46dc-bc24-0f457aa9c8ec-flexvol-driver-host\") pod \"calico-node-cdlfc\" (UID: \"4e11d726-78c3-46dc-bc24-0f457aa9c8ec\") " pod="calico-system/calico-node-cdlfc" Oct 8 19:43:22.990746 kubelet[2660]: I1008 19:43:22.990707 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4e11d726-78c3-46dc-bc24-0f457aa9c8ec-var-lib-calico\") pod \"calico-node-cdlfc\" (UID: \"4e11d726-78c3-46dc-bc24-0f457aa9c8ec\") " pod="calico-system/calico-node-cdlfc" Oct 8 19:43:22.990746 kubelet[2660]: I1008 19:43:22.990725 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4e11d726-78c3-46dc-bc24-0f457aa9c8ec-cni-bin-dir\") pod \"calico-node-cdlfc\" (UID: \"4e11d726-78c3-46dc-bc24-0f457aa9c8ec\") " pod="calico-system/calico-node-cdlfc" Oct 8 19:43:22.990911 kubelet[2660]: I1008 19:43:22.990864 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4e11d726-78c3-46dc-bc24-0f457aa9c8ec-node-certs\") pod \"calico-node-cdlfc\" (UID: \"4e11d726-78c3-46dc-bc24-0f457aa9c8ec\") " pod="calico-system/calico-node-cdlfc" Oct 8 19:43:22.990911 kubelet[2660]: 
I1008 19:43:22.990886 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4e11d726-78c3-46dc-bc24-0f457aa9c8ec-cni-net-dir\") pod \"calico-node-cdlfc\" (UID: \"4e11d726-78c3-46dc-bc24-0f457aa9c8ec\") " pod="calico-system/calico-node-cdlfc" Oct 8 19:43:22.991095 kubelet[2660]: I1008 19:43:22.990902 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnt8m\" (UniqueName: \"kubernetes.io/projected/4e11d726-78c3-46dc-bc24-0f457aa9c8ec-kube-api-access-lnt8m\") pod \"calico-node-cdlfc\" (UID: \"4e11d726-78c3-46dc-bc24-0f457aa9c8ec\") " pod="calico-system/calico-node-cdlfc" Oct 8 19:43:22.991095 kubelet[2660]: I1008 19:43:22.991051 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4e11d726-78c3-46dc-bc24-0f457aa9c8ec-policysync\") pod \"calico-node-cdlfc\" (UID: \"4e11d726-78c3-46dc-bc24-0f457aa9c8ec\") " pod="calico-system/calico-node-cdlfc" Oct 8 19:43:22.991095 kubelet[2660]: I1008 19:43:22.991069 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e11d726-78c3-46dc-bc24-0f457aa9c8ec-xtables-lock\") pod \"calico-node-cdlfc\" (UID: \"4e11d726-78c3-46dc-bc24-0f457aa9c8ec\") " pod="calico-system/calico-node-cdlfc" Oct 8 19:43:22.991259 kubelet[2660]: I1008 19:43:22.991084 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4e11d726-78c3-46dc-bc24-0f457aa9c8ec-cni-log-dir\") pod \"calico-node-cdlfc\" (UID: \"4e11d726-78c3-46dc-bc24-0f457aa9c8ec\") " pod="calico-system/calico-node-cdlfc" Oct 8 19:43:23.021964 kubelet[2660]: E1008 19:43:23.020367 2660 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4qp66" podUID="36d90f88-3457-4191-84d5-72e6469f1596" Oct 8 19:43:23.092005 kubelet[2660]: I1008 19:43:23.091860 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/36d90f88-3457-4191-84d5-72e6469f1596-varrun\") pod \"csi-node-driver-4qp66\" (UID: \"36d90f88-3457-4191-84d5-72e6469f1596\") " pod="calico-system/csi-node-driver-4qp66" Oct 8 19:43:23.092868 kubelet[2660]: I1008 19:43:23.092694 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36d90f88-3457-4191-84d5-72e6469f1596-kubelet-dir\") pod \"csi-node-driver-4qp66\" (UID: \"36d90f88-3457-4191-84d5-72e6469f1596\") " pod="calico-system/csi-node-driver-4qp66" Oct 8 19:43:23.092868 kubelet[2660]: I1008 19:43:23.092756 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/36d90f88-3457-4191-84d5-72e6469f1596-socket-dir\") pod \"csi-node-driver-4qp66\" (UID: \"36d90f88-3457-4191-84d5-72e6469f1596\") " pod="calico-system/csi-node-driver-4qp66" Oct 8 19:43:23.092868 kubelet[2660]: I1008 19:43:23.092797 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/36d90f88-3457-4191-84d5-72e6469f1596-registration-dir\") pod \"csi-node-driver-4qp66\" (UID: \"36d90f88-3457-4191-84d5-72e6469f1596\") " pod="calico-system/csi-node-driver-4qp66" Oct 8 19:43:23.094369 kubelet[2660]: I1008 19:43:23.094059 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jdsz9\" (UniqueName: \"kubernetes.io/projected/36d90f88-3457-4191-84d5-72e6469f1596-kube-api-access-jdsz9\") pod \"csi-node-driver-4qp66\" (UID: \"36d90f88-3457-4191-84d5-72e6469f1596\") " pod="calico-system/csi-node-driver-4qp66" Oct 8 19:43:23.103934 containerd[1473]: time="2024-10-08T19:43:23.102949726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76d85db5f9-kl6k2,Uid:bd1b601b-6fc4-4afd-aaaf-25189f0d81e9,Namespace:calico-system,Attempt:0,}" Oct 8 19:43:23.106181 kubelet[2660]: E1008 19:43:23.105098 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.106181 kubelet[2660]: W1008 19:43:23.105212 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.106181 kubelet[2660]: E1008 19:43:23.105833 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.122669 kubelet[2660]: E1008 19:43:23.122584 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.122669 kubelet[2660]: W1008 19:43:23.122608 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.122669 kubelet[2660]: E1008 19:43:23.122629 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.145211 containerd[1473]: time="2024-10-08T19:43:23.145115043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:43:23.145402 containerd[1473]: time="2024-10-08T19:43:23.145380048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:23.145531 containerd[1473]: time="2024-10-08T19:43:23.145476049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:43:23.145640 containerd[1473]: time="2024-10-08T19:43:23.145520330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:23.168911 systemd[1]: Started cri-containerd-7a13dc9a1919ce46c8848876383cb0cfd68b759448efff7495493d5e93706f76.scope - libcontainer container 7a13dc9a1919ce46c8848876383cb0cfd68b759448efff7495493d5e93706f76. Oct 8 19:43:23.191468 containerd[1473]: time="2024-10-08T19:43:23.191383513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cdlfc,Uid:4e11d726-78c3-46dc-bc24-0f457aa9c8ec,Namespace:calico-system,Attempt:0,}" Oct 8 19:43:23.202454 kubelet[2660]: E1008 19:43:23.200841 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.202454 kubelet[2660]: W1008 19:43:23.200867 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.202454 kubelet[2660]: E1008 19:43:23.200886 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:23.202454 kubelet[2660]: E1008 19:43:23.201187 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.202454 kubelet[2660]: W1008 19:43:23.201202 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.202454 kubelet[2660]: E1008 19:43:23.201216 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.202454 kubelet[2660]: E1008 19:43:23.201376 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.202454 kubelet[2660]: W1008 19:43:23.201390 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.202454 kubelet[2660]: E1008 19:43:23.201400 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:23.202454 kubelet[2660]: E1008 19:43:23.201551 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.202770 kubelet[2660]: W1008 19:43:23.201559 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.202770 kubelet[2660]: E1008 19:43:23.201568 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.202770 kubelet[2660]: E1008 19:43:23.201752 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.202770 kubelet[2660]: W1008 19:43:23.201762 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.202770 kubelet[2660]: E1008 19:43:23.201770 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:23.203682 kubelet[2660]: E1008 19:43:23.203459 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.203682 kubelet[2660]: W1008 19:43:23.203479 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.203682 kubelet[2660]: E1008 19:43:23.203496 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.204034 kubelet[2660]: E1008 19:43:23.203857 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.204034 kubelet[2660]: W1008 19:43:23.203872 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.204034 kubelet[2660]: E1008 19:43:23.203892 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:23.204173 kubelet[2660]: E1008 19:43:23.204148 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.204173 kubelet[2660]: W1008 19:43:23.204169 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.206165 kubelet[2660]: E1008 19:43:23.204197 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.206165 kubelet[2660]: E1008 19:43:23.204534 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.206165 kubelet[2660]: W1008 19:43:23.204546 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.206165 kubelet[2660]: E1008 19:43:23.204563 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:23.206712 kubelet[2660]: E1008 19:43:23.206477 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.206712 kubelet[2660]: W1008 19:43:23.206498 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.206712 kubelet[2660]: E1008 19:43:23.206532 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.207482 kubelet[2660]: E1008 19:43:23.207250 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.207482 kubelet[2660]: W1008 19:43:23.207276 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.207482 kubelet[2660]: E1008 19:43:23.207298 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:23.207734 kubelet[2660]: E1008 19:43:23.207701 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.207734 kubelet[2660]: W1008 19:43:23.207720 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.207734 kubelet[2660]: E1008 19:43:23.207735 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.209422 kubelet[2660]: E1008 19:43:23.208983 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.209422 kubelet[2660]: W1008 19:43:23.209001 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.209422 kubelet[2660]: E1008 19:43:23.209106 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:23.209422 kubelet[2660]: E1008 19:43:23.209317 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.209422 kubelet[2660]: W1008 19:43:23.209326 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.209422 kubelet[2660]: E1008 19:43:23.209399 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.210068 kubelet[2660]: E1008 19:43:23.209879 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.210068 kubelet[2660]: W1008 19:43:23.209898 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.210138 kubelet[2660]: E1008 19:43:23.209995 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:23.210325 kubelet[2660]: E1008 19:43:23.210305 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.210373 kubelet[2660]: W1008 19:43:23.210325 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.211310 kubelet[2660]: E1008 19:43:23.210961 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.211310 kubelet[2660]: E1008 19:43:23.211115 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.211310 kubelet[2660]: W1008 19:43:23.211126 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.211310 kubelet[2660]: E1008 19:43:23.211303 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.211310 kubelet[2660]: W1008 19:43:23.211313 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.212577 kubelet[2660]: E1008 19:43:23.211568 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:23.212577 kubelet[2660]: E1008 19:43:23.211612 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.212577 kubelet[2660]: E1008 19:43:23.212033 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.212577 kubelet[2660]: W1008 19:43:23.212061 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.212577 kubelet[2660]: E1008 19:43:23.212150 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.212577 kubelet[2660]: E1008 19:43:23.212289 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.212577 kubelet[2660]: W1008 19:43:23.212299 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.212577 kubelet[2660]: E1008 19:43:23.212372 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:23.214045 kubelet[2660]: E1008 19:43:23.212997 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.214045 kubelet[2660]: W1008 19:43:23.213011 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.214045 kubelet[2660]: E1008 19:43:23.213051 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.214045 kubelet[2660]: E1008 19:43:23.213242 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.214045 kubelet[2660]: W1008 19:43:23.213251 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.214045 kubelet[2660]: E1008 19:43:23.213284 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:23.214333 kubelet[2660]: E1008 19:43:23.214308 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.214333 kubelet[2660]: W1008 19:43:23.214331 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.214405 kubelet[2660]: E1008 19:43:23.214348 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.215118 kubelet[2660]: E1008 19:43:23.215093 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.215118 kubelet[2660]: W1008 19:43:23.215113 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.215118 kubelet[2660]: E1008 19:43:23.215129 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:23.215118 kubelet[2660]: E1008 19:43:23.215409 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.215118 kubelet[2660]: W1008 19:43:23.215423 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.215118 kubelet[2660]: E1008 19:43:23.215437 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.223743 kubelet[2660]: E1008 19:43:23.223717 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.223899 kubelet[2660]: W1008 19:43:23.223884 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.224011 kubelet[2660]: E1008 19:43:23.223986 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.237129 containerd[1473]: time="2024-10-08T19:43:23.236825968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:43:23.237129 containerd[1473]: time="2024-10-08T19:43:23.236977291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:23.237129 containerd[1473]: time="2024-10-08T19:43:23.236994491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:43:23.237129 containerd[1473]: time="2024-10-08T19:43:23.237059932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:23.242059 containerd[1473]: time="2024-10-08T19:43:23.241885059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76d85db5f9-kl6k2,Uid:bd1b601b-6fc4-4afd-aaaf-25189f0d81e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a13dc9a1919ce46c8848876383cb0cfd68b759448efff7495493d5e93706f76\"" Oct 8 19:43:23.244964 containerd[1473]: time="2024-10-08T19:43:23.244799191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 8 19:43:23.263300 systemd[1]: Started cri-containerd-2190afd2b4eb4e22e155d70b4a907bfefc4e1ab2ba367762dd681a3460a58eed.scope - libcontainer container 2190afd2b4eb4e22e155d70b4a907bfefc4e1ab2ba367762dd681a3460a58eed. Oct 8 19:43:23.308745 containerd[1473]: time="2024-10-08T19:43:23.308695057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cdlfc,Uid:4e11d726-78c3-46dc-bc24-0f457aa9c8ec,Namespace:calico-system,Attempt:0,} returns sandbox id \"2190afd2b4eb4e22e155d70b4a907bfefc4e1ab2ba367762dd681a3460a58eed\"" Oct 8 19:43:23.974853 kubelet[2660]: E1008 19:43:23.974810 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.974853 kubelet[2660]: W1008 19:43:23.974836 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.975388 kubelet[2660]: E1008 19:43:23.974869 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:23.975388 kubelet[2660]: E1008 19:43:23.975116 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.975388 kubelet[2660]: W1008 19:43:23.975126 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.975388 kubelet[2660]: E1008 19:43:23.975137 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.975388 kubelet[2660]: E1008 19:43:23.975319 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.975388 kubelet[2660]: W1008 19:43:23.975328 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.975388 kubelet[2660]: E1008 19:43:23.975338 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:23.975811 kubelet[2660]: E1008 19:43:23.975688 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.975811 kubelet[2660]: W1008 19:43:23.975700 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.975811 kubelet[2660]: E1008 19:43:23.975713 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.976056 kubelet[2660]: E1008 19:43:23.975991 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.976056 kubelet[2660]: W1008 19:43:23.976002 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.976056 kubelet[2660]: E1008 19:43:23.976013 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:23.976404 kubelet[2660]: E1008 19:43:23.976377 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.976404 kubelet[2660]: W1008 19:43:23.976396 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.977176 kubelet[2660]: E1008 19:43:23.976409 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.977176 kubelet[2660]: E1008 19:43:23.976621 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.977176 kubelet[2660]: W1008 19:43:23.976630 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.977176 kubelet[2660]: E1008 19:43:23.976640 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:23.977176 kubelet[2660]: E1008 19:43:23.976822 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.977176 kubelet[2660]: W1008 19:43:23.976840 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.977176 kubelet[2660]: E1008 19:43:23.976850 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.977176 kubelet[2660]: E1008 19:43:23.977047 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.977176 kubelet[2660]: W1008 19:43:23.977055 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.977176 kubelet[2660]: E1008 19:43:23.977066 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:23.977859 kubelet[2660]: E1008 19:43:23.977254 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.977859 kubelet[2660]: W1008 19:43:23.977263 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.977859 kubelet[2660]: E1008 19:43:23.977272 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.977859 kubelet[2660]: E1008 19:43:23.977641 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.977859 kubelet[2660]: W1008 19:43:23.977662 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.977859 kubelet[2660]: E1008 19:43:23.977674 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:23.978868 kubelet[2660]: E1008 19:43:23.978349 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.978868 kubelet[2660]: W1008 19:43:23.978382 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.978868 kubelet[2660]: E1008 19:43:23.978394 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.978868 kubelet[2660]: E1008 19:43:23.978605 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.978868 kubelet[2660]: W1008 19:43:23.978613 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.978868 kubelet[2660]: E1008 19:43:23.978624 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:23.978868 kubelet[2660]: E1008 19:43:23.978835 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.978868 kubelet[2660]: W1008 19:43:23.978858 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.978868 kubelet[2660]: E1008 19:43:23.978868 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:23.979930 kubelet[2660]: E1008 19:43:23.979166 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:23.979930 kubelet[2660]: W1008 19:43:23.979176 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:23.979930 kubelet[2660]: E1008 19:43:23.979187 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:24.716191 kubelet[2660]: E1008 19:43:24.716106 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4qp66" podUID="36d90f88-3457-4191-84d5-72e6469f1596" Oct 8 19:43:25.655049 containerd[1473]: time="2024-10-08T19:43:25.654986198Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:25.656090 containerd[1473]: time="2024-10-08T19:43:25.655866895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Oct 8 19:43:25.657057 containerd[1473]: time="2024-10-08T19:43:25.657010316Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:25.661318 containerd[1473]: time="2024-10-08T19:43:25.660442900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:25.661762 containerd[1473]: time="2024-10-08T19:43:25.661532680Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 2.416693489s" Oct 8 19:43:25.661762 containerd[1473]: time="2024-10-08T19:43:25.661593121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference 
\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Oct 8 19:43:25.665162 containerd[1473]: time="2024-10-08T19:43:25.664993705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 8 19:43:25.683426 containerd[1473]: time="2024-10-08T19:43:25.683380567Z" level=info msg="CreateContainer within sandbox \"7a13dc9a1919ce46c8848876383cb0cfd68b759448efff7495493d5e93706f76\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 8 19:43:25.709139 containerd[1473]: time="2024-10-08T19:43:25.708994045Z" level=info msg="CreateContainer within sandbox \"7a13dc9a1919ce46c8848876383cb0cfd68b759448efff7495493d5e93706f76\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d8a9251550fae6f0d4bc533bf5eab9278f61fc71d5ba420c0fef50e0688fc455\"" Oct 8 19:43:25.710779 containerd[1473]: time="2024-10-08T19:43:25.709765179Z" level=info msg="StartContainer for \"d8a9251550fae6f0d4bc533bf5eab9278f61fc71d5ba420c0fef50e0688fc455\"" Oct 8 19:43:25.748161 systemd[1]: Started cri-containerd-d8a9251550fae6f0d4bc533bf5eab9278f61fc71d5ba420c0fef50e0688fc455.scope - libcontainer container d8a9251550fae6f0d4bc533bf5eab9278f61fc71d5ba420c0fef50e0688fc455. 
Oct 8 19:43:25.792045 containerd[1473]: time="2024-10-08T19:43:25.792002592Z" level=info msg="StartContainer for \"d8a9251550fae6f0d4bc533bf5eab9278f61fc71d5ba420c0fef50e0688fc455\" returns successfully" Oct 8 19:43:25.894444 kubelet[2660]: E1008 19:43:25.894408 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.894444 kubelet[2660]: W1008 19:43:25.894436 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.894840 kubelet[2660]: E1008 19:43:25.894459 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:25.894840 kubelet[2660]: E1008 19:43:25.894598 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.894840 kubelet[2660]: W1008 19:43:25.894605 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.894840 kubelet[2660]: E1008 19:43:25.894613 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:25.894840 kubelet[2660]: E1008 19:43:25.894725 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.894840 kubelet[2660]: W1008 19:43:25.894732 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.894840 kubelet[2660]: E1008 19:43:25.894739 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:25.895154 kubelet[2660]: E1008 19:43:25.895084 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.895154 kubelet[2660]: W1008 19:43:25.895094 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.895154 kubelet[2660]: E1008 19:43:25.895105 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:25.895976 kubelet[2660]: E1008 19:43:25.895329 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.895976 kubelet[2660]: W1008 19:43:25.895339 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.895976 kubelet[2660]: E1008 19:43:25.895349 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:25.895976 kubelet[2660]: E1008 19:43:25.895566 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.895976 kubelet[2660]: W1008 19:43:25.895575 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.895976 kubelet[2660]: E1008 19:43:25.895587 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:25.895976 kubelet[2660]: E1008 19:43:25.895761 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.895976 kubelet[2660]: W1008 19:43:25.895782 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.895976 kubelet[2660]: E1008 19:43:25.895793 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:25.897337 kubelet[2660]: E1008 19:43:25.895993 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.897337 kubelet[2660]: W1008 19:43:25.896002 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.897337 kubelet[2660]: E1008 19:43:25.896011 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:25.897337 kubelet[2660]: E1008 19:43:25.896309 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.897337 kubelet[2660]: W1008 19:43:25.896319 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.897337 kubelet[2660]: E1008 19:43:25.896382 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:25.897337 kubelet[2660]: E1008 19:43:25.896607 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.897337 kubelet[2660]: W1008 19:43:25.896619 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.897337 kubelet[2660]: E1008 19:43:25.896631 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:25.897337 kubelet[2660]: E1008 19:43:25.896839 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.897550 kubelet[2660]: W1008 19:43:25.896851 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.897550 kubelet[2660]: E1008 19:43:25.896862 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:25.897550 kubelet[2660]: E1008 19:43:25.897161 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.897550 kubelet[2660]: W1008 19:43:25.897268 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.897550 kubelet[2660]: E1008 19:43:25.897278 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:25.898251 kubelet[2660]: E1008 19:43:25.897796 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.898251 kubelet[2660]: W1008 19:43:25.897815 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.898251 kubelet[2660]: E1008 19:43:25.897827 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:25.899048 kubelet[2660]: E1008 19:43:25.899030 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.899048 kubelet[2660]: W1008 19:43:25.899044 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.899123 kubelet[2660]: E1008 19:43:25.899056 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:25.899583 kubelet[2660]: E1008 19:43:25.899264 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.899583 kubelet[2660]: W1008 19:43:25.899280 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.899583 kubelet[2660]: E1008 19:43:25.899290 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:25.927133 kubelet[2660]: E1008 19:43:25.926993 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.927133 kubelet[2660]: W1008 19:43:25.927031 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.927133 kubelet[2660]: E1008 19:43:25.927056 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:25.928989 kubelet[2660]: E1008 19:43:25.928920 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.928989 kubelet[2660]: W1008 19:43:25.928982 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.929651 kubelet[2660]: E1008 19:43:25.929267 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.929651 kubelet[2660]: W1008 19:43:25.929277 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.929651 kubelet[2660]: E1008 19:43:25.929290 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:25.929651 kubelet[2660]: E1008 19:43:25.929383 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:25.929651 kubelet[2660]: E1008 19:43:25.929637 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.929651 kubelet[2660]: W1008 19:43:25.929646 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.929793 kubelet[2660]: E1008 19:43:25.929661 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:25.930213 kubelet[2660]: E1008 19:43:25.929893 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.930213 kubelet[2660]: W1008 19:43:25.929908 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.930213 kubelet[2660]: E1008 19:43:25.929960 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:25.930213 kubelet[2660]: E1008 19:43:25.930208 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.930213 kubelet[2660]: W1008 19:43:25.930221 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.930422 kubelet[2660]: E1008 19:43:25.930240 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:25.930778 kubelet[2660]: E1008 19:43:25.930759 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.930778 kubelet[2660]: W1008 19:43:25.930774 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.931833 kubelet[2660]: E1008 19:43:25.930790 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:25.932063 kubelet[2660]: E1008 19:43:25.932015 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.932063 kubelet[2660]: W1008 19:43:25.932033 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.932176 kubelet[2660]: E1008 19:43:25.932144 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:25.932469 kubelet[2660]: E1008 19:43:25.932440 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.932469 kubelet[2660]: W1008 19:43:25.932457 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.932469 kubelet[2660]: E1008 19:43:25.932474 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:25.932734 kubelet[2660]: E1008 19:43:25.932712 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.932734 kubelet[2660]: W1008 19:43:25.932726 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.932734 kubelet[2660]: E1008 19:43:25.932740 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:25.932989 kubelet[2660]: E1008 19:43:25.932970 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.932989 kubelet[2660]: W1008 19:43:25.932984 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.933170 kubelet[2660]: E1008 19:43:25.932999 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:25.934947 kubelet[2660]: E1008 19:43:25.933521 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.934947 kubelet[2660]: W1008 19:43:25.933540 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.934947 kubelet[2660]: E1008 19:43:25.933571 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:25.934947 kubelet[2660]: E1008 19:43:25.933842 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.934947 kubelet[2660]: W1008 19:43:25.933883 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.934947 kubelet[2660]: E1008 19:43:25.933905 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:25.934947 kubelet[2660]: E1008 19:43:25.934379 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.934947 kubelet[2660]: W1008 19:43:25.934392 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.934947 kubelet[2660]: E1008 19:43:25.934409 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:25.935301 kubelet[2660]: E1008 19:43:25.935276 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.935339 kubelet[2660]: W1008 19:43:25.935302 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.935339 kubelet[2660]: E1008 19:43:25.935320 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:25.935538 kubelet[2660]: E1008 19:43:25.935521 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.935538 kubelet[2660]: W1008 19:43:25.935535 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.935538 kubelet[2660]: E1008 19:43:25.935546 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:25.936113 kubelet[2660]: E1008 19:43:25.935768 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.936113 kubelet[2660]: W1008 19:43:25.935785 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.936113 kubelet[2660]: E1008 19:43:25.935795 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:25.936627 kubelet[2660]: E1008 19:43:25.936517 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:25.936627 kubelet[2660]: W1008 19:43:25.936538 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:25.936627 kubelet[2660]: E1008 19:43:25.936552 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:26.716040 kubelet[2660]: E1008 19:43:26.715962 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4qp66" podUID="36d90f88-3457-4191-84d5-72e6469f1596" Oct 8 19:43:26.828474 kubelet[2660]: I1008 19:43:26.828402 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76d85db5f9-kl6k2" podStartSLOduration=2.408868563 podStartE2EDuration="4.828385503s" podCreationTimestamp="2024-10-08 19:43:22 +0000 UTC" firstStartedPulling="2024-10-08 19:43:23.243951736 +0000 UTC m=+13.655881909" lastFinishedPulling="2024-10-08 19:43:25.663468676 +0000 UTC m=+16.075398849" observedRunningTime="2024-10-08 19:43:25.817548748 +0000 UTC m=+16.229478921" watchObservedRunningTime="2024-10-08 19:43:26.828385503 +0000 UTC m=+17.240315756" Oct 8 19:43:26.906316 kubelet[2660]: E1008 19:43:26.905783 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.906890 kubelet[2660]: W1008 19:43:26.906724 2660 driver-call.go:149] FlexVolume: driver call 
failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.906890 kubelet[2660]: E1008 19:43:26.906759 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:26.907724 kubelet[2660]: E1008 19:43:26.907520 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.907724 kubelet[2660]: W1008 19:43:26.907544 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.907724 kubelet[2660]: E1008 19:43:26.907559 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:26.907884 kubelet[2660]: E1008 19:43:26.907853 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.908119 kubelet[2660]: W1008 19:43:26.908012 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.908205 kubelet[2660]: E1008 19:43:26.908192 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:26.908771 kubelet[2660]: E1008 19:43:26.908651 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.908771 kubelet[2660]: W1008 19:43:26.908665 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.909130 kubelet[2660]: E1008 19:43:26.908678 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:26.909529 kubelet[2660]: E1008 19:43:26.909432 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.909756 kubelet[2660]: W1008 19:43:26.909602 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.909756 kubelet[2660]: E1008 19:43:26.909620 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:26.910303 kubelet[2660]: E1008 19:43:26.910203 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.910655 kubelet[2660]: W1008 19:43:26.910526 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.910655 kubelet[2660]: E1008 19:43:26.910549 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:26.910800 kubelet[2660]: E1008 19:43:26.910788 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.911299 kubelet[2660]: W1008 19:43:26.911089 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.911299 kubelet[2660]: E1008 19:43:26.911110 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:26.911948 kubelet[2660]: E1008 19:43:26.911925 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.912248 kubelet[2660]: W1008 19:43:26.912028 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.912248 kubelet[2660]: E1008 19:43:26.912048 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:26.912655 kubelet[2660]: E1008 19:43:26.912638 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.912823 kubelet[2660]: W1008 19:43:26.912718 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.912823 kubelet[2660]: E1008 19:43:26.912735 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:26.913197 kubelet[2660]: E1008 19:43:26.913113 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.913632 kubelet[2660]: W1008 19:43:26.913280 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.913731 kubelet[2660]: E1008 19:43:26.913714 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:26.914129 kubelet[2660]: E1008 19:43:26.914013 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.914129 kubelet[2660]: W1008 19:43:26.914027 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.914129 kubelet[2660]: E1008 19:43:26.914038 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:26.914392 kubelet[2660]: E1008 19:43:26.914366 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.914462 kubelet[2660]: W1008 19:43:26.914450 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.914526 kubelet[2660]: E1008 19:43:26.914515 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:26.914845 kubelet[2660]: E1008 19:43:26.914752 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.914845 kubelet[2660]: W1008 19:43:26.914763 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.914845 kubelet[2660]: E1008 19:43:26.914775 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:26.915049 kubelet[2660]: E1008 19:43:26.915037 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.915107 kubelet[2660]: W1008 19:43:26.915096 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.915167 kubelet[2660]: E1008 19:43:26.915157 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:26.915470 kubelet[2660]: E1008 19:43:26.915378 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.915470 kubelet[2660]: W1008 19:43:26.915389 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.915470 kubelet[2660]: E1008 19:43:26.915399 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:26.937180 kubelet[2660]: E1008 19:43:26.937137 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.937180 kubelet[2660]: W1008 19:43:26.937170 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.937639 kubelet[2660]: E1008 19:43:26.937200 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:26.937639 kubelet[2660]: E1008 19:43:26.937546 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.937639 kubelet[2660]: W1008 19:43:26.937562 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.937639 kubelet[2660]: E1008 19:43:26.937579 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:26.937845 kubelet[2660]: E1008 19:43:26.937831 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.937897 kubelet[2660]: W1008 19:43:26.937845 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.937897 kubelet[2660]: E1008 19:43:26.937861 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:26.939112 kubelet[2660]: E1008 19:43:26.938244 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.939112 kubelet[2660]: W1008 19:43:26.938297 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.939112 kubelet[2660]: E1008 19:43:26.938318 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:26.939112 kubelet[2660]: E1008 19:43:26.938830 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.939112 kubelet[2660]: W1008 19:43:26.938849 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.939112 kubelet[2660]: E1008 19:43:26.938873 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:26.939580 kubelet[2660]: E1008 19:43:26.939559 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.939816 kubelet[2660]: W1008 19:43:26.939662 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.939816 kubelet[2660]: E1008 19:43:26.939689 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:26.940554 kubelet[2660]: E1008 19:43:26.940206 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.940554 kubelet[2660]: W1008 19:43:26.940227 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.940554 kubelet[2660]: E1008 19:43:26.940245 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:26.941417 kubelet[2660]: E1008 19:43:26.941268 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.941417 kubelet[2660]: W1008 19:43:26.941291 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.941660 kubelet[2660]: E1008 19:43:26.941573 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:26.941850 kubelet[2660]: E1008 19:43:26.941830 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.942005 kubelet[2660]: W1008 19:43:26.941985 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.942246 kubelet[2660]: E1008 19:43:26.942225 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:26.942661 kubelet[2660]: E1008 19:43:26.942607 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.942661 kubelet[2660]: W1008 19:43:26.942628 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.943066 kubelet[2660]: E1008 19:43:26.943016 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:26.943409 kubelet[2660]: E1008 19:43:26.943224 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.943409 kubelet[2660]: W1008 19:43:26.943241 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.943409 kubelet[2660]: E1008 19:43:26.943271 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:26.943682 kubelet[2660]: E1008 19:43:26.943665 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.943905 kubelet[2660]: W1008 19:43:26.943765 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.943905 kubelet[2660]: E1008 19:43:26.943801 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:26.944583 kubelet[2660]: E1008 19:43:26.944426 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.944583 kubelet[2660]: W1008 19:43:26.944462 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.944583 kubelet[2660]: E1008 19:43:26.944496 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:26.945005 kubelet[2660]: E1008 19:43:26.944821 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.945005 kubelet[2660]: W1008 19:43:26.944849 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.945005 kubelet[2660]: E1008 19:43:26.944891 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:26.945342 kubelet[2660]: E1008 19:43:26.945222 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.945342 kubelet[2660]: W1008 19:43:26.945237 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.945342 kubelet[2660]: E1008 19:43:26.945253 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:26.945909 kubelet[2660]: E1008 19:43:26.945588 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.945909 kubelet[2660]: W1008 19:43:26.945610 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.945909 kubelet[2660]: E1008 19:43:26.945627 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:26.946259 kubelet[2660]: E1008 19:43:26.946233 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.946383 kubelet[2660]: W1008 19:43:26.946349 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.946674 kubelet[2660]: E1008 19:43:26.946475 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:26.946897 kubelet[2660]: E1008 19:43:26.946821 2660 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:26.946897 kubelet[2660]: W1008 19:43:26.946841 2660 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:26.946897 kubelet[2660]: E1008 19:43:26.946860 2660 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:27.296858 containerd[1473]: time="2024-10-08T19:43:27.296130872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:27.297762 containerd[1473]: time="2024-10-08T19:43:27.297716862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Oct 8 19:43:27.299084 containerd[1473]: time="2024-10-08T19:43:27.299020088Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:27.307938 containerd[1473]: time="2024-10-08T19:43:27.307467451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:27.307938 containerd[1473]: time="2024-10-08T19:43:27.307878899Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.64271119s" Oct 8 19:43:27.308805 containerd[1473]: time="2024-10-08T19:43:27.307905899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Oct 8 19:43:27.315910 containerd[1473]: time="2024-10-08T19:43:27.315865333Z" level=info msg="CreateContainer within sandbox \"2190afd2b4eb4e22e155d70b4a907bfefc4e1ab2ba367762dd681a3460a58eed\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 19:43:27.344426 containerd[1473]: time="2024-10-08T19:43:27.344320842Z" level=info msg="CreateContainer within sandbox \"2190afd2b4eb4e22e155d70b4a907bfefc4e1ab2ba367762dd681a3460a58eed\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"aefddeb490420126dbe87d1ff2a4b3c398593bb4a3c0d6b4db2a57c60f8e62a8\"" Oct 8 19:43:27.345092 containerd[1473]: time="2024-10-08T19:43:27.344994295Z" level=info msg="StartContainer for \"aefddeb490420126dbe87d1ff2a4b3c398593bb4a3c0d6b4db2a57c60f8e62a8\"" Oct 8 19:43:27.385125 systemd[1]: Started cri-containerd-aefddeb490420126dbe87d1ff2a4b3c398593bb4a3c0d6b4db2a57c60f8e62a8.scope - libcontainer container aefddeb490420126dbe87d1ff2a4b3c398593bb4a3c0d6b4db2a57c60f8e62a8. Oct 8 19:43:27.421490 containerd[1473]: time="2024-10-08T19:43:27.421368128Z" level=info msg="StartContainer for \"aefddeb490420126dbe87d1ff2a4b3c398593bb4a3c0d6b4db2a57c60f8e62a8\" returns successfully" Oct 8 19:43:27.462432 systemd[1]: cri-containerd-aefddeb490420126dbe87d1ff2a4b3c398593bb4a3c0d6b4db2a57c60f8e62a8.scope: Deactivated successfully. Oct 8 19:43:27.498400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aefddeb490420126dbe87d1ff2a4b3c398593bb4a3c0d6b4db2a57c60f8e62a8-rootfs.mount: Deactivated successfully. 
Oct 8 19:43:27.648904 containerd[1473]: time="2024-10-08T19:43:27.648740435Z" level=info msg="shim disconnected" id=aefddeb490420126dbe87d1ff2a4b3c398593bb4a3c0d6b4db2a57c60f8e62a8 namespace=k8s.io Oct 8 19:43:27.648904 containerd[1473]: time="2024-10-08T19:43:27.648807836Z" level=warning msg="cleaning up after shim disconnected" id=aefddeb490420126dbe87d1ff2a4b3c398593bb4a3c0d6b4db2a57c60f8e62a8 namespace=k8s.io Oct 8 19:43:27.648904 containerd[1473]: time="2024-10-08T19:43:27.648823197Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:43:27.815263 containerd[1473]: time="2024-10-08T19:43:27.814098145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 8 19:43:28.716195 kubelet[2660]: E1008 19:43:28.716079 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4qp66" podUID="36d90f88-3457-4191-84d5-72e6469f1596" Oct 8 19:43:30.715454 kubelet[2660]: E1008 19:43:30.715330 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4qp66" podUID="36d90f88-3457-4191-84d5-72e6469f1596" Oct 8 19:43:32.282058 containerd[1473]: time="2024-10-08T19:43:32.281992269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:32.282951 containerd[1473]: time="2024-10-08T19:43:32.282910809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Oct 8 19:43:32.284297 containerd[1473]: time="2024-10-08T19:43:32.284264717Z" level=info msg="ImageCreate event 
name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:32.287454 containerd[1473]: time="2024-10-08T19:43:32.287416342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:32.289214 containerd[1473]: time="2024-10-08T19:43:32.289157138Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 4.475001672s" Oct 8 19:43:32.289214 containerd[1473]: time="2024-10-08T19:43:32.289206259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Oct 8 19:43:32.293175 containerd[1473]: time="2024-10-08T19:43:32.293139021Z" level=info msg="CreateContainer within sandbox \"2190afd2b4eb4e22e155d70b4a907bfefc4e1ab2ba367762dd681a3460a58eed\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 8 19:43:32.310874 containerd[1473]: time="2024-10-08T19:43:32.310772907Z" level=info msg="CreateContainer within sandbox \"2190afd2b4eb4e22e155d70b4a907bfefc4e1ab2ba367762dd681a3460a58eed\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"71f2904659ece0f0eb75e6f063b269411e1c685c7d36e39259dda50cee148320\"" Oct 8 19:43:32.313984 containerd[1473]: time="2024-10-08T19:43:32.312187496Z" level=info msg="StartContainer for \"71f2904659ece0f0eb75e6f063b269411e1c685c7d36e39259dda50cee148320\"" Oct 8 19:43:32.345090 systemd[1]: Started 
cri-containerd-71f2904659ece0f0eb75e6f063b269411e1c685c7d36e39259dda50cee148320.scope - libcontainer container 71f2904659ece0f0eb75e6f063b269411e1c685c7d36e39259dda50cee148320. Oct 8 19:43:32.381725 containerd[1473]: time="2024-10-08T19:43:32.381629938Z" level=info msg="StartContainer for \"71f2904659ece0f0eb75e6f063b269411e1c685c7d36e39259dda50cee148320\" returns successfully" Oct 8 19:43:32.715272 kubelet[2660]: E1008 19:43:32.715215 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4qp66" podUID="36d90f88-3457-4191-84d5-72e6469f1596" Oct 8 19:43:32.975966 containerd[1473]: time="2024-10-08T19:43:32.975769914Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 19:43:32.980672 systemd[1]: cri-containerd-71f2904659ece0f0eb75e6f063b269411e1c685c7d36e39259dda50cee148320.scope: Deactivated successfully. Oct 8 19:43:32.995127 kubelet[2660]: I1008 19:43:32.995090 2660 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Oct 8 19:43:33.029299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71f2904659ece0f0eb75e6f063b269411e1c685c7d36e39259dda50cee148320-rootfs.mount: Deactivated successfully. Oct 8 19:43:33.053210 systemd[1]: Created slice kubepods-burstable-poda9aae756_4b47_473b_9e46_2501e5f0f460.slice - libcontainer container kubepods-burstable-poda9aae756_4b47_473b_9e46_2501e5f0f460.slice. Oct 8 19:43:33.068252 systemd[1]: Created slice kubepods-besteffort-pod554c64d3_a559_43d2_9d88_70455c3e4442.slice - libcontainer container kubepods-besteffort-pod554c64d3_a559_43d2_9d88_70455c3e4442.slice. 
Oct 8 19:43:33.079672 kubelet[2660]: I1008 19:43:33.079645 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a9aae756-4b47-473b-9e46-2501e5f0f460-config-volume\") pod \"coredns-6f6b679f8f-pj827\" (UID: \"a9aae756-4b47-473b-9e46-2501e5f0f460\") " pod="kube-system/coredns-6f6b679f8f-pj827" Oct 8 19:43:33.080006 kubelet[2660]: I1008 19:43:33.079871 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ce72720-be2a-4924-bded-181fc38d374d-config-volume\") pod \"coredns-6f6b679f8f-drcv7\" (UID: \"0ce72720-be2a-4924-bded-181fc38d374d\") " pod="kube-system/coredns-6f6b679f8f-drcv7" Oct 8 19:43:33.080006 kubelet[2660]: I1008 19:43:33.079899 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bthpc\" (UniqueName: \"kubernetes.io/projected/a9aae756-4b47-473b-9e46-2501e5f0f460-kube-api-access-bthpc\") pod \"coredns-6f6b679f8f-pj827\" (UID: \"a9aae756-4b47-473b-9e46-2501e5f0f460\") " pod="kube-system/coredns-6f6b679f8f-pj827" Oct 8 19:43:33.080357 systemd[1]: Created slice kubepods-burstable-pod0ce72720_be2a_4924_bded_181fc38d374d.slice - libcontainer container kubepods-burstable-pod0ce72720_be2a_4924_bded_181fc38d374d.slice. 
Oct 8 19:43:33.081088 kubelet[2660]: I1008 19:43:33.080511 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlh88\" (UniqueName: \"kubernetes.io/projected/554c64d3-a559-43d2-9d88-70455c3e4442-kube-api-access-xlh88\") pod \"calico-kube-controllers-7569f59856-kdnqq\" (UID: \"554c64d3-a559-43d2-9d88-70455c3e4442\") " pod="calico-system/calico-kube-controllers-7569f59856-kdnqq" Oct 8 19:43:33.081088 kubelet[2660]: I1008 19:43:33.080545 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wr9w\" (UniqueName: \"kubernetes.io/projected/0ce72720-be2a-4924-bded-181fc38d374d-kube-api-access-4wr9w\") pod \"coredns-6f6b679f8f-drcv7\" (UID: \"0ce72720-be2a-4924-bded-181fc38d374d\") " pod="kube-system/coredns-6f6b679f8f-drcv7" Oct 8 19:43:33.081470 kubelet[2660]: I1008 19:43:33.081221 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/554c64d3-a559-43d2-9d88-70455c3e4442-tigera-ca-bundle\") pod \"calico-kube-controllers-7569f59856-kdnqq\" (UID: \"554c64d3-a559-43d2-9d88-70455c3e4442\") " pod="calico-system/calico-kube-controllers-7569f59856-kdnqq" Oct 8 19:43:33.136906 containerd[1473]: time="2024-10-08T19:43:33.136803814Z" level=info msg="shim disconnected" id=71f2904659ece0f0eb75e6f063b269411e1c685c7d36e39259dda50cee148320 namespace=k8s.io Oct 8 19:43:33.136906 containerd[1473]: time="2024-10-08T19:43:33.136899856Z" level=warning msg="cleaning up after shim disconnected" id=71f2904659ece0f0eb75e6f063b269411e1c685c7d36e39259dda50cee148320 namespace=k8s.io Oct 8 19:43:33.137135 containerd[1473]: time="2024-10-08T19:43:33.136957497Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:43:33.376991 containerd[1473]: time="2024-10-08T19:43:33.376325971Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-pj827,Uid:a9aae756-4b47-473b-9e46-2501e5f0f460,Namespace:kube-system,Attempt:0,}" Oct 8 19:43:33.379040 containerd[1473]: time="2024-10-08T19:43:33.377539996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7569f59856-kdnqq,Uid:554c64d3-a559-43d2-9d88-70455c3e4442,Namespace:calico-system,Attempt:0,}" Oct 8 19:43:33.390467 containerd[1473]: time="2024-10-08T19:43:33.390226543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-drcv7,Uid:0ce72720-be2a-4924-bded-181fc38d374d,Namespace:kube-system,Attempt:0,}" Oct 8 19:43:33.585332 containerd[1473]: time="2024-10-08T19:43:33.585093561Z" level=error msg="Failed to destroy network for sandbox \"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:33.585829 containerd[1473]: time="2024-10-08T19:43:33.585697093Z" level=error msg="encountered an error cleaning up failed sandbox \"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:33.585882 containerd[1473]: time="2024-10-08T19:43:33.585789575Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7569f59856-kdnqq,Uid:554c64d3-a559-43d2-9d88-70455c3e4442,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 8 19:43:33.586144 kubelet[2660]: E1008 19:43:33.586103 2660 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:33.586223 kubelet[2660]: E1008 19:43:33.586183 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7569f59856-kdnqq" Oct 8 19:43:33.586223 kubelet[2660]: E1008 19:43:33.586203 2660 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7569f59856-kdnqq" Oct 8 19:43:33.586393 kubelet[2660]: E1008 19:43:33.586257 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7569f59856-kdnqq_calico-system(554c64d3-a559-43d2-9d88-70455c3e4442)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7569f59856-kdnqq_calico-system(554c64d3-a559-43d2-9d88-70455c3e4442)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7569f59856-kdnqq" podUID="554c64d3-a559-43d2-9d88-70455c3e4442" Oct 8 19:43:33.598303 containerd[1473]: time="2024-10-08T19:43:33.598175916Z" level=error msg="Failed to destroy network for sandbox \"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:33.598781 containerd[1473]: time="2024-10-08T19:43:33.598631165Z" level=error msg="encountered an error cleaning up failed sandbox \"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:33.598781 containerd[1473]: time="2024-10-08T19:43:33.598689087Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-drcv7,Uid:0ce72720-be2a-4924-bded-181fc38d374d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:33.598985 kubelet[2660]: E1008 19:43:33.598941 2660 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:33.599041 kubelet[2660]: E1008 19:43:33.599003 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-drcv7" Oct 8 19:43:33.599041 kubelet[2660]: E1008 19:43:33.599022 2660 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-drcv7" Oct 8 19:43:33.599192 kubelet[2660]: E1008 19:43:33.599060 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-drcv7_kube-system(0ce72720-be2a-4924-bded-181fc38d374d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-drcv7_kube-system(0ce72720-be2a-4924-bded-181fc38d374d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-drcv7" 
podUID="0ce72720-be2a-4924-bded-181fc38d374d" Oct 8 19:43:33.599878 containerd[1473]: time="2024-10-08T19:43:33.599819710Z" level=error msg="Failed to destroy network for sandbox \"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:33.600960 containerd[1473]: time="2024-10-08T19:43:33.600246879Z" level=error msg="encountered an error cleaning up failed sandbox \"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:33.600960 containerd[1473]: time="2024-10-08T19:43:33.600857212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pj827,Uid:a9aae756-4b47-473b-9e46-2501e5f0f460,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:33.601191 kubelet[2660]: E1008 19:43:33.601157 2660 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:33.601297 kubelet[2660]: E1008 19:43:33.601209 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-pj827" Oct 8 19:43:33.601297 kubelet[2660]: E1008 19:43:33.601229 2660 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-pj827" Oct 8 19:43:33.601297 kubelet[2660]: E1008 19:43:33.601260 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-pj827_kube-system(a9aae756-4b47-473b-9e46-2501e5f0f460)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-pj827_kube-system(a9aae756-4b47-473b-9e46-2501e5f0f460)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-pj827" podUID="a9aae756-4b47-473b-9e46-2501e5f0f460" Oct 8 19:43:33.828962 kubelet[2660]: I1008 19:43:33.828893 2660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Oct 8 19:43:33.830689 containerd[1473]: time="2024-10-08T19:43:33.830327918Z" level=info msg="StopPodSandbox for 
\"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\"" Oct 8 19:43:33.831648 containerd[1473]: time="2024-10-08T19:43:33.831318219Z" level=info msg="Ensure that sandbox 4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48 in task-service has been cleanup successfully" Oct 8 19:43:33.833214 kubelet[2660]: I1008 19:43:33.832828 2660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" Oct 8 19:43:33.838721 containerd[1473]: time="2024-10-08T19:43:33.838680094Z" level=info msg="StopPodSandbox for \"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\"" Oct 8 19:43:33.839937 containerd[1473]: time="2024-10-08T19:43:33.839669754Z" level=info msg="Ensure that sandbox b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300 in task-service has been cleanup successfully" Oct 8 19:43:33.849661 kubelet[2660]: I1008 19:43:33.849396 2660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" Oct 8 19:43:33.850278 containerd[1473]: time="2024-10-08T19:43:33.849485081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 8 19:43:33.855493 containerd[1473]: time="2024-10-08T19:43:33.854658150Z" level=info msg="StopPodSandbox for \"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\"" Oct 8 19:43:33.856633 containerd[1473]: time="2024-10-08T19:43:33.856313984Z" level=info msg="Ensure that sandbox 95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc in task-service has been cleanup successfully" Oct 8 19:43:33.901850 containerd[1473]: time="2024-10-08T19:43:33.901670498Z" level=error msg="StopPodSandbox for \"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\" failed" error="failed to destroy network for sandbox \"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:33.902263 kubelet[2660]: E1008 19:43:33.902097 2660 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" Oct 8 19:43:33.902469 kubelet[2660]: E1008 19:43:33.902376 2660 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300"} Oct 8 19:43:33.902785 kubelet[2660]: E1008 19:43:33.902588 2660 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a9aae756-4b47-473b-9e46-2501e5f0f460\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:43:33.902785 kubelet[2660]: E1008 19:43:33.902616 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a9aae756-4b47-473b-9e46-2501e5f0f460\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-pj827" podUID="a9aae756-4b47-473b-9e46-2501e5f0f460" Oct 8 19:43:33.907952 containerd[1473]: time="2024-10-08T19:43:33.907878549Z" level=error msg="StopPodSandbox for \"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\" failed" error="failed to destroy network for sandbox \"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:33.908334 kubelet[2660]: E1008 19:43:33.908296 2660 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Oct 8 19:43:33.908388 kubelet[2660]: E1008 19:43:33.908348 2660 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48"} Oct 8 19:43:33.908388 kubelet[2660]: E1008 19:43:33.908380 2660 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ce72720-be2a-4924-bded-181fc38d374d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:43:33.908508 kubelet[2660]: E1008 19:43:33.908442 2660 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0ce72720-be2a-4924-bded-181fc38d374d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-drcv7" podUID="0ce72720-be2a-4924-bded-181fc38d374d" Oct 8 19:43:33.914780 containerd[1473]: time="2024-10-08T19:43:33.914718853Z" level=error msg="StopPodSandbox for \"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\" failed" error="failed to destroy network for sandbox \"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:33.915007 kubelet[2660]: E1008 19:43:33.914964 2660 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" Oct 8 19:43:33.915092 kubelet[2660]: E1008 19:43:33.915046 2660 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc"} Oct 8 19:43:33.915124 kubelet[2660]: E1008 19:43:33.915093 2660 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"554c64d3-a559-43d2-9d88-70455c3e4442\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:43:33.915192 kubelet[2660]: E1008 19:43:33.915115 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"554c64d3-a559-43d2-9d88-70455c3e4442\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7569f59856-kdnqq" podUID="554c64d3-a559-43d2-9d88-70455c3e4442" Oct 8 19:43:34.309543 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48-shm.mount: Deactivated successfully. Oct 8 19:43:34.309677 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300-shm.mount: Deactivated successfully. Oct 8 19:43:34.309759 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc-shm.mount: Deactivated successfully. Oct 8 19:43:34.724827 systemd[1]: Created slice kubepods-besteffort-pod36d90f88_3457_4191_84d5_72e6469f1596.slice - libcontainer container kubepods-besteffort-pod36d90f88_3457_4191_84d5_72e6469f1596.slice. 
Oct 8 19:43:34.727834 containerd[1473]: time="2024-10-08T19:43:34.727763138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4qp66,Uid:36d90f88-3457-4191-84d5-72e6469f1596,Namespace:calico-system,Attempt:0,}" Oct 8 19:43:34.800035 containerd[1473]: time="2024-10-08T19:43:34.799849193Z" level=error msg="Failed to destroy network for sandbox \"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:34.802974 containerd[1473]: time="2024-10-08T19:43:34.800582088Z" level=error msg="encountered an error cleaning up failed sandbox \"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:34.802974 containerd[1473]: time="2024-10-08T19:43:34.800673330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4qp66,Uid:36d90f88-3457-4191-84d5-72e6469f1596,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:34.802101 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8-shm.mount: Deactivated successfully. 
Oct 8 19:43:34.803618 kubelet[2660]: E1008 19:43:34.803425 2660 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:34.803618 kubelet[2660]: E1008 19:43:34.803496 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4qp66" Oct 8 19:43:34.803618 kubelet[2660]: E1008 19:43:34.803517 2660 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4qp66" Oct 8 19:43:34.803782 kubelet[2660]: E1008 19:43:34.803583 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4qp66_calico-system(36d90f88-3457-4191-84d5-72e6469f1596)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4qp66_calico-system(36d90f88-3457-4191-84d5-72e6469f1596)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4qp66" podUID="36d90f88-3457-4191-84d5-72e6469f1596" Oct 8 19:43:34.853111 kubelet[2660]: I1008 19:43:34.853066 2660 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Oct 8 19:43:34.855275 containerd[1473]: time="2024-10-08T19:43:34.855150730Z" level=info msg="StopPodSandbox for \"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\"" Oct 8 19:43:34.855524 containerd[1473]: time="2024-10-08T19:43:34.855476537Z" level=info msg="Ensure that sandbox 815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8 in task-service has been cleanup successfully" Oct 8 19:43:34.885152 containerd[1473]: time="2024-10-08T19:43:34.885078487Z" level=error msg="StopPodSandbox for \"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\" failed" error="failed to destroy network for sandbox \"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:34.885483 kubelet[2660]: E1008 19:43:34.885377 2660 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Oct 8 19:43:34.885542 kubelet[2660]: E1008 19:43:34.885486 2660 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8"} Oct 8 19:43:34.885595 kubelet[2660]: E1008 19:43:34.885536 2660 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36d90f88-3457-4191-84d5-72e6469f1596\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:43:34.885595 kubelet[2660]: E1008 19:43:34.885562 2660 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36d90f88-3457-4191-84d5-72e6469f1596\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4qp66" podUID="36d90f88-3457-4191-84d5-72e6469f1596" Oct 8 19:43:44.578215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2065879374.mount: Deactivated successfully. 
Oct 8 19:43:44.617273 containerd[1473]: time="2024-10-08T19:43:44.617131024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Oct 8 19:43:44.618960 containerd[1473]: time="2024-10-08T19:43:44.618064286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:44.619716 containerd[1473]: time="2024-10-08T19:43:44.619671563Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:44.622416 containerd[1473]: time="2024-10-08T19:43:44.622377827Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:44.623282 containerd[1473]: time="2024-10-08T19:43:44.623214006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 10.773681845s" Oct 8 19:43:44.623282 containerd[1473]: time="2024-10-08T19:43:44.623276968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Oct 8 19:43:44.640445 containerd[1473]: time="2024-10-08T19:43:44.640403970Z" level=info msg="CreateContainer within sandbox \"2190afd2b4eb4e22e155d70b4a907bfefc4e1ab2ba367762dd681a3460a58eed\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 19:43:44.657282 containerd[1473]: time="2024-10-08T19:43:44.657235205Z" level=info msg="CreateContainer 
within sandbox \"2190afd2b4eb4e22e155d70b4a907bfefc4e1ab2ba367762dd681a3460a58eed\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4c54c6d0868123d779e97b6cb86c33084e6e6120074861fce088f34c44866cd1\"" Oct 8 19:43:44.659122 containerd[1473]: time="2024-10-08T19:43:44.659087728Z" level=info msg="StartContainer for \"4c54c6d0868123d779e97b6cb86c33084e6e6120074861fce088f34c44866cd1\"" Oct 8 19:43:44.691146 systemd[1]: Started cri-containerd-4c54c6d0868123d779e97b6cb86c33084e6e6120074861fce088f34c44866cd1.scope - libcontainer container 4c54c6d0868123d779e97b6cb86c33084e6e6120074861fce088f34c44866cd1. Oct 8 19:43:44.726435 containerd[1473]: time="2024-10-08T19:43:44.726388668Z" level=info msg="StartContainer for \"4c54c6d0868123d779e97b6cb86c33084e6e6120074861fce088f34c44866cd1\" returns successfully" Oct 8 19:43:44.892489 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 19:43:44.893028 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
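The PullImage entry above reports the image size and elapsed time verbatim (113057162 bytes in 10.773681845s), which is enough for a quick sanity check of the implied transfer rate:

```python
# Figures taken verbatim from the containerd "Pulled image" entry above.
image_bytes = 113_057_162       # repo digest size reported by containerd
pull_seconds = 10.773681845     # "in 10.773681845s"

throughput_mib_s = image_bytes / pull_seconds / (1024 * 1024)
print(f"{throughput_mib_s:.1f} MiB/s")  # roughly 10 MiB/s
```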
Oct 8 19:43:45.718319 containerd[1473]: time="2024-10-08T19:43:45.717770944Z" level=info msg="StopPodSandbox for \"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\"" Oct 8 19:43:45.718319 containerd[1473]: time="2024-10-08T19:43:45.717859866Z" level=info msg="StopPodSandbox for \"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\"" Oct 8 19:43:45.807304 kubelet[2660]: I1008 19:43:45.806697 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cdlfc" podStartSLOduration=2.492083364 podStartE2EDuration="23.806497242s" podCreationTimestamp="2024-10-08 19:43:22 +0000 UTC" firstStartedPulling="2024-10-08 19:43:23.310714373 +0000 UTC m=+13.722644506" lastFinishedPulling="2024-10-08 19:43:44.625128251 +0000 UTC m=+35.037058384" observedRunningTime="2024-10-08 19:43:44.907221271 +0000 UTC m=+35.319151484" watchObservedRunningTime="2024-10-08 19:43:45.806497242 +0000 UTC m=+36.218427455" Oct 8 19:43:45.899116 containerd[1473]: 2024-10-08 19:43:45.806 [INFO][3737] k8s.go 608: Cleaning up netns ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Oct 8 19:43:45.899116 containerd[1473]: 2024-10-08 19:43:45.806 [INFO][3737] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" iface="eth0" netns="/var/run/netns/cni-4bd9ab5d-d26c-ed4b-a31d-43d3bbb2d037" Oct 8 19:43:45.899116 containerd[1473]: 2024-10-08 19:43:45.806 [INFO][3737] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" iface="eth0" netns="/var/run/netns/cni-4bd9ab5d-d26c-ed4b-a31d-43d3bbb2d037" Oct 8 19:43:45.899116 containerd[1473]: 2024-10-08 19:43:45.806 [INFO][3737] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" iface="eth0" netns="/var/run/netns/cni-4bd9ab5d-d26c-ed4b-a31d-43d3bbb2d037" Oct 8 19:43:45.899116 containerd[1473]: 2024-10-08 19:43:45.806 [INFO][3737] k8s.go 615: Releasing IP address(es) ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Oct 8 19:43:45.899116 containerd[1473]: 2024-10-08 19:43:45.806 [INFO][3737] utils.go 188: Calico CNI releasing IP address ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Oct 8 19:43:45.899116 containerd[1473]: 2024-10-08 19:43:45.879 [INFO][3750] ipam_plugin.go 417: Releasing address using handleID ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" HandleID="k8s-pod-network.815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Workload="ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0" Oct 8 19:43:45.899116 containerd[1473]: 2024-10-08 19:43:45.879 [INFO][3750] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:43:45.899116 containerd[1473]: 2024-10-08 19:43:45.879 [INFO][3750] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:43:45.899116 containerd[1473]: 2024-10-08 19:43:45.891 [WARNING][3750] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" HandleID="k8s-pod-network.815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Workload="ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0" Oct 8 19:43:45.899116 containerd[1473]: 2024-10-08 19:43:45.892 [INFO][3750] ipam_plugin.go 445: Releasing address using workloadID ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" HandleID="k8s-pod-network.815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Workload="ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0" Oct 8 19:43:45.899116 containerd[1473]: 2024-10-08 19:43:45.895 [INFO][3750] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:43:45.899116 containerd[1473]: 2024-10-08 19:43:45.896 [INFO][3737] k8s.go 621: Teardown processing complete. ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Oct 8 19:43:45.901819 systemd[1]: run-netns-cni\x2d4bd9ab5d\x2dd26c\x2ded4b\x2da31d\x2d43d3bbb2d037.mount: Deactivated successfully. 
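The pod_startup_latency_tracker entry a few lines up relates its two durations in a recoverable way: podStartE2EDuration is observed-running time minus pod creation time, and podStartSLOduration is that figure with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted. A sketch reproducing both from the logged timestamps, truncated to microsecond precision:

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f"
# Timestamps copied from the pod_startup_latency_tracker entry above.
created   = datetime.strptime("2024-10-08 19:43:22.000000", fmt)
pull_from = datetime.strptime("2024-10-08 19:43:23.310714", fmt)
pull_to   = datetime.strptime("2024-10-08 19:43:44.625128", fmt)
observed  = datetime.strptime("2024-10-08 19:43:45.806497", fmt)

e2e = (observed - created).total_seconds()          # podStartE2EDuration
slo = e2e - (pull_to - pull_from).total_seconds()   # podStartSLOduration, pull time excluded
print(round(e2e, 6), round(slo, 6))                 # 23.806497 2.492083
```

The 21.3s spent pulling the calico/node image accounts for nearly all of the 23.8s end-to-end startup.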
Oct 8 19:43:45.904167 containerd[1473]: time="2024-10-08T19:43:45.904017989Z" level=info msg="TearDown network for sandbox \"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\" successfully" Oct 8 19:43:45.904394 containerd[1473]: time="2024-10-08T19:43:45.904230474Z" level=info msg="StopPodSandbox for \"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\" returns successfully" Oct 8 19:43:45.906812 containerd[1473]: time="2024-10-08T19:43:45.905861952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4qp66,Uid:36d90f88-3457-4191-84d5-72e6469f1596,Namespace:calico-system,Attempt:1,}" Oct 8 19:43:45.920422 containerd[1473]: 2024-10-08 19:43:45.812 [INFO][3738] k8s.go 608: Cleaning up netns ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Oct 8 19:43:45.920422 containerd[1473]: 2024-10-08 19:43:45.813 [INFO][3738] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" iface="eth0" netns="/var/run/netns/cni-344d64fe-00fd-6b8e-cbd4-b6b71dc02fd3" Oct 8 19:43:45.920422 containerd[1473]: 2024-10-08 19:43:45.813 [INFO][3738] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" iface="eth0" netns="/var/run/netns/cni-344d64fe-00fd-6b8e-cbd4-b6b71dc02fd3" Oct 8 19:43:45.920422 containerd[1473]: 2024-10-08 19:43:45.813 [INFO][3738] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" iface="eth0" netns="/var/run/netns/cni-344d64fe-00fd-6b8e-cbd4-b6b71dc02fd3" Oct 8 19:43:45.920422 containerd[1473]: 2024-10-08 19:43:45.813 [INFO][3738] k8s.go 615: Releasing IP address(es) ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Oct 8 19:43:45.920422 containerd[1473]: 2024-10-08 19:43:45.813 [INFO][3738] utils.go 188: Calico CNI releasing IP address ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Oct 8 19:43:45.920422 containerd[1473]: 2024-10-08 19:43:45.879 [INFO][3752] ipam_plugin.go 417: Releasing address using handleID ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" HandleID="k8s-pod-network.4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0" Oct 8 19:43:45.920422 containerd[1473]: 2024-10-08 19:43:45.879 [INFO][3752] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:43:45.920422 containerd[1473]: 2024-10-08 19:43:45.895 [INFO][3752] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:43:45.920422 containerd[1473]: 2024-10-08 19:43:45.913 [WARNING][3752] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" HandleID="k8s-pod-network.4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0" Oct 8 19:43:45.920422 containerd[1473]: 2024-10-08 19:43:45.913 [INFO][3752] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" HandleID="k8s-pod-network.4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0" Oct 8 19:43:45.920422 containerd[1473]: 2024-10-08 19:43:45.915 [INFO][3752] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:43:45.920422 containerd[1473]: 2024-10-08 19:43:45.917 [INFO][3738] k8s.go 621: Teardown processing complete. ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Oct 8 19:43:45.921403 containerd[1473]: time="2024-10-08T19:43:45.921373599Z" level=info msg="TearDown network for sandbox \"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\" successfully" Oct 8 19:43:45.921514 containerd[1473]: time="2024-10-08T19:43:45.921497682Z" level=info msg="StopPodSandbox for \"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\" returns successfully" Oct 8 19:43:45.924629 containerd[1473]: time="2024-10-08T19:43:45.922510946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-drcv7,Uid:0ce72720-be2a-4924-bded-181fc38d374d,Namespace:kube-system,Attempt:1,}" Oct 8 19:43:45.926361 systemd[1]: run-netns-cni\x2d344d64fe\x2d00fd\x2d6b8e\x2dcbd4\x2db6b71dc02fd3.mount: Deactivated successfully. 
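The `run-netns-cni\x2d...` mount units that systemd deactivates above use systemd's unit-name escaping: `/` becomes `-` and a literal `-` becomes `\x2d`. A small sketch recovering the netns path from such a unit name (a simplified inverse of `systemd-escape`, handling only the `\x2d` case seen in this log, not the full escape grammar):

```python
def unescape_mount_unit(unit: str) -> str:
    """Recover a filesystem path from a systemd mount-unit style name.

    Simplified: protect literal dashes escaped as "\\x2d", turn the
    remaining dashes back into path separators, then restore the dashes.
    """
    path = unit.replace(r"\x2d", "\x00").replace("-", "/").replace("\x00", "-")
    return "/" + path


print(unescape_mount_unit(r"run-netns-cni\x2d4bd9ab5d\x2dd26c\x2ded4b\x2da31d\x2d43d3bbb2d037"))
# /run/netns/cni-4bd9ab5d-d26c-ed4b-a31d-43d3bbb2d037
```

The recovered name matches the `netns="/var/run/netns/cni-4bd9ab5d-..."` path in the Calico teardown trace above (`/var/run` being a symlink to `/run` on this system).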
Oct 8 19:43:46.119064 systemd-networkd[1366]: caliabe63571239: Link UP Oct 8 19:43:46.121289 systemd-networkd[1366]: caliabe63571239: Gained carrier Oct 8 19:43:46.142542 containerd[1473]: 2024-10-08 19:43:45.988 [INFO][3783] utils.go 100: File /var/lib/calico/mtu does not exist Oct 8 19:43:46.142542 containerd[1473]: 2024-10-08 19:43:46.014 [INFO][3783] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0 csi-node-driver- calico-system 36d90f88-3457-4191-84d5-72e6469f1596 666 0 2024-10-08 19:43:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:779867c8f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975-2-2-0-004c89fa14 csi-node-driver-4qp66 eth0 default [] [] [kns.calico-system ksa.calico-system.default] caliabe63571239 [] []}} ContainerID="d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88" Namespace="calico-system" Pod="csi-node-driver-4qp66" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-" Oct 8 19:43:46.142542 containerd[1473]: 2024-10-08 19:43:46.014 [INFO][3783] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88" Namespace="calico-system" Pod="csi-node-driver-4qp66" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0" Oct 8 19:43:46.142542 containerd[1473]: 2024-10-08 19:43:46.058 [INFO][3808] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88" HandleID="k8s-pod-network.d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88" Workload="ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0" Oct 8 
19:43:46.142542 containerd[1473]: 2024-10-08 19:43:46.074 [INFO][3808] ipam_plugin.go 270: Auto assigning IP ContainerID="d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88" HandleID="k8s-pod-network.d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88" Workload="ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004f0e60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975-2-2-0-004c89fa14", "pod":"csi-node-driver-4qp66", "timestamp":"2024-10-08 19:43:46.058373129 +0000 UTC"}, Hostname:"ci-3975-2-2-0-004c89fa14", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:43:46.142542 containerd[1473]: 2024-10-08 19:43:46.074 [INFO][3808] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:43:46.142542 containerd[1473]: 2024-10-08 19:43:46.074 [INFO][3808] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:43:46.142542 containerd[1473]: 2024-10-08 19:43:46.074 [INFO][3808] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-0-004c89fa14' Oct 8 19:43:46.142542 containerd[1473]: 2024-10-08 19:43:46.077 [INFO][3808] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88" host="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:46.142542 containerd[1473]: 2024-10-08 19:43:46.084 [INFO][3808] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:46.142542 containerd[1473]: 2024-10-08 19:43:46.090 [INFO][3808] ipam.go 489: Trying affinity for 192.168.23.128/26 host="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:46.142542 containerd[1473]: 2024-10-08 19:43:46.092 [INFO][3808] ipam.go 155: Attempting to load block cidr=192.168.23.128/26 host="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:46.142542 containerd[1473]: 2024-10-08 19:43:46.095 [INFO][3808] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.23.128/26 host="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:46.142542 containerd[1473]: 2024-10-08 19:43:46.095 [INFO][3808] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.128/26 handle="k8s-pod-network.d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88" host="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:46.142542 containerd[1473]: 2024-10-08 19:43:46.097 [INFO][3808] ipam.go 1685: Creating new handle: k8s-pod-network.d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88 Oct 8 19:43:46.142542 containerd[1473]: 2024-10-08 19:43:46.102 [INFO][3808] ipam.go 1203: Writing block in order to claim IPs block=192.168.23.128/26 handle="k8s-pod-network.d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88" host="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:46.142542 containerd[1473]: 2024-10-08 19:43:46.109 [INFO][3808] ipam.go 1216: Successfully claimed IPs: [192.168.23.129/26] 
block=192.168.23.128/26 handle="k8s-pod-network.d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88" host="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:46.142542 containerd[1473]: 2024-10-08 19:43:46.109 [INFO][3808] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.129/26] handle="k8s-pod-network.d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88" host="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:46.142542 containerd[1473]: 2024-10-08 19:43:46.109 [INFO][3808] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:43:46.142542 containerd[1473]: 2024-10-08 19:43:46.109 [INFO][3808] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.23.129/26] IPv6=[] ContainerID="d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88" HandleID="k8s-pod-network.d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88" Workload="ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0" Oct 8 19:43:46.143635 containerd[1473]: 2024-10-08 19:43:46.111 [INFO][3783] k8s.go 386: Populated endpoint ContainerID="d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88" Namespace="calico-system" Pod="csi-node-driver-4qp66" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36d90f88-3457-4191-84d5-72e6469f1596", ResourceVersion:"666", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-0-004c89fa14", ContainerID:"", Pod:"csi-node-driver-4qp66", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.23.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliabe63571239", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:43:46.143635 containerd[1473]: 2024-10-08 19:43:46.111 [INFO][3783] k8s.go 387: Calico CNI using IPs: [192.168.23.129/32] ContainerID="d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88" Namespace="calico-system" Pod="csi-node-driver-4qp66" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0" Oct 8 19:43:46.143635 containerd[1473]: 2024-10-08 19:43:46.111 [INFO][3783] dataplane_linux.go 68: Setting the host side veth name to caliabe63571239 ContainerID="d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88" Namespace="calico-system" Pod="csi-node-driver-4qp66" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0" Oct 8 19:43:46.143635 containerd[1473]: 2024-10-08 19:43:46.120 [INFO][3783] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88" Namespace="calico-system" Pod="csi-node-driver-4qp66" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0" Oct 8 19:43:46.143635 containerd[1473]: 2024-10-08 19:43:46.121 [INFO][3783] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88" Namespace="calico-system" Pod="csi-node-driver-4qp66" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36d90f88-3457-4191-84d5-72e6469f1596", ResourceVersion:"666", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-0-004c89fa14", ContainerID:"d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88", Pod:"csi-node-driver-4qp66", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.23.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliabe63571239", MAC:"b2:25:3b:7b:b3:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:43:46.143635 containerd[1473]: 2024-10-08 19:43:46.136 [INFO][3783] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88" Namespace="calico-system" 
Pod="csi-node-driver-4qp66" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0" Oct 8 19:43:46.160764 containerd[1473]: time="2024-10-08T19:43:46.160634686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:43:46.160764 containerd[1473]: time="2024-10-08T19:43:46.160699287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:46.160764 containerd[1473]: time="2024-10-08T19:43:46.160718488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:43:46.160764 containerd[1473]: time="2024-10-08T19:43:46.160732368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:46.178684 systemd[1]: Started cri-containerd-d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88.scope - libcontainer container d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88. 
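The ipam_plugin trace above brackets the allocation with "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock", so concurrent CNI ADDs on one node serialize their updates to the affinity block (the second request in this log acquires the lock at 19:43:46.109, the moment the first releases it). A toy illustration of that serialization pattern; the names and counter are illustrative, not Calico's implementation:

```python
import threading

host_ipam_lock = threading.Lock()  # stand-in for Calico's host-wide IPAM lock
allocated = []

counter = iter(range(129, 193))    # host ordinals of the 192.168.23.128/26 block


def assign_address(pod):
    # Mirrors the log's acquire -> claim -> release sequence: the lock
    # guarantees each pod claims a distinct address from the shared block.
    with host_ipam_lock:
        allocated.append((pod, f"192.168.23.{next(counter)}"))


threads = [threading.Thread(target=assign_address, args=(pod,))
           for pod in ("csi-node-driver-4qp66", "coredns-6f6b679f8f-drcv7")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(addr for _, addr in allocated))
```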
Oct 8 19:43:46.214770 containerd[1473]: time="2024-10-08T19:43:46.212463401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4qp66,Uid:36d90f88-3457-4191-84d5-72e6469f1596,Namespace:calico-system,Attempt:1,} returns sandbox id \"d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88\"" Oct 8 19:43:46.217151 containerd[1473]: time="2024-10-08T19:43:46.216597379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 19:43:46.225584 systemd-networkd[1366]: cali5248bcce10b: Link UP Oct 8 19:43:46.226249 systemd-networkd[1366]: cali5248bcce10b: Gained carrier Oct 8 19:43:46.240792 containerd[1473]: 2024-10-08 19:43:46.004 [INFO][3793] utils.go 100: File /var/lib/calico/mtu does not exist Oct 8 19:43:46.240792 containerd[1473]: 2024-10-08 19:43:46.028 [INFO][3793] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0 coredns-6f6b679f8f- kube-system 0ce72720-be2a-4924-bded-181fc38d374d 667 0 2024-10-08 19:43:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975-2-2-0-004c89fa14 coredns-6f6b679f8f-drcv7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5248bcce10b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e" Namespace="kube-system" Pod="coredns-6f6b679f8f-drcv7" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-" Oct 8 19:43:46.240792 containerd[1473]: 2024-10-08 19:43:46.028 [INFO][3793] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e" Namespace="kube-system" Pod="coredns-6f6b679f8f-drcv7" 
WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0" Oct 8 19:43:46.240792 containerd[1473]: 2024-10-08 19:43:46.066 [INFO][3813] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e" HandleID="k8s-pod-network.4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0" Oct 8 19:43:46.240792 containerd[1473]: 2024-10-08 19:43:46.081 [INFO][3813] ipam_plugin.go 270: Auto assigning IP ContainerID="4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e" HandleID="k8s-pod-network.4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400058f6e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975-2-2-0-004c89fa14", "pod":"coredns-6f6b679f8f-drcv7", "timestamp":"2024-10-08 19:43:46.06679721 +0000 UTC"}, Hostname:"ci-3975-2-2-0-004c89fa14", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:43:46.240792 containerd[1473]: 2024-10-08 19:43:46.082 [INFO][3813] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:43:46.240792 containerd[1473]: 2024-10-08 19:43:46.109 [INFO][3813] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:43:46.240792 containerd[1473]: 2024-10-08 19:43:46.110 [INFO][3813] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-0-004c89fa14' Oct 8 19:43:46.240792 containerd[1473]: 2024-10-08 19:43:46.178 [INFO][3813] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e" host="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:46.240792 containerd[1473]: 2024-10-08 19:43:46.185 [INFO][3813] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:46.240792 containerd[1473]: 2024-10-08 19:43:46.195 [INFO][3813] ipam.go 489: Trying affinity for 192.168.23.128/26 host="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:46.240792 containerd[1473]: 2024-10-08 19:43:46.198 [INFO][3813] ipam.go 155: Attempting to load block cidr=192.168.23.128/26 host="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:46.240792 containerd[1473]: 2024-10-08 19:43:46.203 [INFO][3813] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.23.128/26 host="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:46.240792 containerd[1473]: 2024-10-08 19:43:46.203 [INFO][3813] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.128/26 handle="k8s-pod-network.4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e" host="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:46.240792 containerd[1473]: 2024-10-08 19:43:46.205 [INFO][3813] ipam.go 1685: Creating new handle: k8s-pod-network.4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e Oct 8 19:43:46.240792 containerd[1473]: 2024-10-08 19:43:46.210 [INFO][3813] ipam.go 1203: Writing block in order to claim IPs block=192.168.23.128/26 handle="k8s-pod-network.4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e" host="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:46.240792 containerd[1473]: 2024-10-08 19:43:46.219 [INFO][3813] ipam.go 1216: Successfully claimed IPs: [192.168.23.130/26] 
block=192.168.23.128/26 handle="k8s-pod-network.4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e" host="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:46.240792 containerd[1473]: 2024-10-08 19:43:46.219 [INFO][3813] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.130/26] handle="k8s-pod-network.4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e" host="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:46.240792 containerd[1473]: 2024-10-08 19:43:46.219 [INFO][3813] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:43:46.240792 containerd[1473]: 2024-10-08 19:43:46.219 [INFO][3813] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.23.130/26] IPv6=[] ContainerID="4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e" HandleID="k8s-pod-network.4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0"
Oct 8 19:43:46.241900 containerd[1473]: 2024-10-08 19:43:46.222 [INFO][3793] k8s.go 386: Populated endpoint ContainerID="4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e" Namespace="kube-system" Pod="coredns-6f6b679f8f-drcv7" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"0ce72720-be2a-4924-bded-181fc38d374d", ResourceVersion:"667", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-0-004c89fa14", ContainerID:"", Pod:"coredns-6f6b679f8f-drcv7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5248bcce10b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:43:46.241900 containerd[1473]: 2024-10-08 19:43:46.222 [INFO][3793] k8s.go 387: Calico CNI using IPs: [192.168.23.130/32] ContainerID="4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e" Namespace="kube-system" Pod="coredns-6f6b679f8f-drcv7" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0"
Oct 8 19:43:46.241900 containerd[1473]: 2024-10-08 19:43:46.222 [INFO][3793] dataplane_linux.go 68: Setting the host side veth name to cali5248bcce10b ContainerID="4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e" Namespace="kube-system" Pod="coredns-6f6b679f8f-drcv7" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0"
Oct 8 19:43:46.241900 containerd[1473]: 2024-10-08 19:43:46.226 [INFO][3793] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e" Namespace="kube-system" Pod="coredns-6f6b679f8f-drcv7" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0"
Oct 8 19:43:46.241900 containerd[1473]: 2024-10-08 19:43:46.227 [INFO][3793] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e" Namespace="kube-system" Pod="coredns-6f6b679f8f-drcv7" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"0ce72720-be2a-4924-bded-181fc38d374d", ResourceVersion:"667", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-0-004c89fa14", ContainerID:"4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e", Pod:"coredns-6f6b679f8f-drcv7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5248bcce10b", MAC:"c2:c6:f9:3b:3e:32", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:43:46.241900 containerd[1473]: 2024-10-08 19:43:46.237 [INFO][3793] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e" Namespace="kube-system" Pod="coredns-6f6b679f8f-drcv7" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0"
Oct 8 19:43:46.270027 containerd[1473]: time="2024-10-08T19:43:46.267696717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:43:46.270027 containerd[1473]: time="2024-10-08T19:43:46.267768038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:43:46.270027 containerd[1473]: time="2024-10-08T19:43:46.267788959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:43:46.270027 containerd[1473]: time="2024-10-08T19:43:46.267799159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:43:46.284098 systemd[1]: Started cri-containerd-4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e.scope - libcontainer container 4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e.
Oct 8 19:43:46.323174 containerd[1473]: time="2024-10-08T19:43:46.323028115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-drcv7,Uid:0ce72720-be2a-4924-bded-181fc38d374d,Namespace:kube-system,Attempt:1,} returns sandbox id \"4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e\""
Oct 8 19:43:46.328593 containerd[1473]: time="2024-10-08T19:43:46.328492525Z" level=info msg="CreateContainer within sandbox \"4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Oct 8 19:43:46.409853 containerd[1473]: time="2024-10-08T19:43:46.409719421Z" level=info msg="CreateContainer within sandbox \"4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ad33962529be560eeb24380341b7bd50becada24501c17c9b20c2042918d7459\""
Oct 8 19:43:46.412702 containerd[1473]: time="2024-10-08T19:43:46.411841511Z" level=info msg="StartContainer for \"ad33962529be560eeb24380341b7bd50becada24501c17c9b20c2042918d7459\""
Oct 8 19:43:46.451527 systemd[1]: Started cri-containerd-ad33962529be560eeb24380341b7bd50becada24501c17c9b20c2042918d7459.scope - libcontainer container ad33962529be560eeb24380341b7bd50becada24501c17c9b20c2042918d7459.
Oct 8 19:43:46.488585 containerd[1473]: time="2024-10-08T19:43:46.488511298Z" level=info msg="StartContainer for \"ad33962529be560eeb24380341b7bd50becada24501c17c9b20c2042918d7459\" returns successfully"
Oct 8 19:43:46.772956 kernel: bpftool[4083]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Oct 8 19:43:46.906568 kubelet[2660]: I1008 19:43:46.906359 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-drcv7" podStartSLOduration=30.906326573 podStartE2EDuration="30.906326573s" podCreationTimestamp="2024-10-08 19:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:43:46.906025606 +0000 UTC m=+37.317955899" watchObservedRunningTime="2024-10-08 19:43:46.906326573 +0000 UTC m=+37.318256746"
Oct 8 19:43:47.051823 systemd-networkd[1366]: vxlan.calico: Link UP
Oct 8 19:43:47.051836 systemd-networkd[1366]: vxlan.calico: Gained carrier
Oct 8 19:43:47.291283 systemd-networkd[1366]: cali5248bcce10b: Gained IPv6LL
Oct 8 19:43:47.726259 containerd[1473]: time="2024-10-08T19:43:47.726203072Z" level=info msg="StopPodSandbox for \"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\""
Oct 8 19:43:47.743494 containerd[1473]: time="2024-10-08T19:43:47.743431766Z" level=info msg="StopPodSandbox for \"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\""
Oct 8 19:43:47.867682 systemd-networkd[1366]: caliabe63571239: Gained IPv6LL
Oct 8 19:43:47.927516 containerd[1473]: 2024-10-08 19:43:47.841 [INFO][4213] k8s.go 608: Cleaning up netns ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300"
Oct 8 19:43:47.927516 containerd[1473]: 2024-10-08 19:43:47.841 [INFO][4213] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" iface="eth0" netns="/var/run/netns/cni-138df166-7c42-0f30-dbf4-4f65dfcfb7f8"
Oct 8 19:43:47.927516 containerd[1473]: 2024-10-08 19:43:47.841 [INFO][4213] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" iface="eth0" netns="/var/run/netns/cni-138df166-7c42-0f30-dbf4-4f65dfcfb7f8"
Oct 8 19:43:47.927516 containerd[1473]: 2024-10-08 19:43:47.842 [INFO][4213] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" iface="eth0" netns="/var/run/netns/cni-138df166-7c42-0f30-dbf4-4f65dfcfb7f8"
Oct 8 19:43:47.927516 containerd[1473]: 2024-10-08 19:43:47.842 [INFO][4213] k8s.go 615: Releasing IP address(es) ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300"
Oct 8 19:43:47.927516 containerd[1473]: 2024-10-08 19:43:47.842 [INFO][4213] utils.go 188: Calico CNI releasing IP address ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300"
Oct 8 19:43:47.927516 containerd[1473]: 2024-10-08 19:43:47.901 [INFO][4225] ipam_plugin.go 417: Releasing address using handleID ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" HandleID="k8s-pod-network.b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0"
Oct 8 19:43:47.927516 containerd[1473]: 2024-10-08 19:43:47.901 [INFO][4225] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:43:47.927516 containerd[1473]: 2024-10-08 19:43:47.901 [INFO][4225] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:43:47.927516 containerd[1473]: 2024-10-08 19:43:47.916 [WARNING][4225] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" HandleID="k8s-pod-network.b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0"
Oct 8 19:43:47.927516 containerd[1473]: 2024-10-08 19:43:47.916 [INFO][4225] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" HandleID="k8s-pod-network.b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0"
Oct 8 19:43:47.927516 containerd[1473]: 2024-10-08 19:43:47.920 [INFO][4225] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:43:47.927516 containerd[1473]: 2024-10-08 19:43:47.925 [INFO][4213] k8s.go 621: Teardown processing complete. ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300"
Oct 8 19:43:47.929280 containerd[1473]: time="2024-10-08T19:43:47.927982794Z" level=info msg="TearDown network for sandbox \"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\" successfully"
Oct 8 19:43:47.929280 containerd[1473]: time="2024-10-08T19:43:47.928990579Z" level=info msg="StopPodSandbox for \"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\" returns successfully"
Oct 8 19:43:47.931895 containerd[1473]: time="2024-10-08T19:43:47.931691523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pj827,Uid:a9aae756-4b47-473b-9e46-2501e5f0f460,Namespace:kube-system,Attempt:1,}"
Oct 8 19:43:47.932833 systemd[1]: run-netns-cni\x2d138df166\x2d7c42\x2d0f30\x2ddbf4\x2d4f65dfcfb7f8.mount: Deactivated successfully.
Oct 8 19:43:47.942348 containerd[1473]: 2024-10-08 19:43:47.850 [INFO][4214] k8s.go 608: Cleaning up netns ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc"
Oct 8 19:43:47.942348 containerd[1473]: 2024-10-08 19:43:47.852 [INFO][4214] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" iface="eth0" netns="/var/run/netns/cni-eacc939f-d388-2007-9efa-fc71fa850f85"
Oct 8 19:43:47.942348 containerd[1473]: 2024-10-08 19:43:47.852 [INFO][4214] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" iface="eth0" netns="/var/run/netns/cni-eacc939f-d388-2007-9efa-fc71fa850f85"
Oct 8 19:43:47.942348 containerd[1473]: 2024-10-08 19:43:47.853 [INFO][4214] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" iface="eth0" netns="/var/run/netns/cni-eacc939f-d388-2007-9efa-fc71fa850f85"
Oct 8 19:43:47.942348 containerd[1473]: 2024-10-08 19:43:47.853 [INFO][4214] k8s.go 615: Releasing IP address(es) ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc"
Oct 8 19:43:47.942348 containerd[1473]: 2024-10-08 19:43:47.853 [INFO][4214] utils.go 188: Calico CNI releasing IP address ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc"
Oct 8 19:43:47.942348 containerd[1473]: 2024-10-08 19:43:47.909 [INFO][4229] ipam_plugin.go 417: Releasing address using handleID ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" HandleID="k8s-pod-network.95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" Workload="ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0"
Oct 8 19:43:47.942348 containerd[1473]: 2024-10-08 19:43:47.909 [INFO][4229] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:43:47.942348 containerd[1473]: 2024-10-08 19:43:47.921 [INFO][4229] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:43:47.942348 containerd[1473]: 2024-10-08 19:43:47.936 [WARNING][4229] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" HandleID="k8s-pod-network.95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" Workload="ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0"
Oct 8 19:43:47.942348 containerd[1473]: 2024-10-08 19:43:47.936 [INFO][4229] ipam_plugin.go 445: Releasing address using workloadID ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" HandleID="k8s-pod-network.95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" Workload="ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0"
Oct 8 19:43:47.942348 containerd[1473]: 2024-10-08 19:43:47.938 [INFO][4229] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:43:47.942348 containerd[1473]: 2024-10-08 19:43:47.940 [INFO][4214] k8s.go 621: Teardown processing complete. ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc"
Oct 8 19:43:47.945061 containerd[1473]: time="2024-10-08T19:43:47.944813678Z" level=info msg="TearDown network for sandbox \"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\" successfully"
Oct 8 19:43:47.945061 containerd[1473]: time="2024-10-08T19:43:47.944853439Z" level=info msg="StopPodSandbox for \"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\" returns successfully"
Oct 8 19:43:47.945259 systemd[1]: run-netns-cni\x2deacc939f\x2dd388\x2d2007\x2d9efa\x2dfc71fa850f85.mount: Deactivated successfully.
Oct 8 19:43:47.947556 containerd[1473]: time="2024-10-08T19:43:47.947493623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:47.948692 containerd[1473]: time="2024-10-08T19:43:47.948647930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7569f59856-kdnqq,Uid:554c64d3-a559-43d2-9d88-70455c3e4442,Namespace:calico-system,Attempt:1,}"
Oct 8 19:43:47.951318 containerd[1473]: time="2024-10-08T19:43:47.951271353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060"
Oct 8 19:43:47.955951 containerd[1473]: time="2024-10-08T19:43:47.955880624Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:47.966096 containerd[1473]: time="2024-10-08T19:43:47.962800550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:43:47.966096 containerd[1473]: time="2024-10-08T19:43:47.963672731Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 1.747034711s"
Oct 8 19:43:47.966096 containerd[1473]: time="2024-10-08T19:43:47.963706492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\""
Oct 8 19:43:47.987729 containerd[1473]: time="2024-10-08T19:43:47.987610505Z" level=info msg="CreateContainer within sandbox \"d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Oct 8 19:43:48.008301 containerd[1473]: time="2024-10-08T19:43:48.008252802Z" level=info msg="CreateContainer within sandbox \"d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4ec61f67d3ddcc2451d3b23812fa89e772525857fe9c05e4de88f1efa652011e\""
Oct 8 19:43:48.010633 containerd[1473]: time="2024-10-08T19:43:48.010588218Z" level=info msg="StartContainer for \"4ec61f67d3ddcc2451d3b23812fa89e772525857fe9c05e4de88f1efa652011e\""
Oct 8 19:43:48.063161 systemd[1]: Started cri-containerd-4ec61f67d3ddcc2451d3b23812fa89e772525857fe9c05e4de88f1efa652011e.scope - libcontainer container 4ec61f67d3ddcc2451d3b23812fa89e772525857fe9c05e4de88f1efa652011e.
Oct 8 19:43:48.126957 containerd[1473]: time="2024-10-08T19:43:48.126868348Z" level=info msg="StartContainer for \"4ec61f67d3ddcc2451d3b23812fa89e772525857fe9c05e4de88f1efa652011e\" returns successfully"
Oct 8 19:43:48.134387 containerd[1473]: time="2024-10-08T19:43:48.134348769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\""
Oct 8 19:43:48.215465 systemd-networkd[1366]: calif45aa66db35: Link UP
Oct 8 19:43:48.217177 systemd-networkd[1366]: calif45aa66db35: Gained carrier
Oct 8 19:43:48.233758 containerd[1473]: 2024-10-08 19:43:48.049 [INFO][4239] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0 coredns-6f6b679f8f- kube-system a9aae756-4b47-473b-9e46-2501e5f0f460 694 0 2024-10-08 19:43:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975-2-2-0-004c89fa14 coredns-6f6b679f8f-pj827 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif45aa66db35 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0" Namespace="kube-system" Pod="coredns-6f6b679f8f-pj827" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-"
Oct 8 19:43:48.233758 containerd[1473]: 2024-10-08 19:43:48.049 [INFO][4239] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0" Namespace="kube-system" Pod="coredns-6f6b679f8f-pj827" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0"
Oct 8 19:43:48.233758 containerd[1473]: 2024-10-08 19:43:48.126 [INFO][4293] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0" HandleID="k8s-pod-network.02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0"
Oct 8 19:43:48.233758 containerd[1473]: 2024-10-08 19:43:48.160 [INFO][4293] ipam_plugin.go 270: Auto assigning IP ContainerID="02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0" HandleID="k8s-pod-network.02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030a9d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975-2-2-0-004c89fa14", "pod":"coredns-6f6b679f8f-pj827", "timestamp":"2024-10-08 19:43:48.126296774 +0000 UTC"}, Hostname:"ci-3975-2-2-0-004c89fa14", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 8 19:43:48.233758 containerd[1473]: 2024-10-08 19:43:48.160 [INFO][4293] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:43:48.233758 containerd[1473]: 2024-10-08 19:43:48.161 [INFO][4293] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:43:48.233758 containerd[1473]: 2024-10-08 19:43:48.161 [INFO][4293] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-0-004c89fa14'
Oct 8 19:43:48.233758 containerd[1473]: 2024-10-08 19:43:48.164 [INFO][4293] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0" host="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:48.233758 containerd[1473]: 2024-10-08 19:43:48.170 [INFO][4293] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:48.233758 containerd[1473]: 2024-10-08 19:43:48.178 [INFO][4293] ipam.go 489: Trying affinity for 192.168.23.128/26 host="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:48.233758 containerd[1473]: 2024-10-08 19:43:48.181 [INFO][4293] ipam.go 155: Attempting to load block cidr=192.168.23.128/26 host="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:48.233758 containerd[1473]: 2024-10-08 19:43:48.185 [INFO][4293] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.23.128/26 host="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:48.233758 containerd[1473]: 2024-10-08 19:43:48.185 [INFO][4293] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.128/26 handle="k8s-pod-network.02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0" host="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:48.233758 containerd[1473]: 2024-10-08 19:43:48.188 [INFO][4293] ipam.go 1685: Creating new handle: k8s-pod-network.02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0
Oct 8 19:43:48.233758 containerd[1473]: 2024-10-08 19:43:48.195 [INFO][4293] ipam.go 1203: Writing block in order to claim IPs block=192.168.23.128/26 handle="k8s-pod-network.02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0" host="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:48.233758 containerd[1473]: 2024-10-08 19:43:48.205 [INFO][4293] ipam.go 1216: Successfully claimed IPs: [192.168.23.131/26] block=192.168.23.128/26 handle="k8s-pod-network.02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0" host="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:48.233758 containerd[1473]: 2024-10-08 19:43:48.205 [INFO][4293] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.131/26] handle="k8s-pod-network.02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0" host="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:48.233758 containerd[1473]: 2024-10-08 19:43:48.205 [INFO][4293] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:43:48.233758 containerd[1473]: 2024-10-08 19:43:48.205 [INFO][4293] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.23.131/26] IPv6=[] ContainerID="02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0" HandleID="k8s-pod-network.02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0"
Oct 8 19:43:48.234648 containerd[1473]: 2024-10-08 19:43:48.209 [INFO][4239] k8s.go 386: Populated endpoint ContainerID="02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0" Namespace="kube-system" Pod="coredns-6f6b679f8f-pj827" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a9aae756-4b47-473b-9e46-2501e5f0f460", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-0-004c89fa14", ContainerID:"", Pod:"coredns-6f6b679f8f-pj827", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif45aa66db35", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:43:48.234648 containerd[1473]: 2024-10-08 19:43:48.209 [INFO][4239] k8s.go 387: Calico CNI using IPs: [192.168.23.131/32] ContainerID="02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0" Namespace="kube-system" Pod="coredns-6f6b679f8f-pj827" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0"
Oct 8 19:43:48.234648 containerd[1473]: 2024-10-08 19:43:48.209 [INFO][4239] dataplane_linux.go 68: Setting the host side veth name to calif45aa66db35 ContainerID="02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0" Namespace="kube-system" Pod="coredns-6f6b679f8f-pj827" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0"
Oct 8 19:43:48.234648 containerd[1473]: 2024-10-08 19:43:48.218 [INFO][4239] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0" Namespace="kube-system" Pod="coredns-6f6b679f8f-pj827" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0"
Oct 8 19:43:48.234648 containerd[1473]: 2024-10-08 19:43:48.219 [INFO][4239] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0" Namespace="kube-system" Pod="coredns-6f6b679f8f-pj827" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a9aae756-4b47-473b-9e46-2501e5f0f460", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-0-004c89fa14", ContainerID:"02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0", Pod:"coredns-6f6b679f8f-pj827", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif45aa66db35", MAC:"9e:55:9a:e4:ee:6e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:43:48.234648 containerd[1473]: 2024-10-08 19:43:48.232 [INFO][4239] k8s.go 500: Wrote updated endpoint to datastore ContainerID="02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0" Namespace="kube-system" Pod="coredns-6f6b679f8f-pj827" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0"
Oct 8 19:43:48.263405 containerd[1473]: time="2024-10-08T19:43:48.263122640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:43:48.263405 containerd[1473]: time="2024-10-08T19:43:48.263251924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:43:48.263405 containerd[1473]: time="2024-10-08T19:43:48.263288925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:43:48.263405 containerd[1473]: time="2024-10-08T19:43:48.263317325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:43:48.292021 systemd[1]: Started cri-containerd-02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0.scope - libcontainer container 02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0.
Oct 8 19:43:48.333897 systemd-networkd[1366]: calie6d43fda396: Link UP
Oct 8 19:43:48.334391 systemd-networkd[1366]: calie6d43fda396: Gained carrier
Oct 8 19:43:48.355209 containerd[1473]: 2024-10-08 19:43:48.043 [INFO][4248] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0 calico-kube-controllers-7569f59856- calico-system 554c64d3-a559-43d2-9d88-70455c3e4442 695 0 2024-10-08 19:43:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7569f59856 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975-2-2-0-004c89fa14 calico-kube-controllers-7569f59856-kdnqq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie6d43fda396 [] []}} ContainerID="d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e" Namespace="calico-system" Pod="calico-kube-controllers-7569f59856-kdnqq" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-"
Oct 8 19:43:48.355209 containerd[1473]: 2024-10-08 19:43:48.046 [INFO][4248] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e" Namespace="calico-system" Pod="calico-kube-controllers-7569f59856-kdnqq" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0"
Oct 8 19:43:48.355209 containerd[1473]: 2024-10-08 19:43:48.122 [INFO][4284] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e" HandleID="k8s-pod-network.d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e" Workload="ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0"
Oct 8 19:43:48.355209 containerd[1473]: 2024-10-08 19:43:48.160 [INFO][4284] ipam_plugin.go 270: Auto assigning IP ContainerID="d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e" HandleID="k8s-pod-network.d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e" Workload="ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003162f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975-2-2-0-004c89fa14", "pod":"calico-kube-controllers-7569f59856-kdnqq", "timestamp":"2024-10-08 19:43:48.121411216 +0000 UTC"}, Hostname:"ci-3975-2-2-0-004c89fa14", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 8 19:43:48.355209 containerd[1473]: 2024-10-08 19:43:48.160 [INFO][4284] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:43:48.355209 containerd[1473]: 2024-10-08 19:43:48.205 [INFO][4284] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:43:48.355209 containerd[1473]: 2024-10-08 19:43:48.206 [INFO][4284] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-0-004c89fa14'
Oct 8 19:43:48.355209 containerd[1473]: 2024-10-08 19:43:48.267 [INFO][4284] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e" host="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:48.355209 containerd[1473]: 2024-10-08 19:43:48.278 [INFO][4284] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:48.355209 containerd[1473]: 2024-10-08 19:43:48.290 [INFO][4284] ipam.go 489: Trying affinity for 192.168.23.128/26 host="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:48.355209 containerd[1473]: 2024-10-08 19:43:48.295 [INFO][4284] ipam.go 155: Attempting to load block cidr=192.168.23.128/26 host="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:48.355209 containerd[1473]: 2024-10-08 19:43:48.302 [INFO][4284] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.23.128/26 host="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:48.355209 containerd[1473]: 2024-10-08 19:43:48.303 [INFO][4284] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.128/26 handle="k8s-pod-network.d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e" host="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:48.355209 containerd[1473]: 2024-10-08 19:43:48.306 [INFO][4284] ipam.go 1685: Creating new handle: k8s-pod-network.d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e
Oct 8 19:43:48.355209 containerd[1473]: 2024-10-08 19:43:48.316 [INFO][4284] ipam.go 1203: Writing block in order to claim IPs block=192.168.23.128/26 handle="k8s-pod-network.d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e" host="ci-3975-2-2-0-004c89fa14"
Oct 8 19:43:48.355209 containerd[1473]: 2024-10-08 19:43:48.327 [INFO][4284] ipam.go 1216: Successfully claimed IPs: [192.168.23.132/26]
block=192.168.23.128/26 handle="k8s-pod-network.d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e" host="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:48.355209 containerd[1473]: 2024-10-08 19:43:48.327 [INFO][4284] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.132/26] handle="k8s-pod-network.d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e" host="ci-3975-2-2-0-004c89fa14" Oct 8 19:43:48.355209 containerd[1473]: 2024-10-08 19:43:48.327 [INFO][4284] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:43:48.355209 containerd[1473]: 2024-10-08 19:43:48.327 [INFO][4284] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.23.132/26] IPv6=[] ContainerID="d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e" HandleID="k8s-pod-network.d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e" Workload="ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0" Oct 8 19:43:48.356168 containerd[1473]: 2024-10-08 19:43:48.330 [INFO][4248] k8s.go 386: Populated endpoint ContainerID="d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e" Namespace="calico-system" Pod="calico-kube-controllers-7569f59856-kdnqq" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0", GenerateName:"calico-kube-controllers-7569f59856-", Namespace:"calico-system", SelfLink:"", UID:"554c64d3-a559-43d2-9d88-70455c3e4442", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"7569f59856", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-0-004c89fa14", ContainerID:"", Pod:"calico-kube-controllers-7569f59856-kdnqq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie6d43fda396", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:43:48.356168 containerd[1473]: 2024-10-08 19:43:48.330 [INFO][4248] k8s.go 387: Calico CNI using IPs: [192.168.23.132/32] ContainerID="d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e" Namespace="calico-system" Pod="calico-kube-controllers-7569f59856-kdnqq" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0" Oct 8 19:43:48.356168 containerd[1473]: 2024-10-08 19:43:48.330 [INFO][4248] dataplane_linux.go 68: Setting the host side veth name to calie6d43fda396 ContainerID="d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e" Namespace="calico-system" Pod="calico-kube-controllers-7569f59856-kdnqq" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0" Oct 8 19:43:48.356168 containerd[1473]: 2024-10-08 19:43:48.332 [INFO][4248] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e" Namespace="calico-system" Pod="calico-kube-controllers-7569f59856-kdnqq" 
WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0" Oct 8 19:43:48.356168 containerd[1473]: 2024-10-08 19:43:48.333 [INFO][4248] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e" Namespace="calico-system" Pod="calico-kube-controllers-7569f59856-kdnqq" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0", GenerateName:"calico-kube-controllers-7569f59856-", Namespace:"calico-system", SelfLink:"", UID:"554c64d3-a559-43d2-9d88-70455c3e4442", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7569f59856", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-0-004c89fa14", ContainerID:"d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e", Pod:"calico-kube-controllers-7569f59856-kdnqq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, 
InterfaceName:"calie6d43fda396", MAC:"f6:5d:3b:a5:9e:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:43:48.356168 containerd[1473]: 2024-10-08 19:43:48.351 [INFO][4248] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e" Namespace="calico-system" Pod="calico-kube-controllers-7569f59856-kdnqq" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0" Oct 8 19:43:48.384936 containerd[1473]: time="2024-10-08T19:43:48.384833901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-pj827,Uid:a9aae756-4b47-473b-9e46-2501e5f0f460,Namespace:kube-system,Attempt:1,} returns sandbox id \"02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0\"" Oct 8 19:43:48.392045 containerd[1473]: time="2024-10-08T19:43:48.391988834Z" level=info msg="CreateContainer within sandbox \"02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:43:48.402791 containerd[1473]: time="2024-10-08T19:43:48.402599931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:43:48.403071 containerd[1473]: time="2024-10-08T19:43:48.402797576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:48.405349 containerd[1473]: time="2024-10-08T19:43:48.403582995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:43:48.405349 containerd[1473]: time="2024-10-08T19:43:48.403606955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:48.414363 containerd[1473]: time="2024-10-08T19:43:48.414236412Z" level=info msg="CreateContainer within sandbox \"02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8809f4094482e4d27eb5142a0f99942bfb8a7fad9ae4cc7baef48fecdf8284b7\"" Oct 8 19:43:48.415124 containerd[1473]: time="2024-10-08T19:43:48.414984990Z" level=info msg="StartContainer for \"8809f4094482e4d27eb5142a0f99942bfb8a7fad9ae4cc7baef48fecdf8284b7\"" Oct 8 19:43:48.430324 systemd[1]: Started cri-containerd-d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e.scope - libcontainer container d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e. Oct 8 19:43:48.460279 systemd[1]: Started cri-containerd-8809f4094482e4d27eb5142a0f99942bfb8a7fad9ae4cc7baef48fecdf8284b7.scope - libcontainer container 8809f4094482e4d27eb5142a0f99942bfb8a7fad9ae4cc7baef48fecdf8284b7. 
Oct 8 19:43:48.491337 containerd[1473]: time="2024-10-08T19:43:48.491229272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7569f59856-kdnqq,Uid:554c64d3-a559-43d2-9d88-70455c3e4442,Namespace:calico-system,Attempt:1,} returns sandbox id \"d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e\"" Oct 8 19:43:48.499661 containerd[1473]: time="2024-10-08T19:43:48.499614155Z" level=info msg="StartContainer for \"8809f4094482e4d27eb5142a0f99942bfb8a7fad9ae4cc7baef48fecdf8284b7\" returns successfully" Oct 8 19:43:48.957216 systemd-networkd[1366]: vxlan.calico: Gained IPv6LL Oct 8 19:43:48.970225 kubelet[2660]: I1008 19:43:48.970131 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-pj827" podStartSLOduration=32.970111404 podStartE2EDuration="32.970111404s" podCreationTimestamp="2024-10-08 19:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:43:48.937159008 +0000 UTC m=+39.349089181" watchObservedRunningTime="2024-10-08 19:43:48.970111404 +0000 UTC m=+39.382041537" Oct 8 19:43:49.468396 systemd-networkd[1366]: calie6d43fda396: Gained IPv6LL Oct 8 19:43:50.235421 systemd-networkd[1366]: calif45aa66db35: Gained IPv6LL Oct 8 19:43:51.355392 containerd[1473]: time="2024-10-08T19:43:51.354417379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:51.362349 containerd[1473]: time="2024-10-08T19:43:51.362301853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Oct 8 19:43:51.363491 containerd[1473]: time="2024-10-08T19:43:51.363456921Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:51.365706 containerd[1473]: time="2024-10-08T19:43:51.365656735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:51.366546 containerd[1473]: time="2024-10-08T19:43:51.366507556Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 3.232113546s" Oct 8 19:43:51.366709 containerd[1473]: time="2024-10-08T19:43:51.366686121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Oct 8 19:43:51.368340 containerd[1473]: time="2024-10-08T19:43:51.368308401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 8 19:43:51.370003 containerd[1473]: time="2024-10-08T19:43:51.369279145Z" level=info msg="CreateContainer within sandbox \"d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 8 19:43:51.389537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount289629753.mount: Deactivated successfully. 
Oct 8 19:43:51.396394 containerd[1473]: time="2024-10-08T19:43:51.396346851Z" level=info msg="CreateContainer within sandbox \"d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fe0d76c33c07f68e14bfd378ad1d9f27a08c3af6fe9bca956aaa0460d5991c73\"" Oct 8 19:43:51.397881 containerd[1473]: time="2024-10-08T19:43:51.397838768Z" level=info msg="StartContainer for \"fe0d76c33c07f68e14bfd378ad1d9f27a08c3af6fe9bca956aaa0460d5991c73\"" Oct 8 19:43:51.438175 systemd[1]: Started cri-containerd-fe0d76c33c07f68e14bfd378ad1d9f27a08c3af6fe9bca956aaa0460d5991c73.scope - libcontainer container fe0d76c33c07f68e14bfd378ad1d9f27a08c3af6fe9bca956aaa0460d5991c73. Oct 8 19:43:51.468721 containerd[1473]: time="2024-10-08T19:43:51.468669873Z" level=info msg="StartContainer for \"fe0d76c33c07f68e14bfd378ad1d9f27a08c3af6fe9bca956aaa0460d5991c73\" returns successfully" Oct 8 19:43:51.805892 kubelet[2660]: I1008 19:43:51.805435 2660 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 8 19:43:51.805892 kubelet[2660]: I1008 19:43:51.805468 2660 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 8 19:43:51.941087 kubelet[2660]: I1008 19:43:51.940759 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4qp66" podStartSLOduration=23.789360407 podStartE2EDuration="28.94073494s" podCreationTimestamp="2024-10-08 19:43:23 +0000 UTC" firstStartedPulling="2024-10-08 19:43:46.21620709 +0000 UTC m=+36.628137263" lastFinishedPulling="2024-10-08 19:43:51.367581623 +0000 UTC m=+41.779511796" observedRunningTime="2024-10-08 19:43:51.939604472 +0000 UTC m=+42.351534725" watchObservedRunningTime="2024-10-08 19:43:51.94073494 +0000 UTC 
m=+42.352665113" Oct 8 19:43:54.117844 containerd[1473]: time="2024-10-08T19:43:54.117791804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:54.120057 containerd[1473]: time="2024-10-08T19:43:54.120000899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Oct 8 19:43:54.120964 containerd[1473]: time="2024-10-08T19:43:54.120503632Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:54.125529 containerd[1473]: time="2024-10-08T19:43:54.124579694Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:54.126270 containerd[1473]: time="2024-10-08T19:43:54.125820765Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 2.757470684s" Oct 8 19:43:54.126270 containerd[1473]: time="2024-10-08T19:43:54.125865966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Oct 8 19:43:54.142374 containerd[1473]: time="2024-10-08T19:43:54.142290058Z" level=info msg="CreateContainer within sandbox \"d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 
8 19:43:54.170290 containerd[1473]: time="2024-10-08T19:43:54.170153956Z" level=info msg="CreateContainer within sandbox \"d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9edc680675d54cfdd51ed69b224f2b738bd4baa521a42412d39caa06e0cb5c62\"" Oct 8 19:43:54.171147 containerd[1473]: time="2024-10-08T19:43:54.170881254Z" level=info msg="StartContainer for \"9edc680675d54cfdd51ed69b224f2b738bd4baa521a42412d39caa06e0cb5c62\"" Oct 8 19:43:54.213127 systemd[1]: Started cri-containerd-9edc680675d54cfdd51ed69b224f2b738bd4baa521a42412d39caa06e0cb5c62.scope - libcontainer container 9edc680675d54cfdd51ed69b224f2b738bd4baa521a42412d39caa06e0cb5c62. Oct 8 19:43:54.310760 containerd[1473]: time="2024-10-08T19:43:54.310574594Z" level=info msg="StartContainer for \"9edc680675d54cfdd51ed69b224f2b738bd4baa521a42412d39caa06e0cb5c62\" returns successfully" Oct 8 19:43:55.046231 kubelet[2660]: I1008 19:43:55.046166 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7569f59856-kdnqq" podStartSLOduration=26.410174901 podStartE2EDuration="32.04614927s" podCreationTimestamp="2024-10-08 19:43:23 +0000 UTC" firstStartedPulling="2024-10-08 19:43:48.493157799 +0000 UTC m=+38.905087972" lastFinishedPulling="2024-10-08 19:43:54.129132208 +0000 UTC m=+44.541062341" observedRunningTime="2024-10-08 19:43:54.990765236 +0000 UTC m=+45.402695409" watchObservedRunningTime="2024-10-08 19:43:55.04614927 +0000 UTC m=+45.458079443" Oct 8 19:44:09.725047 containerd[1473]: time="2024-10-08T19:44:09.724437280Z" level=info msg="StopPodSandbox for \"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\"" Oct 8 19:44:09.812597 containerd[1473]: 2024-10-08 19:44:09.769 [WARNING][4619] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36d90f88-3457-4191-84d5-72e6469f1596", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-0-004c89fa14", ContainerID:"d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88", Pod:"csi-node-driver-4qp66", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.23.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliabe63571239", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:09.812597 containerd[1473]: 2024-10-08 19:44:09.769 [INFO][4619] k8s.go 608: Cleaning up netns ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Oct 8 19:44:09.812597 containerd[1473]: 2024-10-08 19:44:09.769 [INFO][4619] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" iface="eth0" netns="" Oct 8 19:44:09.812597 containerd[1473]: 2024-10-08 19:44:09.769 [INFO][4619] k8s.go 615: Releasing IP address(es) ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Oct 8 19:44:09.812597 containerd[1473]: 2024-10-08 19:44:09.769 [INFO][4619] utils.go 188: Calico CNI releasing IP address ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Oct 8 19:44:09.812597 containerd[1473]: 2024-10-08 19:44:09.795 [INFO][4625] ipam_plugin.go 417: Releasing address using handleID ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" HandleID="k8s-pod-network.815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Workload="ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0" Oct 8 19:44:09.812597 containerd[1473]: 2024-10-08 19:44:09.795 [INFO][4625] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:09.812597 containerd[1473]: 2024-10-08 19:44:09.795 [INFO][4625] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:09.812597 containerd[1473]: 2024-10-08 19:44:09.807 [WARNING][4625] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" HandleID="k8s-pod-network.815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Workload="ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0" Oct 8 19:44:09.812597 containerd[1473]: 2024-10-08 19:44:09.808 [INFO][4625] ipam_plugin.go 445: Releasing address using workloadID ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" HandleID="k8s-pod-network.815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Workload="ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0" Oct 8 19:44:09.812597 containerd[1473]: 2024-10-08 19:44:09.809 [INFO][4625] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:09.812597 containerd[1473]: 2024-10-08 19:44:09.811 [INFO][4619] k8s.go 621: Teardown processing complete. ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Oct 8 19:44:09.813228 containerd[1473]: time="2024-10-08T19:44:09.812637351Z" level=info msg="TearDown network for sandbox \"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\" successfully" Oct 8 19:44:09.813228 containerd[1473]: time="2024-10-08T19:44:09.812665112Z" level=info msg="StopPodSandbox for \"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\" returns successfully" Oct 8 19:44:09.813973 containerd[1473]: time="2024-10-08T19:44:09.813770662Z" level=info msg="RemovePodSandbox for \"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\"" Oct 8 19:44:09.817532 containerd[1473]: time="2024-10-08T19:44:09.813836663Z" level=info msg="Forcibly stopping sandbox \"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\"" Oct 8 19:44:09.900957 containerd[1473]: 2024-10-08 19:44:09.859 [WARNING][4644] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36d90f88-3457-4191-84d5-72e6469f1596", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-0-004c89fa14", ContainerID:"d24851f1cf4aaf0a610aeb40570555e9a8eebe6919960110c96c086696f93f88", Pod:"csi-node-driver-4qp66", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.23.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliabe63571239", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:09.900957 containerd[1473]: 2024-10-08 19:44:09.859 [INFO][4644] k8s.go 608: Cleaning up netns ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Oct 8 19:44:09.900957 containerd[1473]: 2024-10-08 19:44:09.859 [INFO][4644] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" iface="eth0" netns="" Oct 8 19:44:09.900957 containerd[1473]: 2024-10-08 19:44:09.859 [INFO][4644] k8s.go 615: Releasing IP address(es) ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Oct 8 19:44:09.900957 containerd[1473]: 2024-10-08 19:44:09.859 [INFO][4644] utils.go 188: Calico CNI releasing IP address ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Oct 8 19:44:09.900957 containerd[1473]: 2024-10-08 19:44:09.879 [INFO][4650] ipam_plugin.go 417: Releasing address using handleID ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" HandleID="k8s-pod-network.815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Workload="ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0" Oct 8 19:44:09.900957 containerd[1473]: 2024-10-08 19:44:09.880 [INFO][4650] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:09.900957 containerd[1473]: 2024-10-08 19:44:09.880 [INFO][4650] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:09.900957 containerd[1473]: 2024-10-08 19:44:09.892 [WARNING][4650] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" HandleID="k8s-pod-network.815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Workload="ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0" Oct 8 19:44:09.900957 containerd[1473]: 2024-10-08 19:44:09.892 [INFO][4650] ipam_plugin.go 445: Releasing address using workloadID ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" HandleID="k8s-pod-network.815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Workload="ci--3975--2--2--0--004c89fa14-k8s-csi--node--driver--4qp66-eth0" Oct 8 19:44:09.900957 containerd[1473]: 2024-10-08 19:44:09.896 [INFO][4650] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:09.900957 containerd[1473]: 2024-10-08 19:44:09.899 [INFO][4644] k8s.go 621: Teardown processing complete. ContainerID="815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8" Oct 8 19:44:09.901487 containerd[1473]: time="2024-10-08T19:44:09.901004548Z" level=info msg="TearDown network for sandbox \"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\" successfully" Oct 8 19:44:09.905100 containerd[1473]: time="2024-10-08T19:44:09.905019575Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:44:09.905100 containerd[1473]: time="2024-10-08T19:44:09.905109217Z" level=info msg="RemovePodSandbox \"815ee595e3d6ba11ea600857f6bb97cbc76e9ff9dd025f04a70ae167f189fdf8\" returns successfully" Oct 8 19:44:09.905998 containerd[1473]: time="2024-10-08T19:44:09.905813436Z" level=info msg="StopPodSandbox for \"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\"" Oct 8 19:44:10.006557 containerd[1473]: 2024-10-08 19:44:09.953 [WARNING][4668] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0", GenerateName:"calico-kube-controllers-7569f59856-", Namespace:"calico-system", SelfLink:"", UID:"554c64d3-a559-43d2-9d88-70455c3e4442", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7569f59856", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-0-004c89fa14", ContainerID:"d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e", Pod:"calico-kube-controllers-7569f59856-kdnqq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.132/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie6d43fda396", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:10.006557 containerd[1473]: 2024-10-08 19:44:09.953 [INFO][4668] k8s.go 608: Cleaning up netns ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" Oct 8 19:44:10.006557 containerd[1473]: 2024-10-08 19:44:09.953 [INFO][4668] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" iface="eth0" netns="" Oct 8 19:44:10.006557 containerd[1473]: 2024-10-08 19:44:09.953 [INFO][4668] k8s.go 615: Releasing IP address(es) ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" Oct 8 19:44:10.006557 containerd[1473]: 2024-10-08 19:44:09.953 [INFO][4668] utils.go 188: Calico CNI releasing IP address ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" Oct 8 19:44:10.006557 containerd[1473]: 2024-10-08 19:44:09.987 [INFO][4674] ipam_plugin.go 417: Releasing address using handleID ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" HandleID="k8s-pod-network.95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" Workload="ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0" Oct 8 19:44:10.006557 containerd[1473]: 2024-10-08 19:44:09.988 [INFO][4674] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:10.006557 containerd[1473]: 2024-10-08 19:44:09.988 [INFO][4674] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:10.006557 containerd[1473]: 2024-10-08 19:44:09.998 [WARNING][4674] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" HandleID="k8s-pod-network.95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" Workload="ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0" Oct 8 19:44:10.006557 containerd[1473]: 2024-10-08 19:44:09.998 [INFO][4674] ipam_plugin.go 445: Releasing address using workloadID ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" HandleID="k8s-pod-network.95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" Workload="ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0" Oct 8 19:44:10.006557 containerd[1473]: 2024-10-08 19:44:10.001 [INFO][4674] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:10.006557 containerd[1473]: 2024-10-08 19:44:10.004 [INFO][4668] k8s.go 621: Teardown processing complete. ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" Oct 8 19:44:10.007818 containerd[1473]: time="2024-10-08T19:44:10.007173899Z" level=info msg="TearDown network for sandbox \"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\" successfully" Oct 8 19:44:10.007818 containerd[1473]: time="2024-10-08T19:44:10.007210100Z" level=info msg="StopPodSandbox for \"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\" returns successfully" Oct 8 19:44:10.008152 containerd[1473]: time="2024-10-08T19:44:10.007769275Z" level=info msg="RemovePodSandbox for \"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\"" Oct 8 19:44:10.009671 containerd[1473]: time="2024-10-08T19:44:10.008115724Z" level=info msg="Forcibly stopping sandbox \"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\"" Oct 8 19:44:10.105187 containerd[1473]: 2024-10-08 19:44:10.060 [WARNING][4692] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0", GenerateName:"calico-kube-controllers-7569f59856-", Namespace:"calico-system", SelfLink:"", UID:"554c64d3-a559-43d2-9d88-70455c3e4442", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7569f59856", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-0-004c89fa14", ContainerID:"d56ffb8ab1d3ac9f779982854e03ee35ab7540875dded3bd725a9018a704be6e", Pod:"calico-kube-controllers-7569f59856-kdnqq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie6d43fda396", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:10.105187 containerd[1473]: 2024-10-08 19:44:10.060 [INFO][4692] k8s.go 608: Cleaning up netns ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" Oct 8 19:44:10.105187 containerd[1473]: 2024-10-08 19:44:10.060 [INFO][4692] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" iface="eth0" netns="" Oct 8 19:44:10.105187 containerd[1473]: 2024-10-08 19:44:10.060 [INFO][4692] k8s.go 615: Releasing IP address(es) ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" Oct 8 19:44:10.105187 containerd[1473]: 2024-10-08 19:44:10.060 [INFO][4692] utils.go 188: Calico CNI releasing IP address ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" Oct 8 19:44:10.105187 containerd[1473]: 2024-10-08 19:44:10.085 [INFO][4698] ipam_plugin.go 417: Releasing address using handleID ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" HandleID="k8s-pod-network.95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" Workload="ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0" Oct 8 19:44:10.105187 containerd[1473]: 2024-10-08 19:44:10.085 [INFO][4698] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:10.105187 containerd[1473]: 2024-10-08 19:44:10.085 [INFO][4698] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:10.105187 containerd[1473]: 2024-10-08 19:44:10.096 [WARNING][4698] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" HandleID="k8s-pod-network.95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" Workload="ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0" Oct 8 19:44:10.105187 containerd[1473]: 2024-10-08 19:44:10.097 [INFO][4698] ipam_plugin.go 445: Releasing address using workloadID ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" HandleID="k8s-pod-network.95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" Workload="ci--3975--2--2--0--004c89fa14-k8s-calico--kube--controllers--7569f59856--kdnqq-eth0" Oct 8 19:44:10.105187 containerd[1473]: 2024-10-08 19:44:10.100 [INFO][4698] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:10.105187 containerd[1473]: 2024-10-08 19:44:10.103 [INFO][4692] k8s.go 621: Teardown processing complete. ContainerID="95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc" Oct 8 19:44:10.105668 containerd[1473]: time="2024-10-08T19:44:10.105226522Z" level=info msg="TearDown network for sandbox \"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\" successfully" Oct 8 19:44:10.108746 containerd[1473]: time="2024-10-08T19:44:10.108664814Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:44:10.108869 containerd[1473]: time="2024-10-08T19:44:10.108787977Z" level=info msg="RemovePodSandbox \"95418ba50408d1750b0d59856a7ea6ba833d17289606ef5aeb54453e748c8fdc\" returns successfully" Oct 8 19:44:10.109772 containerd[1473]: time="2024-10-08T19:44:10.109398913Z" level=info msg="StopPodSandbox for \"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\"" Oct 8 19:44:10.208126 containerd[1473]: 2024-10-08 19:44:10.166 [WARNING][4716] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"0ce72720-be2a-4924-bded-181fc38d374d", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-0-004c89fa14", ContainerID:"4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e", Pod:"coredns-6f6b679f8f-drcv7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5248bcce10b", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:10.208126 containerd[1473]: 2024-10-08 19:44:10.167 [INFO][4716] k8s.go 608: Cleaning up netns ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Oct 8 19:44:10.208126 containerd[1473]: 2024-10-08 19:44:10.167 [INFO][4716] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" iface="eth0" netns="" Oct 8 19:44:10.208126 containerd[1473]: 2024-10-08 19:44:10.167 [INFO][4716] k8s.go 615: Releasing IP address(es) ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Oct 8 19:44:10.208126 containerd[1473]: 2024-10-08 19:44:10.167 [INFO][4716] utils.go 188: Calico CNI releasing IP address ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Oct 8 19:44:10.208126 containerd[1473]: 2024-10-08 19:44:10.190 [INFO][4722] ipam_plugin.go 417: Releasing address using handleID ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" HandleID="k8s-pod-network.4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0" Oct 8 19:44:10.208126 containerd[1473]: 2024-10-08 19:44:10.190 [INFO][4722] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:10.208126 containerd[1473]: 2024-10-08 19:44:10.190 [INFO][4722] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:44:10.208126 containerd[1473]: 2024-10-08 19:44:10.202 [WARNING][4722] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" HandleID="k8s-pod-network.4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0" Oct 8 19:44:10.208126 containerd[1473]: 2024-10-08 19:44:10.202 [INFO][4722] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" HandleID="k8s-pod-network.4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0" Oct 8 19:44:10.208126 containerd[1473]: 2024-10-08 19:44:10.204 [INFO][4722] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:10.208126 containerd[1473]: 2024-10-08 19:44:10.206 [INFO][4716] k8s.go 621: Teardown processing complete. 
ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Oct 8 19:44:10.208801 containerd[1473]: time="2024-10-08T19:44:10.208168715Z" level=info msg="TearDown network for sandbox \"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\" successfully" Oct 8 19:44:10.208801 containerd[1473]: time="2024-10-08T19:44:10.208197556Z" level=info msg="StopPodSandbox for \"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\" returns successfully" Oct 8 19:44:10.208801 containerd[1473]: time="2024-10-08T19:44:10.208705409Z" level=info msg="RemovePodSandbox for \"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\"" Oct 8 19:44:10.208801 containerd[1473]: time="2024-10-08T19:44:10.208744250Z" level=info msg="Forcibly stopping sandbox \"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\"" Oct 8 19:44:10.297219 containerd[1473]: 2024-10-08 19:44:10.260 [WARNING][4740] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"0ce72720-be2a-4924-bded-181fc38d374d", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-0-004c89fa14", ContainerID:"4f91c9a5c5c1262d9af2d2357f130f638dce3359c2fdb8c55d95ba23fe20c54e", Pod:"coredns-6f6b679f8f-drcv7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5248bcce10b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:10.297219 containerd[1473]: 2024-10-08 19:44:10.260 [INFO][4740] k8s.go 
608: Cleaning up netns ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Oct 8 19:44:10.297219 containerd[1473]: 2024-10-08 19:44:10.260 [INFO][4740] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" iface="eth0" netns="" Oct 8 19:44:10.297219 containerd[1473]: 2024-10-08 19:44:10.260 [INFO][4740] k8s.go 615: Releasing IP address(es) ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Oct 8 19:44:10.297219 containerd[1473]: 2024-10-08 19:44:10.261 [INFO][4740] utils.go 188: Calico CNI releasing IP address ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Oct 8 19:44:10.297219 containerd[1473]: 2024-10-08 19:44:10.281 [INFO][4746] ipam_plugin.go 417: Releasing address using handleID ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" HandleID="k8s-pod-network.4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0" Oct 8 19:44:10.297219 containerd[1473]: 2024-10-08 19:44:10.282 [INFO][4746] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:10.297219 containerd[1473]: 2024-10-08 19:44:10.282 [INFO][4746] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:10.297219 containerd[1473]: 2024-10-08 19:44:10.290 [WARNING][4746] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" HandleID="k8s-pod-network.4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0" Oct 8 19:44:10.297219 containerd[1473]: 2024-10-08 19:44:10.290 [INFO][4746] ipam_plugin.go 445: Releasing address using workloadID ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" HandleID="k8s-pod-network.4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--drcv7-eth0" Oct 8 19:44:10.297219 containerd[1473]: 2024-10-08 19:44:10.292 [INFO][4746] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:10.297219 containerd[1473]: 2024-10-08 19:44:10.295 [INFO][4740] k8s.go 621: Teardown processing complete. ContainerID="4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48" Oct 8 19:44:10.297219 containerd[1473]: time="2024-10-08T19:44:10.297183416Z" level=info msg="TearDown network for sandbox \"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\" successfully" Oct 8 19:44:10.301514 containerd[1473]: time="2024-10-08T19:44:10.301458490Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:44:10.301608 containerd[1473]: time="2024-10-08T19:44:10.301535052Z" level=info msg="RemovePodSandbox \"4769be1e380cefa05ed14959caa9c980ecf8e698110e1d274c48924b81289f48\" returns successfully" Oct 8 19:44:10.302140 containerd[1473]: time="2024-10-08T19:44:10.302101467Z" level=info msg="StopPodSandbox for \"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\"" Oct 8 19:44:10.400519 containerd[1473]: 2024-10-08 19:44:10.342 [WARNING][4764] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a9aae756-4b47-473b-9e46-2501e5f0f460", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-0-004c89fa14", ContainerID:"02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0", Pod:"coredns-6f6b679f8f-pj827", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif45aa66db35", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:10.400519 containerd[1473]: 2024-10-08 19:44:10.343 [INFO][4764] k8s.go 608: Cleaning up netns ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" Oct 8 19:44:10.400519 containerd[1473]: 2024-10-08 19:44:10.343 [INFO][4764] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" iface="eth0" netns="" Oct 8 19:44:10.400519 containerd[1473]: 2024-10-08 19:44:10.343 [INFO][4764] k8s.go 615: Releasing IP address(es) ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" Oct 8 19:44:10.400519 containerd[1473]: 2024-10-08 19:44:10.343 [INFO][4764] utils.go 188: Calico CNI releasing IP address ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" Oct 8 19:44:10.400519 containerd[1473]: 2024-10-08 19:44:10.372 [INFO][4770] ipam_plugin.go 417: Releasing address using handleID ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" HandleID="k8s-pod-network.b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0" Oct 8 19:44:10.400519 containerd[1473]: 2024-10-08 19:44:10.374 [INFO][4770] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:10.400519 containerd[1473]: 2024-10-08 19:44:10.374 [INFO][4770] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:44:10.400519 containerd[1473]: 2024-10-08 19:44:10.393 [WARNING][4770] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" HandleID="k8s-pod-network.b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0" Oct 8 19:44:10.400519 containerd[1473]: 2024-10-08 19:44:10.393 [INFO][4770] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" HandleID="k8s-pod-network.b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0" Oct 8 19:44:10.400519 containerd[1473]: 2024-10-08 19:44:10.397 [INFO][4770] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:10.400519 containerd[1473]: 2024-10-08 19:44:10.398 [INFO][4764] k8s.go 621: Teardown processing complete. 
ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" Oct 8 19:44:10.400519 containerd[1473]: time="2024-10-08T19:44:10.400397096Z" level=info msg="TearDown network for sandbox \"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\" successfully" Oct 8 19:44:10.400519 containerd[1473]: time="2024-10-08T19:44:10.400421577Z" level=info msg="StopPodSandbox for \"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\" returns successfully" Oct 8 19:44:10.401324 containerd[1473]: time="2024-10-08T19:44:10.401284480Z" level=info msg="RemovePodSandbox for \"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\"" Oct 8 19:44:10.401373 containerd[1473]: time="2024-10-08T19:44:10.401326281Z" level=info msg="Forcibly stopping sandbox \"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\"" Oct 8 19:44:10.483734 containerd[1473]: 2024-10-08 19:44:10.445 [WARNING][4789] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"a9aae756-4b47-473b-9e46-2501e5f0f460", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-0-004c89fa14", ContainerID:"02cb4bfeddc10cf0b88be689e8412d29dd3901feb135f409f99d427d10809cf0", Pod:"coredns-6f6b679f8f-pj827", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif45aa66db35", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:10.483734 containerd[1473]: 2024-10-08 19:44:10.446 [INFO][4789] k8s.go 
608: Cleaning up netns ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" Oct 8 19:44:10.483734 containerd[1473]: 2024-10-08 19:44:10.446 [INFO][4789] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" iface="eth0" netns="" Oct 8 19:44:10.483734 containerd[1473]: 2024-10-08 19:44:10.446 [INFO][4789] k8s.go 615: Releasing IP address(es) ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" Oct 8 19:44:10.483734 containerd[1473]: 2024-10-08 19:44:10.446 [INFO][4789] utils.go 188: Calico CNI releasing IP address ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" Oct 8 19:44:10.483734 containerd[1473]: 2024-10-08 19:44:10.465 [INFO][4795] ipam_plugin.go 417: Releasing address using handleID ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" HandleID="k8s-pod-network.b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0" Oct 8 19:44:10.483734 containerd[1473]: 2024-10-08 19:44:10.466 [INFO][4795] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:10.483734 containerd[1473]: 2024-10-08 19:44:10.466 [INFO][4795] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:10.483734 containerd[1473]: 2024-10-08 19:44:10.476 [WARNING][4795] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" HandleID="k8s-pod-network.b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0" Oct 8 19:44:10.483734 containerd[1473]: 2024-10-08 19:44:10.476 [INFO][4795] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" HandleID="k8s-pod-network.b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" Workload="ci--3975--2--2--0--004c89fa14-k8s-coredns--6f6b679f8f--pj827-eth0" Oct 8 19:44:10.483734 containerd[1473]: 2024-10-08 19:44:10.478 [INFO][4795] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:10.483734 containerd[1473]: 2024-10-08 19:44:10.481 [INFO][4789] k8s.go 621: Teardown processing complete. ContainerID="b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300" Oct 8 19:44:10.484628 containerd[1473]: time="2024-10-08T19:44:10.483833768Z" level=info msg="TearDown network for sandbox \"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\" successfully" Oct 8 19:44:10.488410 containerd[1473]: time="2024-10-08T19:44:10.488319888Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 19:44:10.488537 containerd[1473]: time="2024-10-08T19:44:10.488424691Z" level=info msg="RemovePodSandbox \"b648acde43c4b398717bb05c7e523f90036b2e464b1164ee45709a0e49e77300\" returns successfully" Oct 8 19:44:17.312630 systemd[1]: Created slice kubepods-besteffort-pod57d10145_a594_49bc_bfed_f97bbe86b467.slice - libcontainer container kubepods-besteffort-pod57d10145_a594_49bc_bfed_f97bbe86b467.slice. 
Oct 8 19:44:17.332187 systemd[1]: Created slice kubepods-besteffort-pod81a964c5_535b_42a7_a861_5eb046c191a4.slice - libcontainer container kubepods-besteffort-pod81a964c5_535b_42a7_a861_5eb046c191a4.slice. Oct 8 19:44:17.385282 kubelet[2660]: I1008 19:44:17.385162 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/81a964c5-535b-42a7-a861-5eb046c191a4-calico-apiserver-certs\") pod \"calico-apiserver-86f58ccb79-5qvmq\" (UID: \"81a964c5-535b-42a7-a861-5eb046c191a4\") " pod="calico-apiserver/calico-apiserver-86f58ccb79-5qvmq" Oct 8 19:44:17.385282 kubelet[2660]: I1008 19:44:17.385276 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjtgd\" (UniqueName: \"kubernetes.io/projected/81a964c5-535b-42a7-a861-5eb046c191a4-kube-api-access-bjtgd\") pod \"calico-apiserver-86f58ccb79-5qvmq\" (UID: \"81a964c5-535b-42a7-a861-5eb046c191a4\") " pod="calico-apiserver/calico-apiserver-86f58ccb79-5qvmq" Oct 8 19:44:17.386577 kubelet[2660]: I1008 19:44:17.385416 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/57d10145-a594-49bc-bfed-f97bbe86b467-calico-apiserver-certs\") pod \"calico-apiserver-86f58ccb79-xws6w\" (UID: \"57d10145-a594-49bc-bfed-f97bbe86b467\") " pod="calico-apiserver/calico-apiserver-86f58ccb79-xws6w" Oct 8 19:44:17.386577 kubelet[2660]: I1008 19:44:17.385496 2660 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48r5s\" (UniqueName: \"kubernetes.io/projected/57d10145-a594-49bc-bfed-f97bbe86b467-kube-api-access-48r5s\") pod \"calico-apiserver-86f58ccb79-xws6w\" (UID: \"57d10145-a594-49bc-bfed-f97bbe86b467\") " pod="calico-apiserver/calico-apiserver-86f58ccb79-xws6w" Oct 8 19:44:17.621806 containerd[1473]: 
time="2024-10-08T19:44:17.621665945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f58ccb79-xws6w,Uid:57d10145-a594-49bc-bfed-f97bbe86b467,Namespace:calico-apiserver,Attempt:0,}" Oct 8 19:44:17.640760 containerd[1473]: time="2024-10-08T19:44:17.640702824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f58ccb79-5qvmq,Uid:81a964c5-535b-42a7-a861-5eb046c191a4,Namespace:calico-apiserver,Attempt:0,}" Oct 8 19:44:17.812097 systemd-networkd[1366]: cali202e70d4845: Link UP Oct 8 19:44:17.813443 systemd-networkd[1366]: cali202e70d4845: Gained carrier Oct 8 19:44:17.837316 containerd[1473]: 2024-10-08 19:44:17.689 [INFO][4837] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--xws6w-eth0 calico-apiserver-86f58ccb79- calico-apiserver 57d10145-a594-49bc-bfed-f97bbe86b467 840 0 2024-10-08 19:44:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86f58ccb79 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975-2-2-0-004c89fa14 calico-apiserver-86f58ccb79-xws6w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali202e70d4845 [] []}} ContainerID="48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace" Namespace="calico-apiserver" Pod="calico-apiserver-86f58ccb79-xws6w" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--xws6w-" Oct 8 19:44:17.837316 containerd[1473]: 2024-10-08 19:44:17.689 [INFO][4837] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace" Namespace="calico-apiserver" Pod="calico-apiserver-86f58ccb79-xws6w" 
WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--xws6w-eth0" Oct 8 19:44:17.837316 containerd[1473]: 2024-10-08 19:44:17.741 [INFO][4859] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace" HandleID="k8s-pod-network.48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace" Workload="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--xws6w-eth0" Oct 8 19:44:17.837316 containerd[1473]: 2024-10-08 19:44:17.760 [INFO][4859] ipam_plugin.go 270: Auto assigning IP ContainerID="48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace" HandleID="k8s-pod-network.48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace" Workload="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--xws6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316cb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975-2-2-0-004c89fa14", "pod":"calico-apiserver-86f58ccb79-xws6w", "timestamp":"2024-10-08 19:44:17.741890342 +0000 UTC"}, Hostname:"ci-3975-2-2-0-004c89fa14", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:44:17.837316 containerd[1473]: 2024-10-08 19:44:17.760 [INFO][4859] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:17.837316 containerd[1473]: 2024-10-08 19:44:17.760 [INFO][4859] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:44:17.837316 containerd[1473]: 2024-10-08 19:44:17.760 [INFO][4859] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-0-004c89fa14' Oct 8 19:44:17.837316 containerd[1473]: 2024-10-08 19:44:17.764 [INFO][4859] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace" host="ci-3975-2-2-0-004c89fa14" Oct 8 19:44:17.837316 containerd[1473]: 2024-10-08 19:44:17.772 [INFO][4859] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-0-004c89fa14" Oct 8 19:44:17.837316 containerd[1473]: 2024-10-08 19:44:17.780 [INFO][4859] ipam.go 489: Trying affinity for 192.168.23.128/26 host="ci-3975-2-2-0-004c89fa14" Oct 8 19:44:17.837316 containerd[1473]: 2024-10-08 19:44:17.783 [INFO][4859] ipam.go 155: Attempting to load block cidr=192.168.23.128/26 host="ci-3975-2-2-0-004c89fa14" Oct 8 19:44:17.837316 containerd[1473]: 2024-10-08 19:44:17.786 [INFO][4859] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.23.128/26 host="ci-3975-2-2-0-004c89fa14" Oct 8 19:44:17.837316 containerd[1473]: 2024-10-08 19:44:17.786 [INFO][4859] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.128/26 handle="k8s-pod-network.48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace" host="ci-3975-2-2-0-004c89fa14" Oct 8 19:44:17.837316 containerd[1473]: 2024-10-08 19:44:17.788 [INFO][4859] ipam.go 1685: Creating new handle: k8s-pod-network.48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace Oct 8 19:44:17.837316 containerd[1473]: 2024-10-08 19:44:17.798 [INFO][4859] ipam.go 1203: Writing block in order to claim IPs block=192.168.23.128/26 handle="k8s-pod-network.48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace" host="ci-3975-2-2-0-004c89fa14" Oct 8 19:44:17.837316 containerd[1473]: 2024-10-08 19:44:17.807 [INFO][4859] ipam.go 1216: Successfully claimed IPs: [192.168.23.133/26] 
block=192.168.23.128/26 handle="k8s-pod-network.48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace" host="ci-3975-2-2-0-004c89fa14" Oct 8 19:44:17.837316 containerd[1473]: 2024-10-08 19:44:17.807 [INFO][4859] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.133/26] handle="k8s-pod-network.48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace" host="ci-3975-2-2-0-004c89fa14" Oct 8 19:44:17.837316 containerd[1473]: 2024-10-08 19:44:17.807 [INFO][4859] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:17.837316 containerd[1473]: 2024-10-08 19:44:17.807 [INFO][4859] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.23.133/26] IPv6=[] ContainerID="48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace" HandleID="k8s-pod-network.48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace" Workload="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--xws6w-eth0" Oct 8 19:44:17.837890 containerd[1473]: 2024-10-08 19:44:17.809 [INFO][4837] k8s.go 386: Populated endpoint ContainerID="48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace" Namespace="calico-apiserver" Pod="calico-apiserver-86f58ccb79-xws6w" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--xws6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--xws6w-eth0", GenerateName:"calico-apiserver-86f58ccb79-", Namespace:"calico-apiserver", SelfLink:"", UID:"57d10145-a594-49bc-bfed-f97bbe86b467", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f58ccb79", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-0-004c89fa14", ContainerID:"", Pod:"calico-apiserver-86f58ccb79-xws6w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali202e70d4845", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:17.837890 containerd[1473]: 2024-10-08 19:44:17.809 [INFO][4837] k8s.go 387: Calico CNI using IPs: [192.168.23.133/32] ContainerID="48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace" Namespace="calico-apiserver" Pod="calico-apiserver-86f58ccb79-xws6w" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--xws6w-eth0" Oct 8 19:44:17.837890 containerd[1473]: 2024-10-08 19:44:17.809 [INFO][4837] dataplane_linux.go 68: Setting the host side veth name to cali202e70d4845 ContainerID="48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace" Namespace="calico-apiserver" Pod="calico-apiserver-86f58ccb79-xws6w" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--xws6w-eth0" Oct 8 19:44:17.837890 containerd[1473]: 2024-10-08 19:44:17.813 [INFO][4837] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace" Namespace="calico-apiserver" Pod="calico-apiserver-86f58ccb79-xws6w" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--xws6w-eth0" Oct 8 19:44:17.837890 containerd[1473]: 2024-10-08 
19:44:17.814 [INFO][4837] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace" Namespace="calico-apiserver" Pod="calico-apiserver-86f58ccb79-xws6w" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--xws6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--xws6w-eth0", GenerateName:"calico-apiserver-86f58ccb79-", Namespace:"calico-apiserver", SelfLink:"", UID:"57d10145-a594-49bc-bfed-f97bbe86b467", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f58ccb79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-0-004c89fa14", ContainerID:"48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace", Pod:"calico-apiserver-86f58ccb79-xws6w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali202e70d4845", MAC:"16:db:58:ee:fa:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:17.837890 containerd[1473]: 2024-10-08 19:44:17.832 [INFO][4837] 
k8s.go 500: Wrote updated endpoint to datastore ContainerID="48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace" Namespace="calico-apiserver" Pod="calico-apiserver-86f58ccb79-xws6w" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--xws6w-eth0" Oct 8 19:44:17.864361 containerd[1473]: time="2024-10-08T19:44:17.864184715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:44:17.864361 containerd[1473]: time="2024-10-08T19:44:17.864256477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:44:17.864361 containerd[1473]: time="2024-10-08T19:44:17.864288558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:44:17.864361 containerd[1473]: time="2024-10-08T19:44:17.864302678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:44:17.886146 systemd[1]: Started cri-containerd-48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace.scope - libcontainer container 48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace. 
Oct 8 19:44:17.928053 systemd-networkd[1366]: cali38a8ebccb0c: Link UP Oct 8 19:44:17.928270 systemd-networkd[1366]: cali38a8ebccb0c: Gained carrier Oct 8 19:44:17.954516 containerd[1473]: 2024-10-08 19:44:17.718 [INFO][4844] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--5qvmq-eth0 calico-apiserver-86f58ccb79- calico-apiserver 81a964c5-535b-42a7-a861-5eb046c191a4 844 0 2024-10-08 19:44:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:86f58ccb79 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975-2-2-0-004c89fa14 calico-apiserver-86f58ccb79-5qvmq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali38a8ebccb0c [] []}} ContainerID="4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2" Namespace="calico-apiserver" Pod="calico-apiserver-86f58ccb79-5qvmq" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--5qvmq-" Oct 8 19:44:17.954516 containerd[1473]: 2024-10-08 19:44:17.718 [INFO][4844] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2" Namespace="calico-apiserver" Pod="calico-apiserver-86f58ccb79-5qvmq" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--5qvmq-eth0" Oct 8 19:44:17.954516 containerd[1473]: 2024-10-08 19:44:17.753 [INFO][4863] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2" HandleID="k8s-pod-network.4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2" Workload="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--5qvmq-eth0" Oct 8 19:44:17.954516 containerd[1473]: 
2024-10-08 19:44:17.775 [INFO][4863] ipam_plugin.go 270: Auto assigning IP ContainerID="4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2" HandleID="k8s-pod-network.4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2" Workload="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--5qvmq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005986e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975-2-2-0-004c89fa14", "pod":"calico-apiserver-86f58ccb79-5qvmq", "timestamp":"2024-10-08 19:44:17.753404255 +0000 UTC"}, Hostname:"ci-3975-2-2-0-004c89fa14", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:44:17.954516 containerd[1473]: 2024-10-08 19:44:17.775 [INFO][4863] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:17.954516 containerd[1473]: 2024-10-08 19:44:17.807 [INFO][4863] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:44:17.954516 containerd[1473]: 2024-10-08 19:44:17.807 [INFO][4863] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-0-004c89fa14' Oct 8 19:44:17.954516 containerd[1473]: 2024-10-08 19:44:17.867 [INFO][4863] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2" host="ci-3975-2-2-0-004c89fa14" Oct 8 19:44:17.954516 containerd[1473]: 2024-10-08 19:44:17.883 [INFO][4863] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-0-004c89fa14" Oct 8 19:44:17.954516 containerd[1473]: 2024-10-08 19:44:17.891 [INFO][4863] ipam.go 489: Trying affinity for 192.168.23.128/26 host="ci-3975-2-2-0-004c89fa14" Oct 8 19:44:17.954516 containerd[1473]: 2024-10-08 19:44:17.894 [INFO][4863] ipam.go 155: Attempting to load block cidr=192.168.23.128/26 host="ci-3975-2-2-0-004c89fa14" Oct 8 19:44:17.954516 containerd[1473]: 2024-10-08 19:44:17.897 [INFO][4863] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.23.128/26 host="ci-3975-2-2-0-004c89fa14" Oct 8 19:44:17.954516 containerd[1473]: 2024-10-08 19:44:17.898 [INFO][4863] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.23.128/26 handle="k8s-pod-network.4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2" host="ci-3975-2-2-0-004c89fa14" Oct 8 19:44:17.954516 containerd[1473]: 2024-10-08 19:44:17.901 [INFO][4863] ipam.go 1685: Creating new handle: k8s-pod-network.4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2 Oct 8 19:44:17.954516 containerd[1473]: 2024-10-08 19:44:17.911 [INFO][4863] ipam.go 1203: Writing block in order to claim IPs block=192.168.23.128/26 handle="k8s-pod-network.4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2" host="ci-3975-2-2-0-004c89fa14" Oct 8 19:44:17.954516 containerd[1473]: 2024-10-08 19:44:17.919 [INFO][4863] ipam.go 1216: Successfully claimed IPs: [192.168.23.134/26] 
block=192.168.23.128/26 handle="k8s-pod-network.4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2" host="ci-3975-2-2-0-004c89fa14" Oct 8 19:44:17.954516 containerd[1473]: 2024-10-08 19:44:17.919 [INFO][4863] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.23.134/26] handle="k8s-pod-network.4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2" host="ci-3975-2-2-0-004c89fa14" Oct 8 19:44:17.954516 containerd[1473]: 2024-10-08 19:44:17.919 [INFO][4863] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:17.954516 containerd[1473]: 2024-10-08 19:44:17.919 [INFO][4863] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.23.134/26] IPv6=[] ContainerID="4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2" HandleID="k8s-pod-network.4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2" Workload="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--5qvmq-eth0" Oct 8 19:44:17.955174 containerd[1473]: 2024-10-08 19:44:17.922 [INFO][4844] k8s.go 386: Populated endpoint ContainerID="4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2" Namespace="calico-apiserver" Pod="calico-apiserver-86f58ccb79-5qvmq" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--5qvmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--5qvmq-eth0", GenerateName:"calico-apiserver-86f58ccb79-", Namespace:"calico-apiserver", SelfLink:"", UID:"81a964c5-535b-42a7-a861-5eb046c191a4", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f58ccb79", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-0-004c89fa14", ContainerID:"", Pod:"calico-apiserver-86f58ccb79-5qvmq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38a8ebccb0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:17.955174 containerd[1473]: 2024-10-08 19:44:17.924 [INFO][4844] k8s.go 387: Calico CNI using IPs: [192.168.23.134/32] ContainerID="4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2" Namespace="calico-apiserver" Pod="calico-apiserver-86f58ccb79-5qvmq" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--5qvmq-eth0" Oct 8 19:44:17.955174 containerd[1473]: 2024-10-08 19:44:17.924 [INFO][4844] dataplane_linux.go 68: Setting the host side veth name to cali38a8ebccb0c ContainerID="4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2" Namespace="calico-apiserver" Pod="calico-apiserver-86f58ccb79-5qvmq" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--5qvmq-eth0" Oct 8 19:44:17.955174 containerd[1473]: 2024-10-08 19:44:17.929 [INFO][4844] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2" Namespace="calico-apiserver" Pod="calico-apiserver-86f58ccb79-5qvmq" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--5qvmq-eth0" Oct 8 19:44:17.955174 containerd[1473]: 2024-10-08 
19:44:17.931 [INFO][4844] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2" Namespace="calico-apiserver" Pod="calico-apiserver-86f58ccb79-5qvmq" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--5qvmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--5qvmq-eth0", GenerateName:"calico-apiserver-86f58ccb79-", Namespace:"calico-apiserver", SelfLink:"", UID:"81a964c5-535b-42a7-a861-5eb046c191a4", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"86f58ccb79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-0-004c89fa14", ContainerID:"4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2", Pod:"calico-apiserver-86f58ccb79-5qvmq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38a8ebccb0c", MAC:"0a:4f:f2:37:a1:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:17.955174 containerd[1473]: 2024-10-08 19:44:17.947 [INFO][4844] 
k8s.go 500: Wrote updated endpoint to datastore ContainerID="4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2" Namespace="calico-apiserver" Pod="calico-apiserver-86f58ccb79-5qvmq" WorkloadEndpoint="ci--3975--2--2--0--004c89fa14-k8s-calico--apiserver--86f58ccb79--5qvmq-eth0" Oct 8 19:44:17.969904 containerd[1473]: time="2024-10-08T19:44:17.969169016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f58ccb79-xws6w,Uid:57d10145-a594-49bc-bfed-f97bbe86b467,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace\"" Oct 8 19:44:17.974531 containerd[1473]: time="2024-10-08T19:44:17.974448640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 19:44:17.993733 containerd[1473]: time="2024-10-08T19:44:17.993339675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:44:17.993733 containerd[1473]: time="2024-10-08T19:44:17.993409117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:44:17.993733 containerd[1473]: time="2024-10-08T19:44:17.993428157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:44:17.993733 containerd[1473]: time="2024-10-08T19:44:17.993456158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:44:18.027319 systemd[1]: Started cri-containerd-4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2.scope - libcontainer container 4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2. 
Oct 8 19:44:18.076168 containerd[1473]: time="2024-10-08T19:44:18.076078135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-86f58ccb79-5qvmq,Uid:81a964c5-535b-42a7-a861-5eb046c191a4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2\"" Oct 8 19:44:18.971164 systemd-networkd[1366]: cali202e70d4845: Gained IPv6LL Oct 8 19:44:19.227155 systemd-networkd[1366]: cali38a8ebccb0c: Gained IPv6LL Oct 8 19:44:21.902968 containerd[1473]: time="2024-10-08T19:44:21.902136526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:21.903759 containerd[1473]: time="2024-10-08T19:44:21.903703089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=37849884" Oct 8 19:44:21.905192 containerd[1473]: time="2024-10-08T19:44:21.905114288Z" level=info msg="ImageCreate event name:\"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:21.909382 containerd[1473]: time="2024-10-08T19:44:21.908934793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:21.911115 containerd[1473]: time="2024-10-08T19:44:21.911049011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 3.93653529s" Oct 8 19:44:21.911322 containerd[1473]: time="2024-10-08T19:44:21.911292018Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\"" Oct 8 19:44:21.939100 containerd[1473]: time="2024-10-08T19:44:21.939049781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 19:44:21.978670 containerd[1473]: time="2024-10-08T19:44:21.978512626Z" level=info msg="CreateContainer within sandbox \"48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 8 19:44:21.992928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2709776306.mount: Deactivated successfully. Oct 8 19:44:21.995027 containerd[1473]: time="2024-10-08T19:44:21.994975599Z" level=info msg="CreateContainer within sandbox \"48a97b0263a0725236b8f843a0d1a847f966c8bcb70659a48ad3f40c47548ace\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"474c273803909b061ee24f909f3004b0b9752c0df4896f0ac97a1491aa8dd536\"" Oct 8 19:44:21.999989 containerd[1473]: time="2024-10-08T19:44:21.999446322Z" level=info msg="StartContainer for \"474c273803909b061ee24f909f3004b0b9752c0df4896f0ac97a1491aa8dd536\"" Oct 8 19:44:22.051272 systemd[1]: Started cri-containerd-474c273803909b061ee24f909f3004b0b9752c0df4896f0ac97a1491aa8dd536.scope - libcontainer container 474c273803909b061ee24f909f3004b0b9752c0df4896f0ac97a1491aa8dd536. 
Oct 8 19:44:22.104088 containerd[1473]: time="2024-10-08T19:44:22.103927881Z" level=info msg="StartContainer for \"474c273803909b061ee24f909f3004b0b9752c0df4896f0ac97a1491aa8dd536\" returns successfully"
Oct 8 19:44:22.341817 containerd[1473]: time="2024-10-08T19:44:22.340927891Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:44:22.343258 containerd[1473]: time="2024-10-08T19:44:22.343218955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77"
Oct 8 19:44:22.344972 containerd[1473]: time="2024-10-08T19:44:22.344905761Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 405.805339ms"
Oct 8 19:44:22.345137 containerd[1473]: time="2024-10-08T19:44:22.345115127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\""
Oct 8 19:44:22.356533 containerd[1473]: time="2024-10-08T19:44:22.356481000Z" level=info msg="CreateContainer within sandbox \"4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Oct 8 19:44:22.374133 containerd[1473]: time="2024-10-08T19:44:22.372708967Z" level=info msg="CreateContainer within sandbox \"4c160812078fd2ea9e76d063ef887fd4bde8c006df8a657e88ba4135e186edf2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1a9e4d5b792410a52a51ee62a6205b311cb5e1c10672262c5b14ebfe1393d60b\""
Oct 8 19:44:22.376666 containerd[1473]: time="2024-10-08T19:44:22.376315267Z" level=info msg="StartContainer for \"1a9e4d5b792410a52a51ee62a6205b311cb5e1c10672262c5b14ebfe1393d60b\""
Oct 8 19:44:22.416192 systemd[1]: Started cri-containerd-1a9e4d5b792410a52a51ee62a6205b311cb5e1c10672262c5b14ebfe1393d60b.scope - libcontainer container 1a9e4d5b792410a52a51ee62a6205b311cb5e1c10672262c5b14ebfe1393d60b.
Oct 8 19:44:22.454254 containerd[1473]: time="2024-10-08T19:44:22.454167772Z" level=info msg="StartContainer for \"1a9e4d5b792410a52a51ee62a6205b311cb5e1c10672262c5b14ebfe1393d60b\" returns successfully"
Oct 8 19:44:22.990265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount765193650.mount: Deactivated successfully.
Oct 8 19:44:23.133306 kubelet[2660]: I1008 19:44:23.133229 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86f58ccb79-xws6w" podStartSLOduration=2.168369023 podStartE2EDuration="6.13320953s" podCreationTimestamp="2024-10-08 19:44:17 +0000 UTC" firstStartedPulling="2024-10-08 19:44:17.974154832 +0000 UTC m=+68.386085005" lastFinishedPulling="2024-10-08 19:44:21.938995379 +0000 UTC m=+72.350925512" observedRunningTime="2024-10-08 19:44:23.130898427 +0000 UTC m=+73.542828600" watchObservedRunningTime="2024-10-08 19:44:23.13320953 +0000 UTC m=+73.545139703"
Oct 8 19:44:24.130767 kubelet[2660]: I1008 19:44:24.130672 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-86f58ccb79-5qvmq" podStartSLOduration=2.8631160060000003 podStartE2EDuration="7.130645397s" podCreationTimestamp="2024-10-08 19:44:17 +0000 UTC" firstStartedPulling="2024-10-08 19:44:18.078570963 +0000 UTC m=+68.490501176" lastFinishedPulling="2024-10-08 19:44:22.346100394 +0000 UTC m=+72.758030567" observedRunningTime="2024-10-08 19:44:23.185833503 +0000 UTC m=+73.597763676" watchObservedRunningTime="2024-10-08 19:44:24.130645397 +0000 UTC m=+74.542575570"
Oct 8 19:44:44.241123 systemd[1]: run-containerd-runc-k8s.io-9edc680675d54cfdd51ed69b224f2b738bd4baa521a42412d39caa06e0cb5c62-runc.VUQva9.mount: Deactivated successfully.
Oct 8 19:48:00.953096 systemd[1]: Started sshd@7-188.245.170.239:22-139.178.89.65:48244.service - OpenSSH per-connection server daemon (139.178.89.65:48244).
Oct 8 19:48:01.951684 sshd[5639]: Accepted publickey for core from 139.178.89.65 port 48244 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:01.955474 sshd[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:01.964080 systemd-logind[1454]: New session 8 of user core.
Oct 8 19:48:01.968306 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 8 19:48:02.741750 sshd[5639]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:02.747489 systemd[1]: sshd@7-188.245.170.239:22-139.178.89.65:48244.service: Deactivated successfully.
Oct 8 19:48:02.751578 systemd[1]: session-8.scope: Deactivated successfully.
Oct 8 19:48:02.754323 systemd-logind[1454]: Session 8 logged out. Waiting for processes to exit.
Oct 8 19:48:02.755698 systemd-logind[1454]: Removed session 8.
Oct 8 19:48:07.909581 systemd[1]: Started sshd@8-188.245.170.239:22-139.178.89.65:48898.service - OpenSSH per-connection server daemon (139.178.89.65:48898).
Oct 8 19:48:08.901716 sshd[5653]: Accepted publickey for core from 139.178.89.65 port 48898 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:08.904144 sshd[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:08.911793 systemd-logind[1454]: New session 9 of user core.
Oct 8 19:48:08.917119 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 8 19:48:09.650288 sshd[5653]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:09.655713 systemd[1]: sshd@8-188.245.170.239:22-139.178.89.65:48898.service: Deactivated successfully.
Oct 8 19:48:09.658669 systemd[1]: session-9.scope: Deactivated successfully.
Oct 8 19:48:09.659867 systemd-logind[1454]: Session 9 logged out. Waiting for processes to exit.
Oct 8 19:48:09.661583 systemd-logind[1454]: Removed session 9.
Oct 8 19:48:14.822404 systemd[1]: Started sshd@9-188.245.170.239:22-139.178.89.65:48908.service - OpenSSH per-connection server daemon (139.178.89.65:48908).
Oct 8 19:48:15.810283 sshd[5674]: Accepted publickey for core from 139.178.89.65 port 48908 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:15.812429 sshd[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:15.820729 systemd-logind[1454]: New session 10 of user core.
Oct 8 19:48:15.828283 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 8 19:48:16.577294 sshd[5674]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:16.582167 systemd-logind[1454]: Session 10 logged out. Waiting for processes to exit.
Oct 8 19:48:16.583429 systemd[1]: sshd@9-188.245.170.239:22-139.178.89.65:48908.service: Deactivated successfully.
Oct 8 19:48:16.587296 systemd[1]: session-10.scope: Deactivated successfully.
Oct 8 19:48:16.590728 systemd-logind[1454]: Removed session 10.
Oct 8 19:48:16.758241 systemd[1]: Started sshd@10-188.245.170.239:22-139.178.89.65:54472.service - OpenSSH per-connection server daemon (139.178.89.65:54472).
Oct 8 19:48:17.734627 sshd[5708]: Accepted publickey for core from 139.178.89.65 port 54472 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:17.736604 sshd[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:17.742624 systemd-logind[1454]: New session 11 of user core.
Oct 8 19:48:17.750132 systemd[1]: Started session-11.scope - Session 11 of User core.
Oct 8 19:48:18.561787 sshd[5708]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:18.568378 systemd-logind[1454]: Session 11 logged out. Waiting for processes to exit.
Oct 8 19:48:18.569499 systemd[1]: sshd@10-188.245.170.239:22-139.178.89.65:54472.service: Deactivated successfully.
Oct 8 19:48:18.573587 systemd[1]: session-11.scope: Deactivated successfully.
Oct 8 19:48:18.575665 systemd-logind[1454]: Removed session 11.
Oct 8 19:48:18.739194 systemd[1]: Started sshd@11-188.245.170.239:22-139.178.89.65:54478.service - OpenSSH per-connection server daemon (139.178.89.65:54478).
Oct 8 19:48:19.729998 sshd[5721]: Accepted publickey for core from 139.178.89.65 port 54478 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:19.731845 sshd[5721]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:19.737885 systemd-logind[1454]: New session 12 of user core.
Oct 8 19:48:19.743173 systemd[1]: Started session-12.scope - Session 12 of User core.
Oct 8 19:48:20.490250 sshd[5721]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:20.495817 systemd[1]: sshd@11-188.245.170.239:22-139.178.89.65:54478.service: Deactivated successfully.
Oct 8 19:48:20.499300 systemd[1]: session-12.scope: Deactivated successfully.
Oct 8 19:48:20.503225 systemd-logind[1454]: Session 12 logged out. Waiting for processes to exit.
Oct 8 19:48:20.506294 systemd-logind[1454]: Removed session 12.
Oct 8 19:48:25.668360 systemd[1]: Started sshd@12-188.245.170.239:22-139.178.89.65:32846.service - OpenSSH per-connection server daemon (139.178.89.65:32846).
Oct 8 19:48:26.643665 sshd[5761]: Accepted publickey for core from 139.178.89.65 port 32846 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:26.646339 sshd[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:26.653949 systemd-logind[1454]: New session 13 of user core.
Oct 8 19:48:26.661144 systemd[1]: Started session-13.scope - Session 13 of User core.
Oct 8 19:48:27.404338 sshd[5761]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:27.409471 systemd[1]: sshd@12-188.245.170.239:22-139.178.89.65:32846.service: Deactivated successfully.
Oct 8 19:48:27.414687 systemd[1]: session-13.scope: Deactivated successfully.
Oct 8 19:48:27.416700 systemd-logind[1454]: Session 13 logged out. Waiting for processes to exit.
Oct 8 19:48:27.418655 systemd-logind[1454]: Removed session 13.
Oct 8 19:48:32.584352 systemd[1]: Started sshd@13-188.245.170.239:22-139.178.89.65:32862.service - OpenSSH per-connection server daemon (139.178.89.65:32862).
Oct 8 19:48:33.564803 sshd[5791]: Accepted publickey for core from 139.178.89.65 port 32862 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:33.567511 sshd[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:33.574146 systemd-logind[1454]: New session 14 of user core.
Oct 8 19:48:33.581278 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 8 19:48:34.318764 sshd[5791]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:34.323628 systemd[1]: sshd@13-188.245.170.239:22-139.178.89.65:32862.service: Deactivated successfully.
Oct 8 19:48:34.326528 systemd[1]: session-14.scope: Deactivated successfully.
Oct 8 19:48:34.327497 systemd-logind[1454]: Session 14 logged out. Waiting for processes to exit.
Oct 8 19:48:34.328837 systemd-logind[1454]: Removed session 14.
Oct 8 19:48:39.499535 systemd[1]: Started sshd@14-188.245.170.239:22-139.178.89.65:37918.service - OpenSSH per-connection server daemon (139.178.89.65:37918).
Oct 8 19:48:40.489640 sshd[5803]: Accepted publickey for core from 139.178.89.65 port 37918 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:40.491988 sshd[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:40.498100 systemd-logind[1454]: New session 15 of user core.
Oct 8 19:48:40.503124 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 8 19:48:41.252858 sshd[5803]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:41.257071 systemd-logind[1454]: Session 15 logged out. Waiting for processes to exit.
Oct 8 19:48:41.258261 systemd[1]: sshd@14-188.245.170.239:22-139.178.89.65:37918.service: Deactivated successfully.
Oct 8 19:48:41.260736 systemd[1]: session-15.scope: Deactivated successfully.
Oct 8 19:48:41.262856 systemd-logind[1454]: Removed session 15.
Oct 8 19:48:41.432604 systemd[1]: Started sshd@15-188.245.170.239:22-139.178.89.65:37928.service - OpenSSH per-connection server daemon (139.178.89.65:37928).
Oct 8 19:48:42.433112 sshd[5816]: Accepted publickey for core from 139.178.89.65 port 37928 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:42.435018 sshd[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:42.441028 systemd-logind[1454]: New session 16 of user core.
Oct 8 19:48:42.453159 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 8 19:48:43.405370 sshd[5816]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:43.409483 systemd[1]: sshd@15-188.245.170.239:22-139.178.89.65:37928.service: Deactivated successfully.
Oct 8 19:48:43.413742 systemd[1]: session-16.scope: Deactivated successfully.
Oct 8 19:48:43.416227 systemd-logind[1454]: Session 16 logged out. Waiting for processes to exit.
Oct 8 19:48:43.417387 systemd-logind[1454]: Removed session 16.
Oct 8 19:48:43.580342 systemd[1]: Started sshd@16-188.245.170.239:22-139.178.89.65:37940.service - OpenSSH per-connection server daemon (139.178.89.65:37940).
Oct 8 19:48:44.565603 sshd[5833]: Accepted publickey for core from 139.178.89.65 port 37940 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:44.568824 sshd[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:44.576496 systemd-logind[1454]: New session 17 of user core.
Oct 8 19:48:44.587275 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 8 19:48:47.296882 sshd[5833]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:47.301671 systemd[1]: sshd@16-188.245.170.239:22-139.178.89.65:37940.service: Deactivated successfully.
Oct 8 19:48:47.304394 systemd[1]: session-17.scope: Deactivated successfully.
Oct 8 19:48:47.307540 systemd-logind[1454]: Session 17 logged out. Waiting for processes to exit.
Oct 8 19:48:47.308725 systemd-logind[1454]: Removed session 17.
Oct 8 19:48:47.471261 systemd[1]: Started sshd@17-188.245.170.239:22-139.178.89.65:33748.service - OpenSSH per-connection server daemon (139.178.89.65:33748).
Oct 8 19:48:48.472113 sshd[5894]: Accepted publickey for core from 139.178.89.65 port 33748 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:48.474200 sshd[5894]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:48.480638 systemd-logind[1454]: New session 18 of user core.
Oct 8 19:48:48.494276 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 8 19:48:49.379759 sshd[5894]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:49.387219 systemd[1]: sshd@17-188.245.170.239:22-139.178.89.65:33748.service: Deactivated successfully.
Oct 8 19:48:49.391380 systemd[1]: session-18.scope: Deactivated successfully.
Oct 8 19:48:49.392879 systemd-logind[1454]: Session 18 logged out. Waiting for processes to exit.
Oct 8 19:48:49.395291 systemd-logind[1454]: Removed session 18.
Oct 8 19:48:49.556344 systemd[1]: Started sshd@18-188.245.170.239:22-139.178.89.65:33750.service - OpenSSH per-connection server daemon (139.178.89.65:33750).
Oct 8 19:48:50.570814 sshd[5906]: Accepted publickey for core from 139.178.89.65 port 33750 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:50.571587 sshd[5906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:50.579074 systemd-logind[1454]: New session 19 of user core.
Oct 8 19:48:50.584229 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 8 19:48:50.988763 systemd[1]: run-containerd-runc-k8s.io-9edc680675d54cfdd51ed69b224f2b738bd4baa521a42412d39caa06e0cb5c62-runc.ZjrcYF.mount: Deactivated successfully.
Oct 8 19:48:51.346334 sshd[5906]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:51.351106 systemd[1]: sshd@18-188.245.170.239:22-139.178.89.65:33750.service: Deactivated successfully.
Oct 8 19:48:51.356409 systemd[1]: session-19.scope: Deactivated successfully.
Oct 8 19:48:51.360537 systemd-logind[1454]: Session 19 logged out. Waiting for processes to exit.
Oct 8 19:48:51.363401 systemd-logind[1454]: Removed session 19.
Oct 8 19:48:56.524272 systemd[1]: Started sshd@19-188.245.170.239:22-139.178.89.65:41746.service - OpenSSH per-connection server daemon (139.178.89.65:41746).
Oct 8 19:48:57.517067 sshd[5947]: Accepted publickey for core from 139.178.89.65 port 41746 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:57.519309 sshd[5947]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:57.525316 systemd-logind[1454]: New session 20 of user core.
Oct 8 19:48:57.534161 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 8 19:48:58.275456 sshd[5947]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:58.280196 systemd[1]: sshd@19-188.245.170.239:22-139.178.89.65:41746.service: Deactivated successfully.
Oct 8 19:48:58.284267 systemd[1]: session-20.scope: Deactivated successfully.
Oct 8 19:48:58.285446 systemd-logind[1454]: Session 20 logged out. Waiting for processes to exit.
Oct 8 19:48:58.286741 systemd-logind[1454]: Removed session 20.
Oct 8 19:49:03.459246 systemd[1]: Started sshd@20-188.245.170.239:22-139.178.89.65:41760.service - OpenSSH per-connection server daemon (139.178.89.65:41760).
Oct 8 19:49:04.476417 sshd[5960]: Accepted publickey for core from 139.178.89.65 port 41760 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:49:04.477323 sshd[5960]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:49:04.484165 systemd-logind[1454]: New session 21 of user core.
Oct 8 19:49:04.490109 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 8 19:49:05.259333 sshd[5960]: pam_unix(sshd:session): session closed for user core
Oct 8 19:49:05.265749 systemd[1]: sshd@20-188.245.170.239:22-139.178.89.65:41760.service: Deactivated successfully.
Oct 8 19:49:05.269392 systemd[1]: session-21.scope: Deactivated successfully.
Oct 8 19:49:05.270518 systemd-logind[1454]: Session 21 logged out. Waiting for processes to exit.
Oct 8 19:49:05.271650 systemd-logind[1454]: Removed session 21.
Oct 8 19:49:20.851091 systemd[1]: cri-containerd-b6183d9571bbed224e68ab7228b599a46c3db63fcb3020d8a6b9bd6853232d3c.scope: Deactivated successfully.
Oct 8 19:49:20.851589 systemd[1]: cri-containerd-b6183d9571bbed224e68ab7228b599a46c3db63fcb3020d8a6b9bd6853232d3c.scope: Consumed 7.835s CPU time.
Oct 8 19:49:20.881268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6183d9571bbed224e68ab7228b599a46c3db63fcb3020d8a6b9bd6853232d3c-rootfs.mount: Deactivated successfully.
Oct 8 19:49:20.882561 containerd[1473]: time="2024-10-08T19:49:20.881799107Z" level=info msg="shim disconnected" id=b6183d9571bbed224e68ab7228b599a46c3db63fcb3020d8a6b9bd6853232d3c namespace=k8s.io
Oct 8 19:49:20.882561 containerd[1473]: time="2024-10-08T19:49:20.882223477Z" level=warning msg="cleaning up after shim disconnected" id=b6183d9571bbed224e68ab7228b599a46c3db63fcb3020d8a6b9bd6853232d3c namespace=k8s.io
Oct 8 19:49:20.882561 containerd[1473]: time="2024-10-08T19:49:20.882248438Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:49:21.311814 kubelet[2660]: E1008 19:49:21.311388 2660 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:32860->10.0.0.2:2379: read: connection timed out"
Oct 8 19:49:21.506985 systemd[1]: cri-containerd-5b0e88c62db15ea05e3efe8212ba8a92cd2bde242889ecf04cc55dd82812d614.scope: Deactivated successfully.
Oct 8 19:49:21.507319 systemd[1]: cri-containerd-5b0e88c62db15ea05e3efe8212ba8a92cd2bde242889ecf04cc55dd82812d614.scope: Consumed 5.360s CPU time, 17.7M memory peak, 0B memory swap peak.
Oct 8 19:49:21.534241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b0e88c62db15ea05e3efe8212ba8a92cd2bde242889ecf04cc55dd82812d614-rootfs.mount: Deactivated successfully.
Oct 8 19:49:21.536546 containerd[1473]: time="2024-10-08T19:49:21.536436795Z" level=info msg="shim disconnected" id=5b0e88c62db15ea05e3efe8212ba8a92cd2bde242889ecf04cc55dd82812d614 namespace=k8s.io
Oct 8 19:49:21.536546 containerd[1473]: time="2024-10-08T19:49:21.536525637Z" level=warning msg="cleaning up after shim disconnected" id=5b0e88c62db15ea05e3efe8212ba8a92cd2bde242889ecf04cc55dd82812d614 namespace=k8s.io
Oct 8 19:49:21.536644 containerd[1473]: time="2024-10-08T19:49:21.536547357Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:49:21.889702 kubelet[2660]: I1008 19:49:21.889573 2660 scope.go:117] "RemoveContainer" containerID="5b0e88c62db15ea05e3efe8212ba8a92cd2bde242889ecf04cc55dd82812d614"
Oct 8 19:49:21.894695 kubelet[2660]: I1008 19:49:21.894600 2660 scope.go:117] "RemoveContainer" containerID="b6183d9571bbed224e68ab7228b599a46c3db63fcb3020d8a6b9bd6853232d3c"
Oct 8 19:49:21.895780 containerd[1473]: time="2024-10-08T19:49:21.895594458Z" level=info msg="CreateContainer within sandbox \"1abbd0e511f102a2195285741cd564758172bac5234ef0ab91786be364bb2625\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Oct 8 19:49:21.911321 containerd[1473]: time="2024-10-08T19:49:21.911255561Z" level=info msg="CreateContainer within sandbox \"fa30be7bb6c4189504bdd648064c91fa31bb61de085a6a9954833f331ccf9773\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Oct 8 19:49:21.917245 containerd[1473]: time="2024-10-08T19:49:21.917172945Z" level=info msg="CreateContainer within sandbox \"1abbd0e511f102a2195285741cd564758172bac5234ef0ab91786be364bb2625\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"067f897edd6c2320ea4f7f1bf74ca77af4ed25fb734b587470f07814f7778b12\""
Oct 8 19:49:21.919885 containerd[1473]: time="2024-10-08T19:49:21.917985245Z" level=info msg="StartContainer for \"067f897edd6c2320ea4f7f1bf74ca77af4ed25fb734b587470f07814f7778b12\""
Oct 8 19:49:21.933623 containerd[1473]: time="2024-10-08T19:49:21.933549266Z" level=info msg="CreateContainer within sandbox \"fa30be7bb6c4189504bdd648064c91fa31bb61de085a6a9954833f331ccf9773\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"249c2efa44bc8e998621f7941c131b85e39ffa98a580450766588416ef49b4ca\""
Oct 8 19:49:21.936049 containerd[1473]: time="2024-10-08T19:49:21.936008406Z" level=info msg="StartContainer for \"249c2efa44bc8e998621f7941c131b85e39ffa98a580450766588416ef49b4ca\""
Oct 8 19:49:21.960115 systemd[1]: Started cri-containerd-067f897edd6c2320ea4f7f1bf74ca77af4ed25fb734b587470f07814f7778b12.scope - libcontainer container 067f897edd6c2320ea4f7f1bf74ca77af4ed25fb734b587470f07814f7778b12.
Oct 8 19:49:21.983219 systemd[1]: Started cri-containerd-249c2efa44bc8e998621f7941c131b85e39ffa98a580450766588416ef49b4ca.scope - libcontainer container 249c2efa44bc8e998621f7941c131b85e39ffa98a580450766588416ef49b4ca.
Oct 8 19:49:22.025944 containerd[1473]: time="2024-10-08T19:49:22.025863083Z" level=info msg="StartContainer for \"067f897edd6c2320ea4f7f1bf74ca77af4ed25fb734b587470f07814f7778b12\" returns successfully"
Oct 8 19:49:22.029027 containerd[1473]: time="2024-10-08T19:49:22.028835596Z" level=info msg="StartContainer for \"249c2efa44bc8e998621f7941c131b85e39ffa98a580450766588416ef49b4ca\" returns successfully"
Oct 8 19:49:24.841860 kubelet[2660]: E1008 19:49:24.829782 2660 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:60884->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-3975-2-2-0-004c89fa14.17fc920ffe11d352 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-3975-2-2-0-004c89fa14,UID:6ed01128afb4dca86862a2e64f324ade,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-3975-2-2-0-004c89fa14,},FirstTimestamp:2024-10-08 19:49:14.38817365 +0000 UTC m=+364.800103823,LastTimestamp:2024-10-08 19:49:14.38817365 +0000 UTC m=+364.800103823,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975-2-2-0-004c89fa14,}"