Oct 9 01:01:30.959791 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 9 01:01:30.959814 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Oct 8 23:34:40 -00 2024 Oct 9 01:01:30.959824 kernel: KASLR enabled Oct 9 01:01:30.959830 kernel: efi: EFI v2.7 by EDK II Oct 9 01:01:30.959835 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x133d4d698 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x13232ed18 Oct 9 01:01:30.959841 kernel: random: crng init done Oct 9 01:01:30.959848 kernel: secureboot: Secure boot disabled Oct 9 01:01:30.959854 kernel: ACPI: Early table checksum verification disabled Oct 9 01:01:30.959859 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS ) Oct 9 01:01:30.959865 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Oct 9 01:01:30.959873 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:01:30.959879 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:01:30.959884 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:01:30.959890 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:01:30.959898 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:01:30.959905 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:01:30.959912 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:01:30.959918 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:01:30.962003 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:01:30.962018 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013) Oct 9 01:01:30.962025 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Oct 9 01:01:30.962031 kernel: NUMA: Failed to initialise from firmware Oct 9 01:01:30.962038 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Oct 9 01:01:30.962044 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff] Oct 9 01:01:30.962051 kernel: Zone ranges: Oct 9 01:01:30.962057 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Oct 9 01:01:30.962069 kernel: DMA32 empty Oct 9 01:01:30.962075 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Oct 9 01:01:30.962081 kernel: Movable zone start for each node Oct 9 01:01:30.962087 kernel: Early memory node ranges Oct 9 01:01:30.962094 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff] Oct 9 01:01:30.962100 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff] Oct 9 01:01:30.962106 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff] Oct 9 01:01:30.962113 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff] Oct 9 01:01:30.962119 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff] Oct 9 01:01:30.962125 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Oct 9 01:01:30.962131 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Oct 9 01:01:30.962139 kernel: psci: probing for conduit method from ACPI. Oct 9 01:01:30.962146 kernel: psci: PSCIv1.1 detected in firmware. 
Oct 9 01:01:30.962152 kernel: psci: Using standard PSCI v0.2 function IDs Oct 9 01:01:30.962161 kernel: psci: Trusted OS migration not required Oct 9 01:01:30.962168 kernel: psci: SMC Calling Convention v1.1 Oct 9 01:01:30.962175 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 9 01:01:30.962183 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Oct 9 01:01:30.962190 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Oct 9 01:01:30.962196 kernel: pcpu-alloc: [0] 0 [0] 1 Oct 9 01:01:30.962203 kernel: Detected PIPT I-cache on CPU0 Oct 9 01:01:30.962210 kernel: CPU features: detected: GIC system register CPU interface Oct 9 01:01:30.962217 kernel: CPU features: detected: Hardware dirty bit management Oct 9 01:01:30.962223 kernel: CPU features: detected: Spectre-v4 Oct 9 01:01:30.962230 kernel: CPU features: detected: Spectre-BHB Oct 9 01:01:30.962236 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 9 01:01:30.962243 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 9 01:01:30.962250 kernel: CPU features: detected: ARM erratum 1418040 Oct 9 01:01:30.962258 kernel: CPU features: detected: SSBS not fully self-synchronizing Oct 9 01:01:30.962265 kernel: alternatives: applying boot alternatives Oct 9 01:01:30.962273 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=d2d67b5440410ae2d0aa86eba97891969be0a7a421fa55f13442706ef7ed2a5e Oct 9 01:01:30.962281 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 9 01:01:30.962287 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 9 01:01:30.962294 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 9 01:01:30.962301 kernel: Fallback order for Node 0: 0 Oct 9 01:01:30.962307 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Oct 9 01:01:30.962314 kernel: Policy zone: Normal Oct 9 01:01:30.962321 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 9 01:01:30.962327 kernel: software IO TLB: area num 2. Oct 9 01:01:30.962335 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Oct 9 01:01:30.962342 kernel: Memory: 3881464K/4096000K available (10240K kernel code, 2184K rwdata, 8092K rodata, 39552K init, 897K bss, 214536K reserved, 0K cma-reserved) Oct 9 01:01:30.962349 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 9 01:01:30.962356 kernel: trace event string verifier disabled Oct 9 01:01:30.962362 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 9 01:01:30.962370 kernel: rcu: RCU event tracing is enabled. Oct 9 01:01:30.962376 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 9 01:01:30.962383 kernel: Trampoline variant of Tasks RCU enabled. Oct 9 01:01:30.962390 kernel: Tracing variant of Tasks RCU enabled. Oct 9 01:01:30.962396 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 9 01:01:30.962403 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 9 01:01:30.962411 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 9 01:01:30.962418 kernel: GICv3: 256 SPIs implemented Oct 9 01:01:30.962424 kernel: GICv3: 0 Extended SPIs implemented Oct 9 01:01:30.962431 kernel: Root IRQ handler: gic_handle_irq Oct 9 01:01:30.962437 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Oct 9 01:01:30.962444 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 9 01:01:30.962451 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 9 01:01:30.962457 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Oct 9 01:01:30.962464 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Oct 9 01:01:30.962471 kernel: GICv3: using LPI property table @0x00000001000e0000 Oct 9 01:01:30.962477 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Oct 9 01:01:30.962484 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 9 01:01:30.962492 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 9 01:01:30.962499 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 9 01:01:30.962506 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 9 01:01:30.962512 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 9 01:01:30.962519 kernel: Console: colour dummy device 80x25 Oct 9 01:01:30.962526 kernel: ACPI: Core revision 20230628 Oct 9 01:01:30.962533 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 9 01:01:30.962540 kernel: pid_max: default: 32768 minimum: 301 Oct 9 01:01:30.962547 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 9 01:01:30.962554 kernel: landlock: Up and running. Oct 9 01:01:30.962562 kernel: SELinux: Initializing. Oct 9 01:01:30.962569 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 9 01:01:30.962575 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 9 01:01:30.962582 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 9 01:01:30.962589 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 9 01:01:30.962596 kernel: rcu: Hierarchical SRCU implementation. Oct 9 01:01:30.962603 kernel: rcu: Max phase no-delay instances is 400. Oct 9 01:01:30.962610 kernel: Platform MSI: ITS@0x8080000 domain created Oct 9 01:01:30.962616 kernel: PCI/MSI: ITS@0x8080000 domain created Oct 9 01:01:30.962624 kernel: Remapping and enabling EFI services. Oct 9 01:01:30.962631 kernel: smp: Bringing up secondary CPUs ... Oct 9 01:01:30.962638 kernel: Detected PIPT I-cache on CPU1 Oct 9 01:01:30.962645 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 9 01:01:30.962652 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Oct 9 01:01:30.962659 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 9 01:01:30.962666 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 9 01:01:30.962673 kernel: smp: Brought up 1 node, 2 CPUs Oct 9 01:01:30.962680 kernel: SMP: Total of 2 processors activated. 
Oct 9 01:01:30.962688 kernel: CPU features: detected: 32-bit EL0 Support Oct 9 01:01:30.962695 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 9 01:01:30.962707 kernel: CPU features: detected: Common not Private translations Oct 9 01:01:30.962716 kernel: CPU features: detected: CRC32 instructions Oct 9 01:01:30.962723 kernel: CPU features: detected: Enhanced Virtualization Traps Oct 9 01:01:30.962730 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 9 01:01:30.962738 kernel: CPU features: detected: LSE atomic instructions Oct 9 01:01:30.962745 kernel: CPU features: detected: Privileged Access Never Oct 9 01:01:30.962752 kernel: CPU features: detected: RAS Extension Support Oct 9 01:01:30.962761 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 9 01:01:30.962768 kernel: CPU: All CPU(s) started at EL1 Oct 9 01:01:30.962775 kernel: alternatives: applying system-wide alternatives Oct 9 01:01:30.962782 kernel: devtmpfs: initialized Oct 9 01:01:30.962790 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 9 01:01:30.962797 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 9 01:01:30.962804 kernel: pinctrl core: initialized pinctrl subsystem Oct 9 01:01:30.962811 kernel: SMBIOS 3.0.0 present. Oct 9 01:01:30.962820 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Oct 9 01:01:30.962827 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 9 01:01:30.962834 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 9 01:01:30.962842 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 9 01:01:30.962849 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 9 01:01:30.962857 kernel: audit: initializing netlink subsys (disabled) Oct 9 01:01:30.962864 kernel: audit: type=2000 audit(0.013:1): state=initialized audit_enabled=0 res=1 Oct 9 01:01:30.962871 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 9 01:01:30.962878 kernel: cpuidle: using governor menu Oct 9 01:01:30.962887 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Oct 9 01:01:30.962894 kernel: ASID allocator initialised with 32768 entries Oct 9 01:01:30.962901 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 9 01:01:30.962909 kernel: Serial: AMBA PL011 UART driver Oct 9 01:01:30.962916 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Oct 9 01:01:30.962934 kernel: Modules: 0 pages in range for non-PLT usage Oct 9 01:01:30.962941 kernel: Modules: 508992 pages in range for PLT usage Oct 9 01:01:30.962948 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 9 01:01:30.962956 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Oct 9 01:01:30.962965 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Oct 9 01:01:30.962972 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Oct 9 01:01:30.962979 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 9 01:01:30.962987 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Oct 9 01:01:30.962994 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Oct 9 01:01:30.963001 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Oct 9 01:01:30.963008 kernel: ACPI: Added _OSI(Module Device) Oct 9 01:01:30.963015 kernel: ACPI: Added _OSI(Processor Device) Oct 9 01:01:30.963023 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 9 01:01:30.963032 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 9 01:01:30.963039 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 9 01:01:30.963046 kernel: ACPI: Interpreter enabled Oct 9 01:01:30.963054 kernel: ACPI: Using GIC for interrupt routing Oct 9 01:01:30.963061 kernel: ACPI: MCFG table detected, 1 entries Oct 9 01:01:30.963068 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 9 01:01:30.963075 kernel: printk: console [ttyAMA0] enabled Oct 9 01:01:30.963082 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 9 01:01:30.963256 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 9 01:01:30.963333 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 9 01:01:30.963397 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 9 01:01:30.963458 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 9 01:01:30.963520 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 9 01:01:30.963529 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 9 01:01:30.963537 kernel: PCI host bridge to bus 0000:00 Oct 9 01:01:30.963606 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 9 01:01:30.963666 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 9 01:01:30.963723 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 9 01:01:30.963780 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 9 01:01:30.963859 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Oct 9 01:01:30.966023 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Oct 9 01:01:30.966127 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Oct 9 01:01:30.966201 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Oct 9 01:01:30.966278 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Oct 9 01:01:30.966343 kernel: pci 0000:00:02.0: reg 0x10: [mem 
0x11288000-0x11288fff] Oct 9 01:01:30.966416 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Oct 9 01:01:30.966481 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Oct 9 01:01:30.966552 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Oct 9 01:01:30.966621 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Oct 9 01:01:30.966691 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Oct 9 01:01:30.966755 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Oct 9 01:01:30.966826 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Oct 9 01:01:30.966891 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Oct 9 01:01:30.968040 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Oct 9 01:01:30.968125 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Oct 9 01:01:30.968199 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Oct 9 01:01:30.968263 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Oct 9 01:01:30.968341 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Oct 9 01:01:30.968404 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Oct 9 01:01:30.968473 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Oct 9 01:01:30.968540 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Oct 9 01:01:30.968610 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Oct 9 01:01:30.968674 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207] Oct 9 01:01:30.968749 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Oct 9 01:01:30.968816 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Oct 9 01:01:30.968883 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Oct 9 01:01:30.970733 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Oct 9 01:01:30.970973 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Oct 9 01:01:30.971054 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Oct 9 01:01:30.971130 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Oct 9 01:01:30.971215 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Oct 9 01:01:30.971285 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Oct 9 01:01:30.971363 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Oct 9 01:01:30.971436 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Oct 9 01:01:30.971511 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Oct 9 01:01:30.971578 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Oct 9 01:01:30.971663 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Oct 9 01:01:30.971731 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Oct 9 01:01:30.971798 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Oct 9 01:01:30.971878 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Oct 9 01:01:30.971964 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Oct 9 01:01:30.972032 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Oct 9 01:01:30.973016 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Oct 9 01:01:30.973123 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Oct 9 01:01:30.973191 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to 
[bus 01] add_size 100000 add_align 100000 Oct 9 01:01:30.973256 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Oct 9 01:01:30.973331 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Oct 9 01:01:30.973395 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Oct 9 01:01:30.973459 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Oct 9 01:01:30.973526 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Oct 9 01:01:30.973591 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Oct 9 01:01:30.973654 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Oct 9 01:01:30.973722 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Oct 9 01:01:30.973789 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Oct 9 01:01:30.973852 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Oct 9 01:01:30.973921 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Oct 9 01:01:30.975144 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Oct 9 01:01:30.975237 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000 Oct 9 01:01:30.975313 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Oct 9 01:01:30.975380 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Oct 9 01:01:30.975443 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Oct 9 01:01:30.975516 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 9 01:01:30.977129 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Oct 9 01:01:30.977211 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Oct 9 01:01:30.977281 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 9 01:01:30.977347 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Oct 9 01:01:30.977411 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Oct 9 01:01:30.977481 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 9 01:01:30.977550 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Oct 9 01:01:30.977616 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Oct 9 01:01:30.977692 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Oct 9 01:01:30.977759 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Oct 9 01:01:30.977827 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 
0x10200000-0x103fffff] Oct 9 01:01:30.977890 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Oct 9 01:01:30.978519 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Oct 9 01:01:30.978605 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Oct 9 01:01:30.978672 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Oct 9 01:01:30.978736 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Oct 9 01:01:30.978801 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Oct 9 01:01:30.978869 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Oct 9 01:01:30.979002 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Oct 9 01:01:30.979072 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Oct 9 01:01:30.979140 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Oct 9 01:01:30.979303 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Oct 9 01:01:30.979375 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Oct 9 01:01:30.979440 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Oct 9 01:01:30.979509 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Oct 9 01:01:30.979577 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Oct 9 01:01:30.979650 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Oct 9 01:01:30.979724 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Oct 9 01:01:30.979793 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Oct 9 01:01:30.979859 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Oct 9 01:01:30.979985 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Oct 9 01:01:30.980060 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Oct 9 01:01:30.980125 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Oct 9 01:01:30.980231 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Oct 9 01:01:30.980301 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Oct 9 01:01:30.980371 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Oct 9 01:01:30.980435 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Oct 9 01:01:30.980498 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Oct 9 01:01:30.980562 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Oct 9 01:01:30.980625 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Oct 9 01:01:30.980688 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Oct 9 01:01:30.980750 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Oct 9 01:01:30.980813 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Oct 9 01:01:30.980882 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Oct 9 01:01:30.981029 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Oct 9 01:01:30.981098 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Oct 9 01:01:30.981167 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Oct 9 01:01:30.981239 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Oct 9 01:01:30.981305 kernel: pci 0000:01:00.0: BAR 
4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Oct 9 01:01:30.981374 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Oct 9 01:01:30.981443 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Oct 9 01:01:30.981509 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Oct 9 01:01:30.981571 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Oct 9 01:01:30.981641 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Oct 9 01:01:30.981711 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Oct 9 01:01:30.981786 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Oct 9 01:01:30.981852 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Oct 9 01:01:30.981914 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Oct 9 01:01:30.982005 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Oct 9 01:01:30.982076 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Oct 9 01:01:30.982142 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Oct 9 01:01:30.982207 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Oct 9 01:01:30.982270 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Oct 9 01:01:30.982337 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Oct 9 01:01:30.982399 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Oct 9 01:01:30.982468 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Oct 9 01:01:30.982531 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Oct 9 01:01:30.982594 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Oct 9 01:01:30.982657 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Oct 9 01:01:30.982719 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Oct 9 01:01:30.982791 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Oct 9 01:01:30.982860 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Oct 9 01:01:30.982963 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Oct 9 01:01:30.983036 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Oct 9 01:01:30.983099 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Oct 9 01:01:30.983169 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Oct 9 01:01:30.983253 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Oct 9 01:01:30.983317 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Oct 9 01:01:30.983379 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Oct 9 01:01:30.983445 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Oct 9 01:01:30.983508 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Oct 9 01:01:30.983577 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Oct 9 01:01:30.983642 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Oct 9 01:01:30.983708 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Oct 9 01:01:30.983771 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Oct 9 01:01:30.983834 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Oct 9 01:01:30.983899 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Oct 9 01:01:30.983994 kernel: pci 0000:00:02.6: bridge window [mem 
0x8000c00000-0x8000dfffff 64bit pref] Oct 9 01:01:30.984062 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Oct 9 01:01:30.984126 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Oct 9 01:01:30.984190 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Oct 9 01:01:30.984254 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Oct 9 01:01:30.984318 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Oct 9 01:01:30.984381 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Oct 9 01:01:30.984444 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Oct 9 01:01:30.984512 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Oct 9 01:01:30.984580 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 9 01:01:30.984639 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 9 01:01:30.984698 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 9 01:01:30.984766 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Oct 9 01:01:30.984828 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Oct 9 01:01:30.984888 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Oct 9 01:01:30.985020 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Oct 9 01:01:30.985084 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Oct 9 01:01:30.985141 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Oct 9 01:01:30.985207 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Oct 9 01:01:30.985265 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Oct 9 01:01:30.985322 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Oct 9 01:01:30.985390 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Oct 9 01:01:30.985448 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Oct 9 01:01:30.985518 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Oct 9 01:01:30.985582 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Oct 9 01:01:30.985640 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Oct 9 01:01:30.985698 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Oct 9 01:01:30.985763 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Oct 9 01:01:30.985824 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Oct 9 01:01:30.985899 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Oct 9 01:01:30.986052 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Oct 9 01:01:30.986174 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Oct 9 01:01:30.986240 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Oct 9 01:01:30.986315 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Oct 9 01:01:30.986406 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Oct 9 01:01:30.986469 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Oct 9 01:01:30.986533 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Oct 9 01:01:30.986593 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Oct 9 01:01:30.986651 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Oct 9 01:01:30.986665 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 9 01:01:30.986673 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 9 
01:01:30.986681 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 9 01:01:30.986689 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 9 01:01:30.986696 kernel: iommu: Default domain type: Translated Oct 9 01:01:30.986704 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 9 01:01:30.986711 kernel: efivars: Registered efivars operations Oct 9 01:01:30.986719 kernel: vgaarb: loaded Oct 9 01:01:30.986728 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 9 01:01:30.986736 kernel: VFS: Disk quotas dquot_6.6.0 Oct 9 01:01:30.986744 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 9 01:01:30.986752 kernel: pnp: PnP ACPI init Oct 9 01:01:30.986823 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 9 01:01:30.986835 kernel: pnp: PnP ACPI: found 1 devices Oct 9 01:01:30.986842 kernel: NET: Registered PF_INET protocol family Oct 9 01:01:30.986850 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 9 01:01:30.986858 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 9 01:01:30.986867 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 9 01:01:30.986875 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 9 01:01:30.986883 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 9 01:01:30.986891 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 9 01:01:30.986898 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 9 01:01:30.986906 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 9 01:01:30.986913 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 9 01:01:30.987119 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Oct 9 01:01:30.987139 kernel: PCI: CLS 0 bytes, default 64 Oct 9 01:01:30.987147 kernel: kvm [1]: HYP mode not available Oct 9 01:01:30.987155 kernel: Initialise system trusted keyrings Oct 9 01:01:30.987163 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 9 01:01:30.987170 kernel: Key type asymmetric registered Oct 9 01:01:30.987280 kernel: Asymmetric key parser 'x509' registered Oct 9 01:01:30.987291 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 9 01:01:30.987299 kernel: io scheduler mq-deadline registered Oct 9 01:01:30.987307 kernel: io scheduler kyber registered Oct 9 01:01:30.987315 kernel: io scheduler bfq registered Oct 9 01:01:30.987328 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Oct 9 01:01:30.987426 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Oct 9 01:01:30.987495 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Oct 9 01:01:30.987559 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 9 01:01:30.987625 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Oct 9 01:01:30.987688 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Oct 9 01:01:30.987756 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 9 01:01:30.987821 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Oct 9 01:01:30.987886 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Oct 9 01:01:30.990041 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ 
PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 9 01:01:30.990136 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Oct 9 01:01:30.990213 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Oct 9 01:01:30.990282 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 9 01:01:30.990351 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Oct 9 01:01:30.990415 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Oct 9 01:01:30.990480 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 9 01:01:30.990547 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Oct 9 01:01:30.990612 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Oct 9 01:01:30.990679 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 9 01:01:30.990751 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Oct 9 01:01:30.990820 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Oct 9 01:01:30.990885 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 9 01:01:30.991012 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Oct 9 01:01:30.991084 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Oct 9 01:01:30.991153 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 9 01:01:30.991164 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Oct 9 01:01:30.991247 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Oct 9 01:01:30.991313 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Oct 9 01:01:30.991377 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 9 01:01:30.991388 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 9 01:01:30.991396 kernel: ACPI: button: Power Button [PWRB] Oct 9 01:01:30.991407 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 9 01:01:30.991476 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002) Oct 9 01:01:30.991547 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Oct 9 01:01:30.991633 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Oct 9 01:01:30.991645 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 9 01:01:30.991653 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Oct 9 01:01:30.991719 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Oct 9 01:01:30.991730 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Oct 9 01:01:30.991740 kernel: thunder_xcv, ver 1.0 Oct 9 01:01:30.991748 kernel: thunder_bgx, ver 1.0 Oct 9 01:01:30.991756 kernel: nicpf, ver 1.0 Oct 9 01:01:30.991763 kernel: nicvf, ver 1.0 Oct 9 01:01:30.991841 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 9 01:01:30.991903 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-09T01:01:30 UTC (1728435690) Oct 9 01:01:30.991913 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 9 01:01:30.991965 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Oct 9 01:01:30.992416 kernel: 
watchdog: Delayed init of the lockup detector failed: -19 Oct 9 01:01:30.992428 kernel: watchdog: Hard watchdog permanently disabled Oct 9 01:01:30.992437 kernel: NET: Registered PF_INET6 protocol family Oct 9 01:01:30.992445 kernel: Segment Routing with IPv6 Oct 9 01:01:30.992452 kernel: In-situ OAM (IOAM) with IPv6 Oct 9 01:01:30.992460 kernel: NET: Registered PF_PACKET protocol family Oct 9 01:01:30.992468 kernel: Key type dns_resolver registered Oct 9 01:01:30.992476 kernel: registered taskstats version 1 Oct 9 01:01:30.992484 kernel: Loading compiled-in X.509 certificates Oct 9 01:01:30.992492 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 80611b0a9480eaf6d787b908c6349fdb5d07fa81' Oct 9 01:01:30.992504 kernel: Key type .fscrypt registered Oct 9 01:01:30.992512 kernel: Key type fscrypt-provisioning registered Oct 9 01:01:30.992520 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 9 01:01:30.992527 kernel: ima: Allocated hash algorithm: sha1 Oct 9 01:01:30.992535 kernel: ima: No architecture policies found Oct 9 01:01:30.992543 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 9 01:01:30.992551 kernel: clk: Disabling unused clocks Oct 9 01:01:30.992559 kernel: Freeing unused kernel memory: 39552K Oct 9 01:01:30.992568 kernel: Run /init as init process Oct 9 01:01:30.992576 kernel: with arguments: Oct 9 01:01:30.992583 kernel: /init Oct 9 01:01:30.992591 kernel: with environment: Oct 9 01:01:30.992598 kernel: HOME=/ Oct 9 01:01:30.992606 kernel: TERM=linux Oct 9 01:01:30.992614 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 9 01:01:30.992623 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 01:01:30.992635 systemd[1]: Detected virtualization kvm. Oct 9 01:01:30.992644 systemd[1]: Detected architecture arm64. Oct 9 01:01:30.992651 systemd[1]: Running in initrd. Oct 9 01:01:30.992659 systemd[1]: No hostname configured, using default hostname. Oct 9 01:01:30.992667 systemd[1]: Hostname set to . Oct 9 01:01:30.992677 systemd[1]: Initializing machine ID from VM UUID. Oct 9 01:01:30.992685 systemd[1]: Queued start job for default target initrd.target. Oct 9 01:01:30.992693 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 01:01:30.992703 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 01:01:30.992712 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 9 01:01:30.992720 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 01:01:30.992728 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 9 01:01:30.992737 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 9 01:01:30.992747 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 9 01:01:30.992755 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
Oct 9 01:01:30.992765 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 01:01:30.992773 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 01:01:30.992781 systemd[1]: Reached target paths.target - Path Units. Oct 9 01:01:30.992789 systemd[1]: Reached target slices.target - Slice Units. Oct 9 01:01:30.992797 systemd[1]: Reached target swap.target - Swaps. Oct 9 01:01:30.992805 systemd[1]: Reached target timers.target - Timer Units. Oct 9 01:01:30.992821 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 01:01:30.992829 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 01:01:30.992839 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 9 01:01:30.992847 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 9 01:01:30.992855 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 01:01:30.992863 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 01:01:30.992871 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 01:01:30.992879 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 01:01:30.992887 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 9 01:01:30.992895 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 01:01:30.992904 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 9 01:01:30.992913 systemd[1]: Starting systemd-fsck-usr.service... Oct 9 01:01:30.992934 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 01:01:30.992953 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 01:01:30.992961 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:01:30.992969 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 9 01:01:30.992977 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 01:01:30.992985 systemd[1]: Finished systemd-fsck-usr.service. Oct 9 01:01:30.993025 systemd-journald[236]: Collecting audit messages is disabled. Oct 9 01:01:30.993048 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 01:01:30.993057 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:01:30.993066 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 9 01:01:30.993075 kernel: Bridge firewalling registered Oct 9 01:01:30.993083 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 01:01:30.993100 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 01:01:30.993109 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 01:01:30.993119 systemd-journald[236]: Journal started Oct 9 01:01:30.993140 systemd-journald[236]: Runtime Journal (/run/log/journal/32d5ec6e1c9348baa4a6653788d6bb45) is 8.0M, max 76.5M, 68.5M free. Oct 9 01:01:30.995462 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Oct 9 01:01:30.951896 systemd-modules-load[237]: Inserted module 'overlay' Oct 9 01:01:30.998476 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 01:01:30.998501 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 01:01:30.975583 systemd-modules-load[237]: Inserted module 'br_netfilter' Oct 9 01:01:31.010229 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 9 01:01:31.011034 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 01:01:31.015726 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 01:01:31.025195 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 9 01:01:31.026756 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 01:01:31.028462 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 01:01:31.032060 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 01:01:31.041253 dracut-cmdline[269]: dracut-dracut-053 Oct 9 01:01:31.048095 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=d2d67b5440410ae2d0aa86eba97891969be0a7a421fa55f13442706ef7ed2a5e Oct 9 01:01:31.072362 systemd-resolved[275]: Positive Trust Anchors: Oct 9 01:01:31.072433 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 01:01:31.072466 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 01:01:31.082141 systemd-resolved[275]: Defaulting to hostname 'linux'. Oct 9 01:01:31.083768 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 01:01:31.084999 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 01:01:31.128005 kernel: SCSI subsystem initialized Oct 9 01:01:31.132956 kernel: Loading iSCSI transport class v2.0-870. Oct 9 01:01:31.139978 kernel: iscsi: registered transport (tcp) Oct 9 01:01:31.160944 kernel: iscsi: registered transport (qla4xxx) Oct 9 01:01:31.160986 kernel: QLogic iSCSI HBA Driver Oct 9 01:01:31.206880 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 9 01:01:31.212107 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 9 01:01:31.232184 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 9 01:01:31.232260 kernel: device-mapper: uevent: version 1.0.3 Oct 9 01:01:31.233799 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 9 01:01:31.292974 kernel: raid6: neonx8 gen() 15650 MB/s Oct 9 01:01:31.310004 kernel: raid6: neonx4 gen() 15593 MB/s Oct 9 01:01:31.326963 kernel: raid6: neonx2 gen() 13202 MB/s Oct 9 01:01:31.343979 kernel: raid6: neonx1 gen() 10450 MB/s Oct 9 01:01:31.360963 kernel: raid6: int64x8 gen() 6874 MB/s Oct 9 01:01:31.378004 kernel: raid6: int64x4 gen() 7306 MB/s Oct 9 01:01:31.394990 kernel: raid6: int64x2 gen() 6099 MB/s Oct 9 01:01:31.411986 kernel: raid6: int64x1 gen() 5044 MB/s Oct 9 01:01:31.412068 kernel: raid6: using algorithm neonx8 gen() 15650 MB/s Oct 9 01:01:31.429030 kernel: raid6: .... xor() 11918 MB/s, rmw enabled Oct 9 01:01:31.429116 kernel: raid6: using neon recovery algorithm Oct 9 01:01:31.433975 kernel: xor: measuring software checksum speed Oct 9 01:01:31.434045 kernel: 8regs : 19807 MB/sec Oct 9 01:01:31.434058 kernel: 32regs : 19664 MB/sec Oct 9 01:01:31.434953 kernel: arm64_neon : 26927 MB/sec Oct 9 01:01:31.434973 kernel: xor: using function: arm64_neon (26927 MB/sec) Oct 9 01:01:31.489988 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 9 01:01:31.504626 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 9 01:01:31.511165 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 01:01:31.528457 systemd-udevd[455]: Using default interface naming scheme 'v255'. Oct 9 01:01:31.531802 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 01:01:31.541150 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 9 01:01:31.555985 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation Oct 9 01:01:31.588534 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 01:01:31.595085 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 01:01:31.644506 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 01:01:31.650097 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 9 01:01:31.669866 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 9 01:01:31.673153 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 01:01:31.673752 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 01:01:31.674788 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 01:01:31.684156 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 9 01:01:31.698412 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 9 01:01:31.792603 kernel: scsi host0: Virtio SCSI HBA Oct 9 01:01:31.794542 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 9 01:01:31.794588 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Oct 9 01:01:31.794623 kernel: ACPI: bus type USB registered Oct 9 01:01:31.794941 kernel: usbcore: registered new interface driver usbfs Oct 9 01:01:31.796039 kernel: usbcore: registered new interface driver hub Oct 9 01:01:31.796077 kernel: usbcore: registered new device driver usb Oct 9 01:01:31.809686 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 01:01:31.809813 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Oct 9 01:01:31.811796 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 01:01:31.812499 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 01:01:31.812638 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:01:31.813972 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:01:31.826178 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:01:31.842708 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 9 01:01:31.842980 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Oct 9 01:01:31.843080 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Oct 9 01:01:31.843165 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 9 01:01:31.843411 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Oct 9 01:01:31.844762 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Oct 9 01:01:31.845403 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:01:31.850173 kernel: hub 1-0:1.0: USB hub found Oct 9 01:01:31.850347 kernel: hub 1-0:1.0: 4 ports detected Oct 9 01:01:31.850429 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Oct 9 01:01:31.850149 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 01:01:31.852312 kernel: hub 2-0:1.0: USB hub found Oct 9 01:01:31.852471 kernel: hub 2-0:1.0: 4 ports detected Oct 9 01:01:31.863007 kernel: sr 0:0:0:0: Power-on or device reset occurred Oct 9 01:01:31.864174 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Oct 9 01:01:31.864347 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 9 01:01:31.865952 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Oct 9 01:01:31.877955 kernel: sd 0:0:0:1: Power-on or device reset occurred Oct 9 01:01:31.878985 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Oct 9 01:01:31.879143 kernel: sd 0:0:0:1: [sda] Write Protect is off Oct 9 01:01:31.880330 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Oct 9 01:01:31.880493 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Oct 9 01:01:31.883408 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 01:01:31.889111 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 9 01:01:31.889162 kernel: GPT:17805311 != 80003071 Oct 9 01:01:31.889172 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 9 01:01:31.890239 kernel: GPT:17805311 != 80003071 Oct 9 01:01:31.890271 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 9 01:01:31.890282 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 9 01:01:31.891960 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Oct 9 01:01:31.927961 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (523) Oct 9 01:01:31.933189 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Oct 9 01:01:31.937951 kernel: BTRFS: device fsid c25b3a2f-539f-42a7-8842-97b35e474647 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (514) Oct 9 01:01:31.949234 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Oct 9 01:01:31.954623 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Oct 9 01:01:31.961891 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Oct 9 01:01:31.962720 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Oct 9 01:01:31.969127 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 9 01:01:31.975246 disk-uuid[580]: Primary Header is updated. Oct 9 01:01:31.975246 disk-uuid[580]: Secondary Entries is updated. Oct 9 01:01:31.975246 disk-uuid[580]: Secondary Header is updated. Oct 9 01:01:31.980946 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 9 01:01:32.090988 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Oct 9 01:01:32.232950 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Oct 9 01:01:32.233013 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Oct 9 01:01:32.233941 kernel: usbcore: registered new interface driver usbhid Oct 9 01:01:32.233963 kernel: usbhid: USB HID core driver Oct 9 01:01:32.333052 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Oct 9 01:01:32.463981 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Oct 9 01:01:32.517979 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Oct 9 01:01:32.998080 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 9 01:01:33.000956 disk-uuid[581]: The operation has completed successfully. Oct 9 01:01:33.044732 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 9 01:01:33.045564 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 9 01:01:33.070105 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 9 01:01:33.079078 sh[599]: Success Oct 9 01:01:33.098964 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 9 01:01:33.161392 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 9 01:01:33.164204 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 9 01:01:33.168813 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 9 01:01:33.187428 kernel: BTRFS info (device dm-0): first mount of filesystem c25b3a2f-539f-42a7-8842-97b35e474647 Oct 9 01:01:33.187540 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 9 01:01:33.187569 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 9 01:01:33.188782 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 9 01:01:33.188824 kernel: BTRFS info (device dm-0): using free space tree Oct 9 01:01:33.193942 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 9 01:01:33.195697 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 9 01:01:33.197170 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 9 01:01:33.204113 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
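verity-setup above prepares /dev/mapper/usr, with device-mapper's verity target checking blocks of the usr filesystem against a sha256 hash tree (accelerated here by the ARMv8 sha256-ce implementation, per the log). Below is a conceptual sketch of the per-block check only; real dm-verity keeps the Merkle tree and does the verification inside the kernel, and the verify_blocks helper and its arguments are invented for illustration.

    import hashlib

    BLOCK = 4096  # dm-verity's default data block size

    def verify_blocks(device_path, expected_digests):
        # Read the device block by block and compare each block's sha256 with the
        # digest recorded in the trusted hash tree.
        with open(device_path, "rb") as dev:
            for index, expected in enumerate(expected_digests):
                actual = hashlib.sha256(dev.read(BLOCK)).hexdigest()
                if actual != expected:
                    raise IOError("verity mismatch in block %d" % index)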
Oct 9 01:01:33.206074 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 9 01:01:33.218631 kernel: BTRFS info (device sda6): first mount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b Oct 9 01:01:33.218674 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Oct 9 01:01:33.218685 kernel: BTRFS info (device sda6): using free space tree Oct 9 01:01:33.222963 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 9 01:01:33.223044 kernel: BTRFS info (device sda6): auto enabling async discard Oct 9 01:01:33.235152 kernel: BTRFS info (device sda6): last unmount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b Oct 9 01:01:33.235486 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 9 01:01:33.240616 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 9 01:01:33.249138 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 9 01:01:33.338959 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 01:01:33.345350 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 01:01:33.385056 ignition[692]: Ignition 2.19.0 Oct 9 01:01:33.385066 ignition[692]: Stage: fetch-offline Oct 9 01:01:33.385105 ignition[692]: no configs at "/usr/lib/ignition/base.d" Oct 9 01:01:33.385113 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 9 01:01:33.385267 ignition[692]: parsed url from cmdline: "" Oct 9 01:01:33.385270 ignition[692]: no config URL provided Oct 9 01:01:33.385274 ignition[692]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 01:01:33.385280 ignition[692]: no config at "/usr/lib/ignition/user.ign" Oct 9 01:01:33.385285 ignition[692]: failed to fetch config: resource requires networking Oct 9 01:01:33.388971 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 01:01:33.385460 ignition[692]: Ignition finished successfully Oct 9 01:01:33.390039 systemd-networkd[784]: lo: Link UP Oct 9 01:01:33.390042 systemd-networkd[784]: lo: Gained carrier Oct 9 01:01:33.392901 systemd-networkd[784]: Enumeration completed Oct 9 01:01:33.393447 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 01:01:33.394458 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:01:33.394464 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 01:01:33.394966 systemd[1]: Reached target network.target - Network. Oct 9 01:01:33.399144 systemd-networkd[784]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:01:33.399148 systemd-networkd[784]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 01:01:33.399736 systemd-networkd[784]: eth0: Link UP Oct 9 01:01:33.399740 systemd-networkd[784]: eth0: Gained carrier Oct 9 01:01:33.399747 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:01:33.401243 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Oct 9 01:01:33.402513 systemd-networkd[784]: eth1: Link UP Oct 9 01:01:33.402516 systemd-networkd[784]: eth1: Gained carrier Oct 9 01:01:33.402523 systemd-networkd[784]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:01:33.416736 ignition[790]: Ignition 2.19.0 Oct 9 01:01:33.416746 ignition[790]: Stage: fetch Oct 9 01:01:33.416913 ignition[790]: no configs at "/usr/lib/ignition/base.d" Oct 9 01:01:33.416941 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 9 01:01:33.417034 ignition[790]: parsed url from cmdline: "" Oct 9 01:01:33.417037 ignition[790]: no config URL provided Oct 9 01:01:33.417042 ignition[790]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 01:01:33.417049 ignition[790]: no config at "/usr/lib/ignition/user.ign" Oct 9 01:01:33.417131 ignition[790]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Oct 9 01:01:33.417695 ignition[790]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Oct 9 01:01:33.438024 systemd-networkd[784]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 9 01:01:33.522025 systemd-networkd[784]: eth0: DHCPv4 address 78.46.183.65/32, gateway 172.31.1.1 acquired from 172.31.1.1 Oct 9 01:01:33.617952 ignition[790]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Oct 9 01:01:33.624467 ignition[790]: GET result: OK Oct 9 01:01:33.624691 ignition[790]: parsing config with SHA512: 559176d5fa626f42f9e079049cc29a880ff844cf83482fc5369c91290649e19f2780d1d6bda0a3831c15016d59f27040796518975aa359c74bec36782a3b9a64 Oct 9 01:01:33.633441 unknown[790]: fetched base config from "system" Oct 9 01:01:33.633461 unknown[790]: fetched base config from "system" Oct 9 01:01:33.634291 ignition[790]: fetch: fetch complete Oct 9 01:01:33.633472 unknown[790]: fetched user config from "hetzner" Oct 9 01:01:33.634302 ignition[790]: fetch: fetch passed Oct 9 01:01:33.634763 ignition[790]: Ignition finished successfully Oct 9 01:01:33.637387 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 9 01:01:33.645119 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 9 01:01:33.656957 ignition[797]: Ignition 2.19.0 Oct 9 01:01:33.656968 ignition[797]: Stage: kargs Oct 9 01:01:33.657150 ignition[797]: no configs at "/usr/lib/ignition/base.d" Oct 9 01:01:33.657162 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 9 01:01:33.658198 ignition[797]: kargs: kargs passed Oct 9 01:01:33.660288 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 9 01:01:33.658248 ignition[797]: Ignition finished successfully Oct 9 01:01:33.668129 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 9 01:01:33.681770 ignition[803]: Ignition 2.19.0 Oct 9 01:01:33.683077 ignition[803]: Stage: disks Oct 9 01:01:33.683497 ignition[803]: no configs at "/usr/lib/ignition/base.d" Oct 9 01:01:33.683531 ignition[803]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 9 01:01:33.685967 ignition[803]: disks: disks passed Oct 9 01:01:33.686083 ignition[803]: Ignition finished successfully Oct 9 01:01:33.687441 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 9 01:01:33.688759 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
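The fetch stage above shows Ignition racing DHCP: the first GET to the Hetzner metadata endpoint fails with "network is unreachable", the interfaces then pick up their leases, and the retry succeeds, after which the config's SHA512 is logged and parsed. A rough Python sketch of that fetch-and-retry flow; Ignition itself is a Go program, so everything below (function name, retry count, timeout) is illustrative rather than its actual code.

    import hashlib, time, urllib.request

    USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"  # endpoint from the log

    def fetch_userdata(retries=10, delay=1.0):
        for attempt in range(1, retries + 1):
            try:
                with urllib.request.urlopen(USERDATA_URL, timeout=10) as resp:
                    data = resp.read()
                # Ignition logs the SHA512 of the config it is about to parse.
                print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())
                return data
            except OSError as err:  # covers "network is unreachable" and HTTP errors
                print("GET error: attempt #%d: %s" % (attempt, err))
                time.sleep(delay)
        raise RuntimeError("could not fetch userdata")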
Oct 9 01:01:33.689477 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 01:01:33.690623 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 01:01:33.691719 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 01:01:33.692704 systemd[1]: Reached target basic.target - Basic System. Oct 9 01:01:33.698114 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 9 01:01:33.722760 systemd-fsck[812]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Oct 9 01:01:33.726222 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 9 01:01:33.734487 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 9 01:01:33.787968 kernel: EXT4-fs (sda9): mounted filesystem 3a4adf89-ce2b-46a9-8e1a-433a27a27d16 r/w with ordered data mode. Quota mode: none. Oct 9 01:01:33.789573 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 9 01:01:33.791596 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 9 01:01:33.804064 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 01:01:33.807193 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 9 01:01:33.809362 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Oct 9 01:01:33.812766 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 9 01:01:33.814129 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 01:01:33.817664 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 9 01:01:33.823987 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (820) Oct 9 01:01:33.828954 kernel: BTRFS info (device sda6): first mount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b Oct 9 01:01:33.829021 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Oct 9 01:01:33.829042 kernel: BTRFS info (device sda6): using free space tree Oct 9 01:01:33.830321 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 9 01:01:33.838606 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 9 01:01:33.838679 kernel: BTRFS info (device sda6): auto enabling async discard Oct 9 01:01:33.846837 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 9 01:01:33.893506 initrd-setup-root[847]: cut: /sysroot/etc/passwd: No such file or directory Oct 9 01:01:33.900264 initrd-setup-root[854]: cut: /sysroot/etc/group: No such file or directory Oct 9 01:01:33.905434 coreos-metadata[822]: Oct 09 01:01:33.905 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Oct 9 01:01:33.908557 initrd-setup-root[861]: cut: /sysroot/etc/shadow: No such file or directory Oct 9 01:01:33.909988 coreos-metadata[822]: Oct 09 01:01:33.908 INFO Fetch successful Oct 9 01:01:33.909988 coreos-metadata[822]: Oct 09 01:01:33.908 INFO wrote hostname ci-4116-0-0-5-47b5cb1617 to /sysroot/etc/hostname Oct 9 01:01:33.910214 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 9 01:01:33.915228 initrd-setup-root[869]: cut: /sysroot/etc/gshadow: No such file or directory Oct 9 01:01:34.012674 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 9 01:01:34.018018 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
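flatcar-metadata-hostname above fetches the hostname from the Hetzner metadata service and writes it into the new root before the switch. A minimal sketch of that step; the real agent is the coreos-metadata binary shown in the log, not this script, and the write_hostname name and the sysroot default are assumptions.

    import urllib.request

    HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"  # from the log

    def write_hostname(sysroot="/sysroot"):
        with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
            hostname = resp.read().decode().strip()
        # the log: "wrote hostname ... to /sysroot/etc/hostname"
        with open(sysroot + "/etc/hostname", "w") as f:
            f.write(hostname + "\n")
        return hostname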
Oct 9 01:01:34.024661 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 9 01:01:34.026538 kernel: BTRFS info (device sda6): last unmount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b Oct 9 01:01:34.048125 ignition[937]: INFO : Ignition 2.19.0 Oct 9 01:01:34.048125 ignition[937]: INFO : Stage: mount Oct 9 01:01:34.052281 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 01:01:34.052281 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 9 01:01:34.052281 ignition[937]: INFO : mount: mount passed Oct 9 01:01:34.052281 ignition[937]: INFO : Ignition finished successfully Oct 9 01:01:34.051747 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 9 01:01:34.061063 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 9 01:01:34.061803 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 9 01:01:34.189321 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 9 01:01:34.199138 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 01:01:34.210985 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (949) Oct 9 01:01:34.213112 kernel: BTRFS info (device sda6): first mount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b Oct 9 01:01:34.213159 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Oct 9 01:01:34.213178 kernel: BTRFS info (device sda6): using free space tree Oct 9 01:01:34.215962 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 9 01:01:34.216033 kernel: BTRFS info (device sda6): auto enabling async discard Oct 9 01:01:34.219073 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 9 01:01:34.238717 ignition[966]: INFO : Ignition 2.19.0 Oct 9 01:01:34.238717 ignition[966]: INFO : Stage: files Oct 9 01:01:34.239752 ignition[966]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 01:01:34.239752 ignition[966]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 9 01:01:34.241077 ignition[966]: DEBUG : files: compiled without relabeling support, skipping Oct 9 01:01:34.241077 ignition[966]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 9 01:01:34.241077 ignition[966]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 9 01:01:34.244668 ignition[966]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 9 01:01:34.250377 ignition[966]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 9 01:01:34.250377 ignition[966]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 9 01:01:34.250377 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 9 01:01:34.250377 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Oct 9 01:01:34.250377 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Oct 9 01:01:34.250377 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Oct 9 01:01:34.245084 unknown[966]: wrote ssh authorized keys file for user: core Oct 9 01:01:34.335897 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 9 01:01:34.503576 ignition[966]: 
INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Oct 9 01:01:34.505510 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 9 01:01:34.505510 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 9 01:01:34.505510 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 9 01:01:34.505510 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 9 01:01:34.505510 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 01:01:34.505510 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 01:01:34.505510 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 01:01:34.505510 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 01:01:34.505510 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 01:01:34.505510 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 01:01:34.505510 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 9 01:01:34.505510 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 9 01:01:34.505510 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 9 01:01:34.505510 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Oct 9 01:01:34.774157 systemd-networkd[784]: eth1: Gained IPv6LL Oct 9 01:01:35.030282 systemd-networkd[784]: eth0: Gained IPv6LL Oct 9 01:01:35.067956 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 9 01:01:35.320185 ignition[966]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 9 01:01:35.320185 ignition[966]: INFO : files: op(c): [started] processing unit "containerd.service" Oct 9 01:01:35.322555 ignition[966]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 9 01:01:35.322555 ignition[966]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Oct 9 01:01:35.322555 ignition[966]: INFO : files: op(c): [finished] processing unit "containerd.service" Oct 9 01:01:35.322555 ignition[966]: INFO : 
files: op(e): [started] processing unit "prepare-helm.service" Oct 9 01:01:35.322555 ignition[966]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 01:01:35.322555 ignition[966]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 01:01:35.322555 ignition[966]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Oct 9 01:01:35.322555 ignition[966]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Oct 9 01:01:35.322555 ignition[966]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Oct 9 01:01:35.322555 ignition[966]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Oct 9 01:01:35.322555 ignition[966]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Oct 9 01:01:35.322555 ignition[966]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Oct 9 01:01:35.322555 ignition[966]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Oct 9 01:01:35.322555 ignition[966]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 9 01:01:35.340277 ignition[966]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 9 01:01:35.340277 ignition[966]: INFO : files: files passed Oct 9 01:01:35.340277 ignition[966]: INFO : Ignition finished successfully Oct 9 01:01:35.326150 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 9 01:01:35.334606 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 9 01:01:35.337526 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 9 01:01:35.340162 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 9 01:01:35.341152 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 9 01:01:35.350985 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 01:01:35.350985 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 9 01:01:35.353150 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 01:01:35.355213 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 01:01:35.357304 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 9 01:01:35.364105 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 9 01:01:35.402972 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 9 01:01:35.403231 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 9 01:01:35.406227 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 9 01:01:35.408046 systemd[1]: Reached target initrd.target - Initrd Default Target. 
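The files stage above finishes by writing whole units (prepare-helm.service) and drop-ins (10-use-cgroupfs.conf for containerd, 00-custom-metadata.conf for coreos-metadata) under the target root. The drop-in convention is plain files below /etc/systemd/system/<unit>.d/; a small sketch of that layout follows, with paths and unit names taken from the log and the helper name and contents being placeholders.

    import os

    def write_dropin(sysroot, unit, name, contents):
        # Drop-ins land at <sysroot>/etc/systemd/system/<unit>.d/<name>
        d = os.path.join(sysroot, "etc/systemd/system", unit + ".d")
        os.makedirs(d, exist_ok=True)
        with open(os.path.join(d, name), "w") as f:
            f.write(contents)

    # e.g. the containerd drop-in from earlier in the files stage:
    # write_dropin("/sysroot", "containerd.service", "10-use-cgroupfs.conf", "[Service]\n...")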
Oct 9 01:01:35.409705 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 9 01:01:35.420178 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 9 01:01:35.434089 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 01:01:35.441230 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 9 01:01:35.464456 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 9 01:01:35.466368 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 01:01:35.468727 systemd[1]: Stopped target timers.target - Timer Units. Oct 9 01:01:35.469944 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 9 01:01:35.470220 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 01:01:35.472053 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 9 01:01:35.473619 systemd[1]: Stopped target basic.target - Basic System. Oct 9 01:01:35.474826 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 9 01:01:35.476009 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 01:01:35.477080 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 9 01:01:35.478145 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 9 01:01:35.479101 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 01:01:35.480230 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 9 01:01:35.481273 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 9 01:01:35.482223 systemd[1]: Stopped target swap.target - Swaps. Oct 9 01:01:35.483008 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 9 01:01:35.483245 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 9 01:01:35.484474 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 9 01:01:35.485645 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 01:01:35.486719 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 9 01:01:35.487178 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 01:01:35.487962 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 9 01:01:35.488124 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 9 01:01:35.489490 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 9 01:01:35.489603 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 01:01:35.490697 systemd[1]: ignition-files.service: Deactivated successfully. Oct 9 01:01:35.490832 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 9 01:01:35.491694 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Oct 9 01:01:35.491900 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 9 01:01:35.502529 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 9 01:01:35.503096 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 9 01:01:35.503314 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 01:01:35.506163 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Oct 9 01:01:35.506678 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 9 01:01:35.506855 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 01:01:35.513186 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 9 01:01:35.513586 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 01:01:35.520036 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 9 01:01:35.520126 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 9 01:01:35.523724 ignition[1018]: INFO : Ignition 2.19.0 Oct 9 01:01:35.525994 ignition[1018]: INFO : Stage: umount Oct 9 01:01:35.525994 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 01:01:35.525994 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 9 01:01:35.525994 ignition[1018]: INFO : umount: umount passed Oct 9 01:01:35.525994 ignition[1018]: INFO : Ignition finished successfully Oct 9 01:01:35.528268 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 9 01:01:35.529957 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 9 01:01:35.534655 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 9 01:01:35.535514 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 9 01:01:35.535594 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 9 01:01:35.537479 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 9 01:01:35.537532 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 9 01:01:35.538415 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 9 01:01:35.538455 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 9 01:01:35.541044 systemd[1]: Stopped target network.target - Network. Oct 9 01:01:35.542317 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 9 01:01:35.542373 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 01:01:35.543760 systemd[1]: Stopped target paths.target - Path Units. Oct 9 01:01:35.546214 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 9 01:01:35.548256 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 01:01:35.549234 systemd[1]: Stopped target slices.target - Slice Units. Oct 9 01:01:35.550280 systemd[1]: Stopped target sockets.target - Socket Units. Oct 9 01:01:35.552502 systemd[1]: iscsid.socket: Deactivated successfully. Oct 9 01:01:35.552578 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 01:01:35.554586 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 9 01:01:35.554656 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 01:01:35.557829 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 9 01:01:35.558006 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 9 01:01:35.561964 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 9 01:01:35.562052 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 9 01:01:35.566214 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 9 01:01:35.567675 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 9 01:01:35.574764 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 9 01:01:35.574904 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Oct 9 01:01:35.576244 systemd-networkd[784]: eth0: DHCPv6 lease lost Oct 9 01:01:35.576779 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 9 01:01:35.577959 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 9 01:01:35.579993 systemd-networkd[784]: eth1: DHCPv6 lease lost Oct 9 01:01:35.581513 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 9 01:01:35.581622 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 9 01:01:35.582869 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 9 01:01:35.582917 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 01:01:35.584220 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 9 01:01:35.585986 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 9 01:01:35.587779 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 9 01:01:35.587842 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 9 01:01:35.594054 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 9 01:01:35.594571 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 9 01:01:35.594630 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 01:01:35.595348 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 9 01:01:35.595401 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 9 01:01:35.596192 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 9 01:01:35.596228 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 9 01:01:35.598640 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 01:01:35.613338 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 9 01:01:35.614967 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 01:01:35.616224 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 9 01:01:35.616289 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 9 01:01:35.617387 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 9 01:01:35.617426 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 01:01:35.618021 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 9 01:01:35.618067 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 9 01:01:35.619563 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 9 01:01:35.619613 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 9 01:01:35.620889 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 01:01:35.620955 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 01:01:35.629133 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 9 01:01:35.629678 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 9 01:01:35.629735 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 01:01:35.630896 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 01:01:35.632988 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:01:35.633985 systemd[1]: network-cleanup.service: Deactivated successfully. 
Oct 9 01:01:35.634078 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 9 01:01:35.639144 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 9 01:01:35.639275 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 9 01:01:35.640908 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 9 01:01:35.646101 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 9 01:01:35.659028 systemd[1]: Switching root. Oct 9 01:01:35.697822 systemd-journald[236]: Journal stopped Oct 9 01:01:36.574465 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Oct 9 01:01:36.574538 kernel: SELinux: policy capability network_peer_controls=1 Oct 9 01:01:36.574551 kernel: SELinux: policy capability open_perms=1 Oct 9 01:01:36.574560 kernel: SELinux: policy capability extended_socket_class=1 Oct 9 01:01:36.574573 kernel: SELinux: policy capability always_check_network=0 Oct 9 01:01:36.574586 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 9 01:01:36.574595 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 9 01:01:36.574607 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 9 01:01:36.574617 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 9 01:01:36.574626 kernel: audit: type=1403 audit(1728435695.885:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 9 01:01:36.574636 systemd[1]: Successfully loaded SELinux policy in 40.280ms. Oct 9 01:01:36.574655 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.983ms. Oct 9 01:01:36.574668 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 01:01:36.574678 systemd[1]: Detected virtualization kvm. Oct 9 01:01:36.574688 systemd[1]: Detected architecture arm64. Oct 9 01:01:36.574698 systemd[1]: Detected first boot. Oct 9 01:01:36.574708 systemd[1]: Hostname set to . Oct 9 01:01:36.574718 systemd[1]: Initializing machine ID from VM UUID. Oct 9 01:01:36.574729 zram_generator::config[1082]: No configuration found. Oct 9 01:01:36.574740 systemd[1]: Populated /etc with preset unit settings. Oct 9 01:01:36.574751 systemd[1]: Queued start job for default target multi-user.target. Oct 9 01:01:36.574762 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Oct 9 01:01:36.574772 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 9 01:01:36.574783 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 9 01:01:36.574793 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 9 01:01:36.574802 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 9 01:01:36.574812 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 9 01:01:36.574823 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 9 01:01:36.574835 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 9 01:01:36.574847 systemd[1]: Created slice user.slice - User and Session Slice. Oct 9 01:01:36.574857 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
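After the switch to the real root, systemd detects KVM, loads the SELinux policy, notes "Detected first boot", and initializes the machine ID from the VM UUID, i.e. the stable machine ID is derived from the hypervisor-provided UUID rather than generated randomly. A loose sketch of that idea, assuming the UUID is exposed through SMBIOS/DMI; systemd's real logic lives in its C sources, and the sysfs path and helper below are illustrative only.

    import pathlib, re

    def machine_id_from_vm_uuid():
        # Reading this sysfs attribute normally requires root.
        uuid = pathlib.Path("/sys/class/dmi/id/product_uuid").read_text().strip()
        # machine-id format: 32 lowercase hex characters, no dashes
        return re.sub(r"[^0-9a-f]", "", uuid.lower())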
Oct 9 01:01:36.574868 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 01:01:36.574878 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 9 01:01:36.574888 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 9 01:01:36.574898 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 9 01:01:36.574908 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 01:01:36.574918 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 9 01:01:36.574943 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 01:01:36.574955 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 9 01:01:36.574965 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 01:01:36.574980 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 01:01:36.574990 systemd[1]: Reached target slices.target - Slice Units. Oct 9 01:01:36.575000 systemd[1]: Reached target swap.target - Swaps. Oct 9 01:01:36.575010 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 9 01:01:36.575021 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 9 01:01:36.575031 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 9 01:01:36.575041 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 9 01:01:36.575052 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 01:01:36.575061 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 01:01:36.575071 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 01:01:36.575083 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 9 01:01:36.575094 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 9 01:01:36.575104 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 9 01:01:36.575116 systemd[1]: Mounting media.mount - External Media Directory... Oct 9 01:01:36.575126 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 9 01:01:36.575136 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 9 01:01:36.575149 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 9 01:01:36.575214 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 9 01:01:36.575227 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 01:01:36.575241 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 01:01:36.575254 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 9 01:01:36.575268 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 01:01:36.575279 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 01:01:36.575289 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 01:01:36.575300 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Oct 9 01:01:36.575311 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 01:01:36.575321 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 9 01:01:36.575334 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Oct 9 01:01:36.575345 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Oct 9 01:01:36.575355 kernel: fuse: init (API version 7.39) Oct 9 01:01:36.575365 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 01:01:36.575377 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 01:01:36.575387 kernel: loop: module loaded Oct 9 01:01:36.575397 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 9 01:01:36.575407 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 9 01:01:36.575419 kernel: ACPI: bus type drm_connector registered Oct 9 01:01:36.575429 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 01:01:36.575462 systemd-journald[1164]: Collecting audit messages is disabled. Oct 9 01:01:36.575484 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 9 01:01:36.575495 systemd-journald[1164]: Journal started Oct 9 01:01:36.575516 systemd-journald[1164]: Runtime Journal (/run/log/journal/32d5ec6e1c9348baa4a6653788d6bb45) is 8.0M, max 76.5M, 68.5M free. Oct 9 01:01:36.580142 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 01:01:36.582586 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 9 01:01:36.583329 systemd[1]: Mounted media.mount - External Media Directory. Oct 9 01:01:36.585112 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 9 01:01:36.585758 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 9 01:01:36.586909 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 9 01:01:36.588318 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 9 01:01:36.590312 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 01:01:36.591247 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 9 01:01:36.591397 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 9 01:01:36.592218 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 01:01:36.592356 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 01:01:36.593395 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 01:01:36.593548 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 01:01:36.596488 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 01:01:36.596635 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 01:01:36.598322 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 9 01:01:36.598465 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 9 01:01:36.600011 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 01:01:36.600189 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Oct 9 01:01:36.601191 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 01:01:36.602088 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 9 01:01:36.603328 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 9 01:01:36.613535 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 9 01:01:36.621141 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 9 01:01:36.634070 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 9 01:01:36.634663 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 9 01:01:36.640104 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 9 01:01:36.650501 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 9 01:01:36.651507 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 01:01:36.663568 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 9 01:01:36.664598 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 01:01:36.670144 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 01:01:36.672530 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 01:01:36.693194 systemd-journald[1164]: Time spent on flushing to /var/log/journal/32d5ec6e1c9348baa4a6653788d6bb45 is 29.418ms for 1110 entries. Oct 9 01:01:36.693194 systemd-journald[1164]: System Journal (/var/log/journal/32d5ec6e1c9348baa4a6653788d6bb45) is 8.0M, max 584.8M, 576.8M free. Oct 9 01:01:36.734389 systemd-journald[1164]: Received client request to flush runtime journal. Oct 9 01:01:36.695516 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 01:01:36.699743 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 9 01:01:36.701375 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 9 01:01:36.703555 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 9 01:01:36.712647 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 9 01:01:36.720742 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 9 01:01:36.740417 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 9 01:01:36.746286 udevadm[1223]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 9 01:01:36.756205 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 01:01:36.761265 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Oct 9 01:01:36.761287 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Oct 9 01:01:36.769400 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 01:01:36.778363 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 9 01:01:36.811344 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Oct 9 01:01:36.816182 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 01:01:36.831130 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Oct 9 01:01:36.831148 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Oct 9 01:01:36.836346 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 01:01:37.200269 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 9 01:01:37.209122 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 01:01:37.247403 systemd-udevd[1244]: Using default interface naming scheme 'v255'. Oct 9 01:01:37.270676 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 01:01:37.286101 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 01:01:37.331456 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 9 01:01:37.350434 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Oct 9 01:01:37.384067 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 9 01:01:37.391987 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1250) Oct 9 01:01:37.402005 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1250) Oct 9 01:01:37.445319 kernel: mousedev: PS/2 mouse device common for all mice Oct 9 01:01:37.478487 systemd-networkd[1255]: lo: Link UP Oct 9 01:01:37.478499 systemd-networkd[1255]: lo: Gained carrier Oct 9 01:01:37.480784 systemd-networkd[1255]: Enumeration completed Oct 9 01:01:37.481911 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 01:01:37.482464 systemd-networkd[1255]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:01:37.482468 systemd-networkd[1255]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 01:01:37.483254 systemd-networkd[1255]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:01:37.483265 systemd-networkd[1255]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 01:01:37.483792 systemd-networkd[1255]: eth0: Link UP Oct 9 01:01:37.483796 systemd-networkd[1255]: eth0: Gained carrier Oct 9 01:01:37.483808 systemd-networkd[1255]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:01:37.493330 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 9 01:01:37.496505 systemd-networkd[1255]: eth1: Link UP Oct 9 01:01:37.496509 systemd-networkd[1255]: eth1: Gained carrier Oct 9 01:01:37.496529 systemd-networkd[1255]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Oct 9 01:01:37.515953 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1250) Oct 9 01:01:37.542114 systemd-networkd[1255]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 9 01:01:37.570971 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Oct 9 01:01:37.571038 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Oct 9 01:01:37.571052 kernel: [drm] features: -context_init Oct 9 01:01:37.577953 kernel: [drm] number of scanouts: 1 Oct 9 01:01:37.578530 kernel: [drm] number of cap sets: 0 Oct 9 01:01:37.586946 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Oct 9 01:01:37.599856 kernel: Console: switching to colour frame buffer device 160x50 Oct 9 01:01:37.603471 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Oct 9 01:01:37.615021 systemd-networkd[1255]: eth0: DHCPv4 address 78.46.183.65/32, gateway 172.31.1.1 acquired from 172.31.1.1 Oct 9 01:01:37.617067 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Oct 9 01:01:37.626317 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:01:37.635520 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 01:01:37.635862 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:01:37.644210 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:01:37.712653 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:01:37.759364 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 9 01:01:37.765100 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 9 01:01:37.782995 lvm[1306]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 01:01:37.805418 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 9 01:01:37.807445 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 01:01:37.821218 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 9 01:01:37.825351 lvm[1312]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 01:01:37.852258 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 9 01:01:37.854578 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 01:01:37.855803 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 9 01:01:37.855837 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 01:01:37.856730 systemd[1]: Reached target machines.target - Containers. Oct 9 01:01:37.858638 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 9 01:01:37.864098 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 9 01:01:37.867122 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 9 01:01:37.868146 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 01:01:37.870237 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Oct 9 01:01:37.880251 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 9 01:01:37.886016 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 9 01:01:37.887649 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 9 01:01:37.905331 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 9 01:01:37.914216 kernel: loop0: detected capacity change from 0 to 113456 Oct 9 01:01:37.918719 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 9 01:01:37.920114 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 9 01:01:37.929278 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 9 01:01:37.945963 kernel: loop1: detected capacity change from 0 to 116808 Oct 9 01:01:37.977009 kernel: loop2: detected capacity change from 0 to 8 Oct 9 01:01:37.995974 kernel: loop3: detected capacity change from 0 to 194512 Oct 9 01:01:38.037042 kernel: loop4: detected capacity change from 0 to 113456 Oct 9 01:01:38.051951 kernel: loop5: detected capacity change from 0 to 116808 Oct 9 01:01:38.069226 kernel: loop6: detected capacity change from 0 to 8 Oct 9 01:01:38.070981 kernel: loop7: detected capacity change from 0 to 194512 Oct 9 01:01:38.086702 (sd-merge)[1334]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Oct 9 01:01:38.087431 (sd-merge)[1334]: Merged extensions into '/usr'. Oct 9 01:01:38.091800 systemd[1]: Reloading requested from client PID 1320 ('systemd-sysext') (unit systemd-sysext.service)... Oct 9 01:01:38.091819 systemd[1]: Reloading... Oct 9 01:01:38.156112 zram_generator::config[1365]: No configuration found. Oct 9 01:01:38.285708 ldconfig[1316]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 9 01:01:38.293815 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:01:38.356537 systemd[1]: Reloading finished in 264 ms. Oct 9 01:01:38.376296 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 9 01:01:38.377777 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 9 01:01:38.387101 systemd[1]: Starting ensure-sysext.service... Oct 9 01:01:38.391128 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 9 01:01:38.397124 systemd[1]: Reloading requested from client PID 1406 ('systemctl') (unit ensure-sysext.service)... Oct 9 01:01:38.397143 systemd[1]: Reloading... Oct 9 01:01:38.426015 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 9 01:01:38.426286 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 9 01:01:38.426971 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 9 01:01:38.428182 systemd-tmpfiles[1407]: ACLs are not supported, ignoring. Oct 9 01:01:38.428243 systemd-tmpfiles[1407]: ACLs are not supported, ignoring. Oct 9 01:01:38.431772 systemd-tmpfiles[1407]: Detected autofs mount point /boot during canonicalization of boot. 
Oct 9 01:01:38.431783 systemd-tmpfiles[1407]: Skipping /boot Oct 9 01:01:38.442332 systemd-tmpfiles[1407]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 01:01:38.442347 systemd-tmpfiles[1407]: Skipping /boot Oct 9 01:01:38.484952 zram_generator::config[1434]: No configuration found. Oct 9 01:01:38.585734 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:01:38.644908 systemd[1]: Reloading finished in 247 ms. Oct 9 01:01:38.665116 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 01:01:38.678235 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 9 01:01:38.683126 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 9 01:01:38.687105 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 9 01:01:38.692118 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 01:01:38.703194 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 9 01:01:38.713078 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 01:01:38.716203 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 01:01:38.722380 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 01:01:38.728225 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 01:01:38.732255 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 01:01:38.735597 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 01:01:38.735763 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 01:01:38.742284 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 01:01:38.742459 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 01:01:38.763017 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 01:01:38.763229 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 01:01:38.771767 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 9 01:01:38.775792 systemd[1]: Finished ensure-sysext.service. Oct 9 01:01:38.779852 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 9 01:01:38.788748 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 01:01:38.797490 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 01:01:38.810477 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 01:01:38.816113 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 01:01:38.823438 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 01:01:38.836119 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 9 01:01:38.841347 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 9 01:01:38.842563 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
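The ListenStream= warning logged during both reloads above (docker.socket still points at the legacy /var/run/docker.sock path) can be silenced with a small drop-in that resets the socket path; a sketch using a hypothetical drop-in file name:

    # /etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock

followed by systemctl daemon-reload; the empty ListenStream= line clears the value inherited from the shipped unit before the new path is set.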
Oct 9 01:01:38.842726 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 01:01:38.848583 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 9 01:01:38.849532 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 01:01:38.856239 systemd-resolved[1484]: Positive Trust Anchors: Oct 9 01:01:38.856315 systemd-resolved[1484]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 01:01:38.856347 systemd-resolved[1484]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 01:01:38.859521 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 01:01:38.860517 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 01:01:38.860655 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 01:01:38.872781 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 9 01:01:38.876416 systemd-resolved[1484]: Using system hostname 'ci-4116-0-0-5-47b5cb1617'. Oct 9 01:01:38.881483 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 01:01:38.882322 systemd[1]: Reached target network.target - Network. Oct 9 01:01:38.882775 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 01:01:38.883876 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 01:01:38.883963 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 01:01:38.883990 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 9 01:01:38.886378 augenrules[1539]: No rules Oct 9 01:01:38.889138 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 01:01:38.889697 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 01:01:38.931227 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 9 01:01:38.932388 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 01:01:38.933617 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 9 01:01:38.936814 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 9 01:01:38.937737 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 9 01:01:38.938603 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 9 01:01:38.938639 systemd[1]: Reached target paths.target - Path Units. Oct 9 01:01:38.939397 systemd[1]: Reached target time-set.target - System Time Set. 
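The resolver and time-sync state established above can be checked interactively; a sketch using the standard systemd clients:

    resolvectl status               # per-link DNS servers plus the trust anchors listed above
    resolvectl query example.com    # ask systemd-resolved to resolve a name
    timedatectl timesync-status     # server, poll interval and offset for systemd-timesyncd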
Oct 9 01:01:38.940241 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 9 01:01:38.941064 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 9 01:01:38.942054 systemd[1]: Reached target timers.target - Timer Units. Oct 9 01:01:38.943906 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 9 01:01:38.946314 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 9 01:01:38.949317 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 9 01:01:38.951556 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 9 01:01:38.952839 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 01:01:38.954099 systemd[1]: Reached target basic.target - Basic System. Oct 9 01:01:38.955520 systemd[1]: System is tainted: cgroupsv1 Oct 9 01:01:38.955743 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 9 01:01:38.955914 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 9 01:01:38.958076 systemd[1]: Starting containerd.service - containerd container runtime... Oct 9 01:01:38.963242 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 9 01:01:38.968097 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 9 01:01:38.973803 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 9 01:01:38.979118 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 9 01:01:38.984435 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 9 01:01:38.987621 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 9 01:01:38.994080 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 9 01:01:39.001170 jq[1553]: false Oct 9 01:01:39.004464 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 9 01:01:39.017174 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 9 01:01:39.025008 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 9 01:01:39.036227 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 9 01:01:39.037361 dbus-daemon[1551]: [system] SELinux support is enabled Oct 9 01:01:39.046328 systemd[1]: Starting update-engine.service - Update Engine... Oct 9 01:01:39.048465 coreos-metadata[1550]: Oct 09 01:01:39.047 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Oct 9 01:01:39.051218 coreos-metadata[1550]: Oct 09 01:01:39.050 INFO Fetch successful Oct 9 01:01:39.057080 coreos-metadata[1550]: Oct 09 01:01:39.052 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Oct 9 01:01:39.057080 coreos-metadata[1550]: Oct 09 01:01:39.054 INFO Fetch successful Oct 9 01:01:39.052724 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 9 01:01:39.055369 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
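The Hetzner metadata endpoints that coreos-metadata fetched above can be queried directly from the instance; a sketch, assuming curl is available in the environment it is run from:

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks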
Oct 9 01:01:39.069153 extend-filesystems[1554]: Found loop4 Oct 9 01:01:39.069153 extend-filesystems[1554]: Found loop5 Oct 9 01:01:39.069153 extend-filesystems[1554]: Found loop6 Oct 9 01:01:39.069153 extend-filesystems[1554]: Found loop7 Oct 9 01:01:39.069153 extend-filesystems[1554]: Found sda Oct 9 01:01:39.069153 extend-filesystems[1554]: Found sda1 Oct 9 01:01:39.069153 extend-filesystems[1554]: Found sda2 Oct 9 01:01:39.069153 extend-filesystems[1554]: Found sda3 Oct 9 01:01:39.069153 extend-filesystems[1554]: Found usr Oct 9 01:01:39.069153 extend-filesystems[1554]: Found sda4 Oct 9 01:01:39.069153 extend-filesystems[1554]: Found sda6 Oct 9 01:01:39.069153 extend-filesystems[1554]: Found sda7 Oct 9 01:01:39.069153 extend-filesystems[1554]: Found sda9 Oct 9 01:01:39.069153 extend-filesystems[1554]: Checking size of /dev/sda9 Oct 9 01:01:39.080524 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 9 01:01:39.080814 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 9 01:01:39.091201 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 9 01:01:39.091732 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 9 01:01:39.102418 jq[1573]: true Oct 9 01:01:39.106413 systemd[1]: motdgen.service: Deactivated successfully. Oct 9 01:01:39.106662 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 9 01:01:39.131560 (ntainerd)[1589]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 9 01:01:39.144449 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 9 01:01:39.144505 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 9 01:01:39.148438 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 9 01:01:39.148512 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 9 01:01:39.154943 extend-filesystems[1554]: Resized partition /dev/sda9 Oct 9 01:01:39.161695 jq[1587]: true Oct 9 01:01:39.176466 extend-filesystems[1604]: resize2fs 1.47.1 (20-May-2024) Oct 9 01:01:39.189211 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Oct 9 01:01:39.189271 update_engine[1569]: I20241009 01:01:39.165459 1569 main.cc:92] Flatcar Update Engine starting Oct 9 01:01:39.189271 update_engine[1569]: I20241009 01:01:39.185837 1569 update_check_scheduler.cc:74] Next update check in 2m24s Oct 9 01:01:39.164413 systemd-timesyncd[1522]: Contacted time server 213.209.109.44:123 (0.flatcar.pool.ntp.org). Oct 9 01:01:39.164468 systemd-timesyncd[1522]: Initial clock synchronization to Wed 2024-10-09 01:01:39.185944 UTC. Oct 9 01:01:39.192775 systemd[1]: Started update-engine.service - Update Engine. Oct 9 01:01:39.196755 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 9 01:01:39.204165 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Oct 9 01:01:39.206300 tar[1581]: linux-arm64/helm Oct 9 01:01:39.254533 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 9 01:01:39.254905 systemd-networkd[1255]: eth1: Gained IPv6LL Oct 9 01:01:39.255578 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 9 01:01:39.275224 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 9 01:01:39.278100 systemd[1]: Reached target network-online.target - Network is Online. Oct 9 01:01:39.285004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:01:39.287083 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 9 01:01:39.329712 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1247) Oct 9 01:01:39.362391 systemd-logind[1565]: New seat seat0. Oct 9 01:01:39.365062 systemd-logind[1565]: Watching system buttons on /dev/input/event0 (Power Button) Oct 9 01:01:39.365088 systemd-logind[1565]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Oct 9 01:01:39.365550 systemd[1]: Started systemd-logind.service - User Login Management. Oct 9 01:01:39.388342 bash[1638]: Updated "/home/core/.ssh/authorized_keys" Oct 9 01:01:39.393165 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 9 01:01:39.412949 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Oct 9 01:01:39.423654 systemd[1]: Starting sshkeys.service... Oct 9 01:01:39.443026 extend-filesystems[1604]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 9 01:01:39.443026 extend-filesystems[1604]: old_desc_blocks = 1, new_desc_blocks = 5 Oct 9 01:01:39.443026 extend-filesystems[1604]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Oct 9 01:01:39.436354 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 9 01:01:39.449269 extend-filesystems[1554]: Resized filesystem in /dev/sda9 Oct 9 01:01:39.449269 extend-filesystems[1554]: Found sr0 Oct 9 01:01:39.440318 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 9 01:01:39.440581 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 9 01:01:39.452827 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 9 01:01:39.459316 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 9 01:01:39.498530 coreos-metadata[1662]: Oct 09 01:01:39.497 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Oct 9 01:01:39.501059 coreos-metadata[1662]: Oct 09 01:01:39.500 INFO Fetch successful Oct 9 01:01:39.504164 unknown[1662]: wrote ssh authorized keys file for user: core Oct 9 01:01:39.511606 systemd-networkd[1255]: eth0: Gained IPv6LL Oct 9 01:01:39.526309 locksmithd[1613]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 9 01:01:39.537086 update-ssh-keys[1667]: Updated "/home/core/.ssh/authorized_keys" Oct 9 01:01:39.541569 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 9 01:01:39.552846 systemd[1]: Finished sshkeys.service. 
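The online grow of /dev/sda9 performed by extend-filesystems above (from 1617920 to 9393147 4k blocks) corresponds roughly to the following manual steps; a sketch only:

    lsblk /dev/sda        # confirm the kernel already sees the enlarged sda9 partition
    resize2fs /dev/sda9   # grow the mounted ext4 filesystem online to fill the partition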
Oct 9 01:01:39.756392 containerd[1589]: time="2024-10-09T01:01:39.755907160Z" level=info msg="starting containerd" revision=b2ce781edcbd6cb758f172ecab61c79d607cc41d version=v1.7.22 Oct 9 01:01:39.814915 containerd[1589]: time="2024-10-09T01:01:39.814855000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 9 01:01:39.817777 containerd[1589]: time="2024-10-09T01:01:39.817726760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 9 01:01:39.817777 containerd[1589]: time="2024-10-09T01:01:39.817772920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 9 01:01:39.817872 containerd[1589]: time="2024-10-09T01:01:39.817793240Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 9 01:01:39.818187 containerd[1589]: time="2024-10-09T01:01:39.817976600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 9 01:01:39.818187 containerd[1589]: time="2024-10-09T01:01:39.817997280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 9 01:01:39.818187 containerd[1589]: time="2024-10-09T01:01:39.818051920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 01:01:39.818187 containerd[1589]: time="2024-10-09T01:01:39.818065240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 9 01:01:39.818298 containerd[1589]: time="2024-10-09T01:01:39.818283240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 01:01:39.818320 containerd[1589]: time="2024-10-09T01:01:39.818298240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 9 01:01:39.818320 containerd[1589]: time="2024-10-09T01:01:39.818311200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 01:01:39.818358 containerd[1589]: time="2024-10-09T01:01:39.818320520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 9 01:01:39.818405 containerd[1589]: time="2024-10-09T01:01:39.818384760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 9 01:01:39.819115 containerd[1589]: time="2024-10-09T01:01:39.818569880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 9 01:01:39.819115 containerd[1589]: time="2024-10-09T01:01:39.818692240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 01:01:39.819115 containerd[1589]: time="2024-10-09T01:01:39.818705160Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 9 01:01:39.819115 containerd[1589]: time="2024-10-09T01:01:39.818771720Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 9 01:01:39.819115 containerd[1589]: time="2024-10-09T01:01:39.818808120Z" level=info msg="metadata content store policy set" policy=shared Oct 9 01:01:39.824498 containerd[1589]: time="2024-10-09T01:01:39.824453880Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 9 01:01:39.824577 containerd[1589]: time="2024-10-09T01:01:39.824520080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 9 01:01:39.824577 containerd[1589]: time="2024-10-09T01:01:39.824536120Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 9 01:01:39.824577 containerd[1589]: time="2024-10-09T01:01:39.824551080Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 9 01:01:39.824577 containerd[1589]: time="2024-10-09T01:01:39.824566640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 9 01:01:39.824744 containerd[1589]: time="2024-10-09T01:01:39.824722960Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 9 01:01:39.826665 containerd[1589]: time="2024-10-09T01:01:39.826626600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 9 01:01:39.826797 containerd[1589]: time="2024-10-09T01:01:39.826777920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 9 01:01:39.826840 containerd[1589]: time="2024-10-09T01:01:39.826798720Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 9 01:01:39.826840 containerd[1589]: time="2024-10-09T01:01:39.826819240Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 9 01:01:39.826840 containerd[1589]: time="2024-10-09T01:01:39.826833040Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 9 01:01:39.826907 containerd[1589]: time="2024-10-09T01:01:39.826845480Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 9 01:01:39.826907 containerd[1589]: time="2024-10-09T01:01:39.826858880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 9 01:01:39.826907 containerd[1589]: time="2024-10-09T01:01:39.826873160Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 9 01:01:39.826907 containerd[1589]: time="2024-10-09T01:01:39.826888080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Oct 9 01:01:39.826907 containerd[1589]: time="2024-10-09T01:01:39.826900360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 9 01:01:39.827009 containerd[1589]: time="2024-10-09T01:01:39.826913960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 9 01:01:39.827009 containerd[1589]: time="2024-10-09T01:01:39.826941120Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 9 01:01:39.827009 containerd[1589]: time="2024-10-09T01:01:39.826962240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 9 01:01:39.827009 containerd[1589]: time="2024-10-09T01:01:39.826975880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 9 01:01:39.827009 containerd[1589]: time="2024-10-09T01:01:39.826988480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 9 01:01:39.827009 containerd[1589]: time="2024-10-09T01:01:39.827001400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 9 01:01:39.827117 containerd[1589]: time="2024-10-09T01:01:39.827012880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 9 01:01:39.827117 containerd[1589]: time="2024-10-09T01:01:39.827026280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 9 01:01:39.827117 containerd[1589]: time="2024-10-09T01:01:39.827038040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 9 01:01:39.827117 containerd[1589]: time="2024-10-09T01:01:39.827050400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 9 01:01:39.827117 containerd[1589]: time="2024-10-09T01:01:39.827064400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 9 01:01:39.827117 containerd[1589]: time="2024-10-09T01:01:39.827079440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 9 01:01:39.827117 containerd[1589]: time="2024-10-09T01:01:39.827091400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 9 01:01:39.827117 containerd[1589]: time="2024-10-09T01:01:39.827104200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 9 01:01:39.827117 containerd[1589]: time="2024-10-09T01:01:39.827115520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 9 01:01:39.827313 containerd[1589]: time="2024-10-09T01:01:39.827130640Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 9 01:01:39.827313 containerd[1589]: time="2024-10-09T01:01:39.827165720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 9 01:01:39.827313 containerd[1589]: time="2024-10-09T01:01:39.827180680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Oct 9 01:01:39.827313 containerd[1589]: time="2024-10-09T01:01:39.827191160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 9 01:01:39.827313 containerd[1589]: time="2024-10-09T01:01:39.827302560Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 9 01:01:39.827396 containerd[1589]: time="2024-10-09T01:01:39.827320160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 9 01:01:39.827396 containerd[1589]: time="2024-10-09T01:01:39.827330640Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 9 01:01:39.827396 containerd[1589]: time="2024-10-09T01:01:39.827341960Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 9 01:01:39.827396 containerd[1589]: time="2024-10-09T01:01:39.827350880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 9 01:01:39.827396 containerd[1589]: time="2024-10-09T01:01:39.827364440Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 9 01:01:39.827396 containerd[1589]: time="2024-10-09T01:01:39.827375400Z" level=info msg="NRI interface is disabled by configuration." Oct 9 01:01:39.827396 containerd[1589]: time="2024-10-09T01:01:39.827388920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 9 01:01:39.830795 containerd[1589]: time="2024-10-09T01:01:39.827733360Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: 
TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 01:01:39.830795 containerd[1589]: time="2024-10-09T01:01:39.827786600Z" level=info msg="Connect containerd service" Oct 9 01:01:39.830795 containerd[1589]: time="2024-10-09T01:01:39.827818920Z" level=info msg="using legacy CRI server" Oct 9 01:01:39.830795 containerd[1589]: time="2024-10-09T01:01:39.827825600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 01:01:39.830795 containerd[1589]: time="2024-10-09T01:01:39.827906920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 01:01:39.832088 containerd[1589]: time="2024-10-09T01:01:39.831574080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 01:01:39.834047 containerd[1589]: time="2024-10-09T01:01:39.833587280Z" level=info msg="Start subscribing containerd event" Oct 9 01:01:39.834047 containerd[1589]: time="2024-10-09T01:01:39.833652520Z" level=info msg="Start recovering state" Oct 9 01:01:39.834047 containerd[1589]: time="2024-10-09T01:01:39.833731760Z" level=info msg="Start event monitor" Oct 9 01:01:39.834047 containerd[1589]: time="2024-10-09T01:01:39.833751600Z" level=info msg="Start snapshots syncer" Oct 9 01:01:39.834047 containerd[1589]: time="2024-10-09T01:01:39.833762840Z" level=info msg="Start cni network conf syncer for default" Oct 9 01:01:39.834047 containerd[1589]: time="2024-10-09T01:01:39.833770280Z" level=info msg="Start streaming server" Oct 9 01:01:39.834201 containerd[1589]: time="2024-10-09T01:01:39.834064520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 01:01:39.834201 containerd[1589]: time="2024-10-09T01:01:39.834121000Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 9 01:01:39.834915 containerd[1589]: time="2024-10-09T01:01:39.834888680Z" level=info msg="containerd successfully booted in 0.080757s" Oct 9 01:01:39.835029 systemd[1]: Started containerd.service - containerd container runtime. Oct 9 01:01:39.996209 tar[1581]: linux-arm64/LICENSE Oct 9 01:01:39.996353 tar[1581]: linux-arm64/README.md Oct 9 01:01:40.017281 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 9 01:01:40.216284 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
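The runc and cgroup settings in the CRI config dump above (SystemdCgroup:false, runtime io.containerd.runc.v2) correspond to a config.toml stanza along these lines; a sketch, not the file actually present on this host (typically /etc/containerd/config.toml):

    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false   # matches the dump above; true switches runc to the systemd cgroup driver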
Oct 9 01:01:40.221193 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:01:40.815545 kubelet[1691]: E1009 01:01:40.815448 1691 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:01:40.817720 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:01:40.817854 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:01:40.866079 sshd_keygen[1572]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 9 01:01:40.892257 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 9 01:01:40.901904 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 9 01:01:40.912186 systemd[1]: issuegen.service: Deactivated successfully. Oct 9 01:01:40.912722 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 9 01:01:40.918269 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 9 01:01:40.931856 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 9 01:01:40.940677 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 9 01:01:40.947315 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 9 01:01:40.949047 systemd[1]: Reached target getty.target - Login Prompts. Oct 9 01:01:40.950282 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 9 01:01:40.951351 systemd[1]: Startup finished in 5.966s (kernel) + 5.105s (userspace) = 11.072s. Oct 9 01:01:51.068376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 9 01:01:51.075306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:01:51.194224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:01:51.209414 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:01:51.261879 kubelet[1740]: E1009 01:01:51.261803 1740 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:01:51.266468 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:01:51.266684 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:02:01.300869 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 9 01:02:01.312564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:02:01.435171 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
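The boot-time split reported above (kernel plus userspace) can be broken down per unit with systemd-analyze; a sketch:

    systemd-analyze                  # total = kernel + userspace, as in the "Startup finished" line
    systemd-analyze blame            # per-unit initialization times, longest first
    systemd-analyze critical-chain   # the dependency chain that gated multi-user.target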
Oct 9 01:02:01.435474 (kubelet)[1761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:02:01.497072 kubelet[1761]: E1009 01:02:01.496997 1761 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:02:01.499978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:02:01.500146 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:02:01.740565 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 9 01:02:01.747319 systemd[1]: Started sshd@0-78.46.183.65:22-80.64.30.139:41250.service - OpenSSH per-connection server daemon (80.64.30.139:41250). Oct 9 01:02:02.389807 sshd[1770]: Invalid user admin from 80.64.30.139 port 41250 Oct 9 01:02:02.458061 sshd[1770]: Connection closed by invalid user admin 80.64.30.139 port 41250 [preauth] Oct 9 01:02:02.461492 systemd[1]: sshd@0-78.46.183.65:22-80.64.30.139:41250.service: Deactivated successfully. Oct 9 01:02:11.550671 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 9 01:02:11.563235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:02:11.700133 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:02:11.705624 (kubelet)[1788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:02:11.758791 kubelet[1788]: E1009 01:02:11.758720 1788 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:02:11.761512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:02:11.761669 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:02:21.801061 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 9 01:02:21.817327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:02:21.933127 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:02:21.937508 (kubelet)[1809]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:02:21.986467 kubelet[1809]: E1009 01:02:21.986378 1809 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:02:21.991633 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:02:21.992800 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:02:24.926043 update_engine[1569]: I20241009 01:02:24.925199 1569 update_attempter.cc:509] Updating boot flags... 
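The kubelet restart loop above is the unit exiting because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is written during kubeadm init or kubeadm join rather than by hand. For reference, a minimal sketch of such a file, with illustrative values that are not taken from this host:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    cgroupDriver: systemd            # illustrative; must match the container runtime's cgroup driver
    authentication:
      anonymous:
        enabled: false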
Oct 9 01:02:24.981957 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1828) Oct 9 01:02:25.035461 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1827) Oct 9 01:02:32.050844 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Oct 9 01:02:32.058113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:02:32.178120 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:02:32.183250 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:02:32.233947 kubelet[1849]: E1009 01:02:32.233851 1849 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:02:32.238201 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:02:32.238440 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:02:42.300600 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Oct 9 01:02:42.308193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:02:42.451198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:02:42.462333 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:02:42.513711 kubelet[1871]: E1009 01:02:42.513592 1871 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:02:42.517531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:02:42.518413 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:02:52.550717 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Oct 9 01:02:52.564715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:02:52.687206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:02:52.693236 (kubelet)[1892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:02:52.743039 kubelet[1892]: E1009 01:02:52.742991 1892 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:02:52.746148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:02:52.746642 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:03:02.800806 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Oct 9 01:03:02.818293 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 9 01:03:02.935170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:03:02.940486 (kubelet)[1913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:03:02.999460 kubelet[1913]: E1009 01:03:02.999372 1913 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:03:03.002369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:03:03.002530 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:03:13.050459 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Oct 9 01:03:13.057137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:03:13.194109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:03:13.204338 (kubelet)[1934]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:03:13.260724 kubelet[1934]: E1009 01:03:13.260658 1934 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:03:13.264172 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:03:13.264600 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:03:23.300646 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Oct 9 01:03:23.316196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:03:23.443394 (kubelet)[1955]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:03:23.443592 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:03:23.492180 kubelet[1955]: E1009 01:03:23.492115 1955 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:03:23.494864 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:03:23.495052 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:03:30.861631 systemd[1]: Started sshd@1-78.46.183.65:22-147.75.109.163:52506.service - OpenSSH per-connection server daemon (147.75.109.163:52506). Oct 9 01:03:31.863625 sshd[1965]: Accepted publickey for core from 147.75.109.163 port 52506 ssh2: RSA SHA256:sKto1mMUpX/NfXJQLv5H1pSd9gRoKrp8Hbo6SFyKe0U Oct 9 01:03:31.866397 sshd[1965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:03:31.887704 systemd-logind[1565]: New session 1 of user core. Oct 9 01:03:31.889246 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Oct 9 01:03:31.897189 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 01:03:31.917494 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 01:03:31.928351 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 9 01:03:31.933162 (systemd)[1971]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 01:03:32.030799 systemd[1971]: Queued start job for default target default.target. Oct 9 01:03:32.031203 systemd[1971]: Created slice app.slice - User Application Slice. Oct 9 01:03:32.031220 systemd[1971]: Reached target paths.target - Paths. Oct 9 01:03:32.031231 systemd[1971]: Reached target timers.target - Timers. Oct 9 01:03:32.041088 systemd[1971]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 01:03:32.051676 systemd[1971]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 01:03:32.051810 systemd[1971]: Reached target sockets.target - Sockets. Oct 9 01:03:32.051838 systemd[1971]: Reached target basic.target - Basic System. Oct 9 01:03:32.051919 systemd[1971]: Reached target default.target - Main User Target. Oct 9 01:03:32.051996 systemd[1971]: Startup finished in 112ms. Oct 9 01:03:32.052039 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 9 01:03:32.056319 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 9 01:03:32.752333 systemd[1]: Started sshd@2-78.46.183.65:22-147.75.109.163:52514.service - OpenSSH per-connection server daemon (147.75.109.163:52514). Oct 9 01:03:33.550779 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Oct 9 01:03:33.561153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:03:33.685270 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:03:33.689536 (kubelet)[1997]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:03:33.739425 kubelet[1997]: E1009 01:03:33.739360 1997 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:03:33.744150 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:03:33.744322 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:03:33.762154 sshd[1983]: Accepted publickey for core from 147.75.109.163 port 52514 ssh2: RSA SHA256:sKto1mMUpX/NfXJQLv5H1pSd9gRoKrp8Hbo6SFyKe0U Oct 9 01:03:33.764177 sshd[1983]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:03:33.771132 systemd-logind[1565]: New session 2 of user core. Oct 9 01:03:33.778391 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 9 01:03:34.452448 sshd[1983]: pam_unix(sshd:session): session closed for user core Oct 9 01:03:34.458103 systemd-logind[1565]: Session 2 logged out. Waiting for processes to exit. Oct 9 01:03:34.459433 systemd[1]: sshd@2-78.46.183.65:22-147.75.109.163:52514.service: Deactivated successfully. Oct 9 01:03:34.464332 systemd[1]: session-2.scope: Deactivated successfully. Oct 9 01:03:34.465508 systemd-logind[1565]: Removed session 2. 
Oct 9 01:03:34.620294 systemd[1]: Started sshd@3-78.46.183.65:22-147.75.109.163:52522.service - OpenSSH per-connection server daemon (147.75.109.163:52522). Oct 9 01:03:35.618122 sshd[2012]: Accepted publickey for core from 147.75.109.163 port 52522 ssh2: RSA SHA256:sKto1mMUpX/NfXJQLv5H1pSd9gRoKrp8Hbo6SFyKe0U Oct 9 01:03:35.620090 sshd[2012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:03:35.626862 systemd-logind[1565]: New session 3 of user core. Oct 9 01:03:35.633279 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 9 01:03:36.307210 sshd[2012]: pam_unix(sshd:session): session closed for user core Oct 9 01:03:36.312631 systemd[1]: sshd@3-78.46.183.65:22-147.75.109.163:52522.service: Deactivated successfully. Oct 9 01:03:36.318196 systemd[1]: session-3.scope: Deactivated successfully. Oct 9 01:03:36.319527 systemd-logind[1565]: Session 3 logged out. Waiting for processes to exit. Oct 9 01:03:36.320817 systemd-logind[1565]: Removed session 3. Oct 9 01:03:36.488370 systemd[1]: Started sshd@4-78.46.183.65:22-147.75.109.163:52524.service - OpenSSH per-connection server daemon (147.75.109.163:52524). Oct 9 01:03:37.482596 sshd[2020]: Accepted publickey for core from 147.75.109.163 port 52524 ssh2: RSA SHA256:sKto1mMUpX/NfXJQLv5H1pSd9gRoKrp8Hbo6SFyKe0U Oct 9 01:03:37.485114 sshd[2020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:03:37.490963 systemd-logind[1565]: New session 4 of user core. Oct 9 01:03:37.500266 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 9 01:03:38.174345 sshd[2020]: pam_unix(sshd:session): session closed for user core Oct 9 01:03:38.180257 systemd-logind[1565]: Session 4 logged out. Waiting for processes to exit. Oct 9 01:03:38.180495 systemd[1]: sshd@4-78.46.183.65:22-147.75.109.163:52524.service: Deactivated successfully. Oct 9 01:03:38.185654 systemd[1]: session-4.scope: Deactivated successfully. Oct 9 01:03:38.187058 systemd-logind[1565]: Removed session 4. Oct 9 01:03:38.343432 systemd[1]: Started sshd@5-78.46.183.65:22-147.75.109.163:50784.service - OpenSSH per-connection server daemon (147.75.109.163:50784). Oct 9 01:03:39.348977 sshd[2028]: Accepted publickey for core from 147.75.109.163 port 50784 ssh2: RSA SHA256:sKto1mMUpX/NfXJQLv5H1pSd9gRoKrp8Hbo6SFyKe0U Oct 9 01:03:39.351143 sshd[2028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:03:39.359606 systemd-logind[1565]: New session 5 of user core. Oct 9 01:03:39.367365 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 01:03:39.900827 sudo[2032]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 01:03:39.901137 sudo[2032]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:03:39.918263 sudo[2032]: pam_unix(sudo:session): session closed for user root Oct 9 01:03:40.083348 sshd[2028]: pam_unix(sshd:session): session closed for user core Oct 9 01:03:40.089184 systemd[1]: sshd@5-78.46.183.65:22-147.75.109.163:50784.service: Deactivated successfully. Oct 9 01:03:40.093214 systemd-logind[1565]: Session 5 logged out. Waiting for processes to exit. Oct 9 01:03:40.093862 systemd[1]: session-5.scope: Deactivated successfully. Oct 9 01:03:40.095549 systemd-logind[1565]: Removed session 5. Oct 9 01:03:40.252279 systemd[1]: Started sshd@6-78.46.183.65:22-147.75.109.163:50800.service - OpenSSH per-connection server daemon (147.75.109.163:50800). 
Oct 9 01:03:41.250073 sshd[2037]: Accepted publickey for core from 147.75.109.163 port 50800 ssh2: RSA SHA256:sKto1mMUpX/NfXJQLv5H1pSd9gRoKrp8Hbo6SFyKe0U Oct 9 01:03:41.251858 sshd[2037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:03:41.257555 systemd-logind[1565]: New session 6 of user core. Oct 9 01:03:41.266215 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 9 01:03:41.779602 sudo[2042]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 01:03:41.780334 sudo[2042]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:03:41.784672 sudo[2042]: pam_unix(sudo:session): session closed for user root Oct 9 01:03:41.790812 sudo[2041]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 9 01:03:41.791327 sudo[2041]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:03:41.808533 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 9 01:03:41.846399 augenrules[2064]: No rules Oct 9 01:03:41.848167 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 01:03:41.848425 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 01:03:41.851212 sudo[2041]: pam_unix(sudo:session): session closed for user root Oct 9 01:03:42.014329 sshd[2037]: pam_unix(sshd:session): session closed for user core Oct 9 01:03:42.019524 systemd-logind[1565]: Session 6 logged out. Waiting for processes to exit. Oct 9 01:03:42.021292 systemd[1]: sshd@6-78.46.183.65:22-147.75.109.163:50800.service: Deactivated successfully. Oct 9 01:03:42.025008 systemd[1]: session-6.scope: Deactivated successfully. Oct 9 01:03:42.027810 systemd-logind[1565]: Removed session 6. Oct 9 01:03:42.192284 systemd[1]: Started sshd@7-78.46.183.65:22-147.75.109.163:50802.service - OpenSSH per-connection server daemon (147.75.109.163:50802). Oct 9 01:03:43.187394 sshd[2073]: Accepted publickey for core from 147.75.109.163 port 50802 ssh2: RSA SHA256:sKto1mMUpX/NfXJQLv5H1pSd9gRoKrp8Hbo6SFyKe0U Oct 9 01:03:43.190225 sshd[2073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:03:43.197732 systemd-logind[1565]: New session 7 of user core. Oct 9 01:03:43.210464 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 9 01:03:43.720876 sudo[2077]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 01:03:43.721200 sudo[2077]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:03:43.800532 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Oct 9 01:03:43.809281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:03:43.936410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 9 01:03:43.949341 (kubelet)[2105]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:03:44.011138 kubelet[2105]: E1009 01:03:44.011011 2105 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:03:44.020049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:03:44.020162 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:03:44.102250 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 9 01:03:44.104666 (dockerd)[2116]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 01:03:44.372003 dockerd[2116]: time="2024-10-09T01:03:44.371914028Z" level=info msg="Starting up" Oct 9 01:03:44.479639 dockerd[2116]: time="2024-10-09T01:03:44.479586578Z" level=info msg="Loading containers: start." Oct 9 01:03:44.642955 kernel: Initializing XFRM netlink socket Oct 9 01:03:44.743117 systemd-networkd[1255]: docker0: Link UP Oct 9 01:03:44.771935 dockerd[2116]: time="2024-10-09T01:03:44.771852615Z" level=info msg="Loading containers: done." Oct 9 01:03:44.791994 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1065440787-merged.mount: Deactivated successfully. Oct 9 01:03:44.795705 dockerd[2116]: time="2024-10-09T01:03:44.795640759Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 01:03:44.795856 dockerd[2116]: time="2024-10-09T01:03:44.795746359Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Oct 9 01:03:44.795856 dockerd[2116]: time="2024-10-09T01:03:44.795849040Z" level=info msg="Daemon has completed initialization" Oct 9 01:03:44.838146 dockerd[2116]: time="2024-10-09T01:03:44.837990944Z" level=info msg="API listen on /run/docker.sock" Oct 9 01:03:44.838211 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 9 01:03:45.901662 containerd[1589]: time="2024-10-09T01:03:45.901346002Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\"" Oct 9 01:03:46.545382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2425761474.mount: Deactivated successfully. 
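
At this point kubelet.service is crash-looping: systemd has already scheduled restart number 12, and each attempt exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet (on a node like this it is normally written later by kubeadm or similar bootstrap tooling, so the loop is expected to resolve on its own). A quick way to quantify the loop from a saved journal is to count the two messages that appear verbatim above; a minimal sketch, with journal.txt as an assumed input file:

    import re

    # Both patterns are copied from the kubelet/systemd messages in this excerpt.
    RESTART_RE = re.compile(
        r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)")
    CONFIG_ERR = "failed to load Kubelet config file /var/lib/kubelet/config.yaml"

    def summarize_kubelet_crashloop(path="journal.txt"):
        counters, config_errors = [], 0
        with open(path, encoding="utf-8", errors="replace") as f:
            for line in f:
                m = RESTART_RE.search(line)
                if m:
                    counters.append(int(m.group(1)))
                if CONFIG_ERR in line:
                    config_errors += 1
        if counters:
            print(f"{len(counters)} scheduled restarts "
                  f"(counter {counters[0]} -> {counters[-1]}), "
                  f"{config_errors} of the failures were the missing config.yaml")

    if __name__ == "__main__":
        summarize_kubelet_crashloop()
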
Oct 9 01:03:47.738996 containerd[1589]: time="2024-10-09T01:03:47.738944957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:03:47.740953 containerd[1589]: time="2024-10-09T01:03:47.740801124Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=32286150" Oct 9 01:03:47.742283 containerd[1589]: time="2024-10-09T01:03:47.742185690Z" level=info msg="ImageCreate event name:\"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:03:47.744734 containerd[1589]: time="2024-10-09T01:03:47.744680460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:03:47.746355 containerd[1589]: time="2024-10-09T01:03:47.746171267Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"32282858\" in 1.844771825s" Oct 9 01:03:47.746355 containerd[1589]: time="2024-10-09T01:03:47.746212627Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\"" Oct 9 01:03:47.768506 containerd[1589]: time="2024-10-09T01:03:47.768462719Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\"" Oct 9 01:03:49.480992 containerd[1589]: time="2024-10-09T01:03:49.480949060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:03:49.483317 containerd[1589]: time="2024-10-09T01:03:49.483274629Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=29374224" Oct 9 01:03:49.484165 containerd[1589]: time="2024-10-09T01:03:49.484111992Z" level=info msg="ImageCreate event name:\"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:03:49.487050 containerd[1589]: time="2024-10-09T01:03:49.487012164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:03:49.488256 containerd[1589]: time="2024-10-09T01:03:49.488225449Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"30862018\" in 1.71971989s" Oct 9 01:03:49.488362 containerd[1589]: time="2024-10-09T01:03:49.488345729Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\"" Oct 9 01:03:49.517286 containerd[1589]: 
time="2024-10-09T01:03:49.517232365Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\"" Oct 9 01:03:50.846513 containerd[1589]: time="2024-10-09T01:03:50.845162450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:03:50.848178 containerd[1589]: time="2024-10-09T01:03:50.848125941Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=15751237" Oct 9 01:03:50.849269 containerd[1589]: time="2024-10-09T01:03:50.849209626Z" level=info msg="ImageCreate event name:\"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:03:50.853265 containerd[1589]: time="2024-10-09T01:03:50.853225802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:03:50.855641 containerd[1589]: time="2024-10-09T01:03:50.855579651Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"17239049\" in 1.338284845s" Oct 9 01:03:50.855641 containerd[1589]: time="2024-10-09T01:03:50.855637851Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\"" Oct 9 01:03:50.886081 containerd[1589]: time="2024-10-09T01:03:50.886028771Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\"" Oct 9 01:03:52.178377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount183939220.mount: Deactivated successfully. 
Oct 9 01:03:52.463341 containerd[1589]: time="2024-10-09T01:03:52.463181214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:03:52.464856 containerd[1589]: time="2024-10-09T01:03:52.464768300Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=25254064" Oct 9 01:03:52.465884 containerd[1589]: time="2024-10-09T01:03:52.465832024Z" level=info msg="ImageCreate event name:\"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:03:52.469220 containerd[1589]: time="2024-10-09T01:03:52.469170117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:03:52.469872 containerd[1589]: time="2024-10-09T01:03:52.469731079Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"25253057\" in 1.583663668s" Oct 9 01:03:52.469872 containerd[1589]: time="2024-10-09T01:03:52.469763799Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\"" Oct 9 01:03:52.493560 containerd[1589]: time="2024-10-09T01:03:52.493519890Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 9 01:03:53.061943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount875024312.mount: Deactivated successfully. 
Oct 9 01:03:53.924024 containerd[1589]: time="2024-10-09T01:03:53.923953844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:03:53.925097 containerd[1589]: time="2024-10-09T01:03:53.925052568Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Oct 9 01:03:53.926131 containerd[1589]: time="2024-10-09T01:03:53.926083012Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:03:53.929286 containerd[1589]: time="2024-10-09T01:03:53.929240784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:03:53.931112 containerd[1589]: time="2024-10-09T01:03:53.930968350Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.437194379s" Oct 9 01:03:53.931112 containerd[1589]: time="2024-10-09T01:03:53.931006991Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Oct 9 01:03:53.950995 containerd[1589]: time="2024-10-09T01:03:53.950955826Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 9 01:03:54.050673 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Oct 9 01:03:54.057155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:03:54.174184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:03:54.178753 (kubelet)[2459]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:03:54.233060 kubelet[2459]: E1009 01:03:54.233006 2459 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:03:54.236849 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:03:54.237046 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:03:54.513497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount507267928.mount: Deactivated successfully. 
Oct 9 01:03:54.520532 containerd[1589]: time="2024-10-09T01:03:54.520485349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:03:54.521508 containerd[1589]: time="2024-10-09T01:03:54.521462592Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Oct 9 01:03:54.522479 containerd[1589]: time="2024-10-09T01:03:54.522376516Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:03:54.524983 containerd[1589]: time="2024-10-09T01:03:54.524909845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:03:54.525900 containerd[1589]: time="2024-10-09T01:03:54.525770488Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 574.773462ms" Oct 9 01:03:54.525900 containerd[1589]: time="2024-10-09T01:03:54.525806809Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Oct 9 01:03:54.554045 containerd[1589]: time="2024-10-09T01:03:54.553983473Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Oct 9 01:03:55.124341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1779223818.mount: Deactivated successfully. Oct 9 01:03:56.748892 containerd[1589]: time="2024-10-09T01:03:56.748832470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:03:56.751459 containerd[1589]: time="2024-10-09T01:03:56.751405879Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200866" Oct 9 01:03:56.752819 containerd[1589]: time="2024-10-09T01:03:56.752780164Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:03:56.756206 containerd[1589]: time="2024-10-09T01:03:56.756150737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:03:56.759812 containerd[1589]: time="2024-10-09T01:03:56.758460545Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.204419311s" Oct 9 01:03:56.759812 containerd[1589]: time="2024-10-09T01:03:56.758512705Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Oct 9 01:04:01.980791 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
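
The stretch from 01:03:45 to 01:03:56 is containerd pre-pulling the control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause and etcd); every pull ends with a "Pulled image ... in <duration>" message in exactly the format shown above, and etcd is by far the largest and slowest at roughly 65 MB in about 2.2 s. Those durations can be tabulated straight from the text. A small sketch, again assuming the journal has been dumped to journal.txt; the regex relies only on the message format visible in this excerpt:

    import re

    # Matches e.g.: Pulled image \"registry.k8s.io/etcd:3.5.10-0\" ... in 2.204419311s
    # Durations in this log appear either in seconds ("2.204419311s") or
    # milliseconds ("574.773462ms").
    PULL_RE = re.compile(r'Pulled image \\?"([^"\\]+)\\?".*? in ([0-9.]+)(ms|s)')

    def pull_times(path="journal.txt"):
        rows = []
        with open(path, encoding="utf-8", errors="replace") as f:
            for line in f:
                for image, value, unit in PULL_RE.findall(line):
                    seconds = float(value) / 1000.0 if unit == "ms" else float(value)
                    rows.append((image, seconds))
        # Slowest pulls first.
        for image, seconds in sorted(rows, key=lambda r: -r[1]):
            print(f"{seconds:8.3f}s  {image}")

    if __name__ == "__main__":
        pull_times()
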
Oct 9 01:04:01.993194 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:04:02.016799 systemd[1]: Reloading requested from client PID 2587 ('systemctl') (unit session-7.scope)... Oct 9 01:04:02.017135 systemd[1]: Reloading... Oct 9 01:04:02.131766 zram_generator::config[2630]: No configuration found. Oct 9 01:04:02.229793 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:04:02.291843 systemd[1]: Reloading finished in 274 ms. Oct 9 01:04:02.344094 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:04:02.352232 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:04:02.354095 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 01:04:02.354457 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:04:02.359481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:04:02.467135 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:04:02.476644 (kubelet)[2689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 01:04:02.532681 kubelet[2689]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:04:02.533056 kubelet[2689]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 01:04:02.533099 kubelet[2689]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:04:02.534396 kubelet[2689]: I1009 01:04:02.534336 2689 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 01:04:03.273361 kubelet[2689]: I1009 01:04:03.273328 2689 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 01:04:03.273545 kubelet[2689]: I1009 01:04:03.273533 2689 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 01:04:03.273824 kubelet[2689]: I1009 01:04:03.273809 2689 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 01:04:03.343183 kubelet[2689]: I1009 01:04:03.343135 2689 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:04:03.343472 kubelet[2689]: E1009 01:04:03.343458 2689 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://78.46.183.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 78.46.183.65:6443: connect: connection refused Oct 9 01:04:03.350845 kubelet[2689]: I1009 01:04:03.350798 2689 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 01:04:03.352461 kubelet[2689]: I1009 01:04:03.352436 2689 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 01:04:03.352816 kubelet[2689]: I1009 01:04:03.352796 2689 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 01:04:03.352986 kubelet[2689]: I1009 01:04:03.352970 2689 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 01:04:03.353038 kubelet[2689]: I1009 01:04:03.353030 2689 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 01:04:03.353203 kubelet[2689]: I1009 01:04:03.353189 2689 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:04:03.355811 kubelet[2689]: I1009 01:04:03.355790 2689 kubelet.go:396] "Attempting to sync node with API server" Oct 9 01:04:03.355903 kubelet[2689]: I1009 01:04:03.355893 2689 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 01:04:03.355983 kubelet[2689]: I1009 01:04:03.355973 2689 kubelet.go:312] "Adding apiserver pod source" Oct 9 01:04:03.356039 kubelet[2689]: I1009 01:04:03.356031 2689 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 01:04:03.356384 kubelet[2689]: W1009 01:04:03.356339 2689 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://78.46.183.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116-0-0-5-47b5cb1617&limit=500&resourceVersion=0": dial tcp 78.46.183.65:6443: connect: connection refused Oct 9 01:04:03.356429 kubelet[2689]: E1009 01:04:03.356395 2689 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://78.46.183.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116-0-0-5-47b5cb1617&limit=500&resourceVersion=0": dial tcp 78.46.183.65:6443: connect: connection refused Oct 9 01:04:03.358253 kubelet[2689]: I1009 01:04:03.358211 2689 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 01:04:03.358900 kubelet[2689]: I1009 01:04:03.358805 2689 
kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 01:04:03.359559 kubelet[2689]: W1009 01:04:03.359541 2689 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 9 01:04:03.360989 kubelet[2689]: I1009 01:04:03.360391 2689 server.go:1256] "Started kubelet" Oct 9 01:04:03.360989 kubelet[2689]: W1009 01:04:03.360492 2689 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://78.46.183.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.46.183.65:6443: connect: connection refused Oct 9 01:04:03.360989 kubelet[2689]: E1009 01:04:03.360530 2689 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://78.46.183.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.46.183.65:6443: connect: connection refused Oct 9 01:04:03.371564 kubelet[2689]: I1009 01:04:03.371536 2689 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 01:04:03.373882 kubelet[2689]: I1009 01:04:03.373852 2689 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 01:04:03.374683 kubelet[2689]: I1009 01:04:03.374633 2689 server.go:461] "Adding debug handlers to kubelet server" Oct 9 01:04:03.375602 kubelet[2689]: I1009 01:04:03.375568 2689 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 01:04:03.375781 kubelet[2689]: I1009 01:04:03.375761 2689 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 01:04:03.377813 kubelet[2689]: E1009 01:04:03.371547 2689 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://78.46.183.65:6443/api/v1/namespaces/default/events\": dial tcp 78.46.183.65:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4116-0-0-5-47b5cb1617.17fca33dec8c2e4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4116-0-0-5-47b5cb1617,UID:ci-4116-0-0-5-47b5cb1617,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4116-0-0-5-47b5cb1617,},FirstTimestamp:2024-10-09 01:04:03.360370251 +0000 UTC m=+0.876791550,LastTimestamp:2024-10-09 01:04:03.360370251 +0000 UTC m=+0.876791550,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4116-0-0-5-47b5cb1617,}" Oct 9 01:04:03.377813 kubelet[2689]: I1009 01:04:03.377090 2689 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 01:04:03.378765 kubelet[2689]: I1009 01:04:03.378743 2689 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 01:04:03.379037 kubelet[2689]: I1009 01:04:03.379022 2689 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 01:04:03.381288 kubelet[2689]: W1009 01:04:03.381241 2689 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://78.46.183.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.46.183.65:6443: connect: connection refused Oct 9 01:04:03.381356 kubelet[2689]: E1009 01:04:03.381298 2689 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get "https://78.46.183.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.46.183.65:6443: connect: connection refused Oct 9 01:04:03.382218 kubelet[2689]: I1009 01:04:03.382199 2689 factory.go:221] Registration of the systemd container factory successfully Oct 9 01:04:03.382969 kubelet[2689]: I1009 01:04:03.382914 2689 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 01:04:03.384763 kubelet[2689]: E1009 01:04:03.384737 2689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.46.183.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116-0-0-5-47b5cb1617?timeout=10s\": dial tcp 78.46.183.65:6443: connect: connection refused" interval="200ms" Oct 9 01:04:03.385664 kubelet[2689]: I1009 01:04:03.385649 2689 factory.go:221] Registration of the containerd container factory successfully Oct 9 01:04:03.410505 kubelet[2689]: I1009 01:04:03.410473 2689 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 01:04:03.413519 kubelet[2689]: I1009 01:04:03.412523 2689 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 01:04:03.413519 kubelet[2689]: I1009 01:04:03.412636 2689 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 01:04:03.413519 kubelet[2689]: I1009 01:04:03.412672 2689 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 01:04:03.413519 kubelet[2689]: E1009 01:04:03.412769 2689 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 01:04:03.413519 kubelet[2689]: W1009 01:04:03.413386 2689 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://78.46.183.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.46.183.65:6443: connect: connection refused Oct 9 01:04:03.413519 kubelet[2689]: E1009 01:04:03.413441 2689 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://78.46.183.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.46.183.65:6443: connect: connection refused Oct 9 01:04:03.416741 kubelet[2689]: I1009 01:04:03.416709 2689 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 01:04:03.416741 kubelet[2689]: I1009 01:04:03.416729 2689 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 01:04:03.416861 kubelet[2689]: I1009 01:04:03.416791 2689 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:04:03.419172 kubelet[2689]: I1009 01:04:03.419145 2689 policy_none.go:49] "None policy: Start" Oct 9 01:04:03.420254 kubelet[2689]: I1009 01:04:03.419882 2689 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 01:04:03.420254 kubelet[2689]: I1009 01:04:03.419938 2689 state_mem.go:35] "Initializing new in-memory state store" Oct 9 01:04:03.429949 kubelet[2689]: I1009 01:04:03.429887 2689 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 01:04:03.431236 kubelet[2689]: I1009 01:04:03.430813 2689 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 01:04:03.433843 kubelet[2689]: E1009 01:04:03.433814 2689 
eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4116-0-0-5-47b5cb1617\" not found" Oct 9 01:04:03.481465 kubelet[2689]: I1009 01:04:03.481431 2689 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:03.482303 kubelet[2689]: E1009 01:04:03.482260 2689 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.46.183.65:6443/api/v1/nodes\": dial tcp 78.46.183.65:6443: connect: connection refused" node="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:03.513624 kubelet[2689]: I1009 01:04:03.513555 2689 topology_manager.go:215] "Topology Admit Handler" podUID="01d5afc4a5082c9a734ba87c39d8fd43" podNamespace="kube-system" podName="kube-apiserver-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:03.517165 kubelet[2689]: I1009 01:04:03.516798 2689 topology_manager.go:215] "Topology Admit Handler" podUID="289b9ff808f926c57430b52e98893a62" podNamespace="kube-system" podName="kube-controller-manager-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:03.520975 kubelet[2689]: I1009 01:04:03.520094 2689 topology_manager.go:215] "Topology Admit Handler" podUID="45a58fe30cafbc86f088d5b5a82b51f9" podNamespace="kube-system" podName="kube-scheduler-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:03.580384 kubelet[2689]: I1009 01:04:03.580279 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/289b9ff808f926c57430b52e98893a62-kubeconfig\") pod \"kube-controller-manager-ci-4116-0-0-5-47b5cb1617\" (UID: \"289b9ff808f926c57430b52e98893a62\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:03.581005 kubelet[2689]: I1009 01:04:03.580463 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/289b9ff808f926c57430b52e98893a62-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4116-0-0-5-47b5cb1617\" (UID: \"289b9ff808f926c57430b52e98893a62\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:03.581529 kubelet[2689]: I1009 01:04:03.581122 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/289b9ff808f926c57430b52e98893a62-k8s-certs\") pod \"kube-controller-manager-ci-4116-0-0-5-47b5cb1617\" (UID: \"289b9ff808f926c57430b52e98893a62\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:03.581529 kubelet[2689]: I1009 01:04:03.581218 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01d5afc4a5082c9a734ba87c39d8fd43-k8s-certs\") pod \"kube-apiserver-ci-4116-0-0-5-47b5cb1617\" (UID: \"01d5afc4a5082c9a734ba87c39d8fd43\") " pod="kube-system/kube-apiserver-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:03.581529 kubelet[2689]: I1009 01:04:03.581281 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01d5afc4a5082c9a734ba87c39d8fd43-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4116-0-0-5-47b5cb1617\" (UID: \"01d5afc4a5082c9a734ba87c39d8fd43\") " pod="kube-system/kube-apiserver-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:03.581529 kubelet[2689]: I1009 01:04:03.581313 2689 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/289b9ff808f926c57430b52e98893a62-ca-certs\") pod \"kube-controller-manager-ci-4116-0-0-5-47b5cb1617\" (UID: \"289b9ff808f926c57430b52e98893a62\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:03.581529 kubelet[2689]: I1009 01:04:03.581365 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/289b9ff808f926c57430b52e98893a62-flexvolume-dir\") pod \"kube-controller-manager-ci-4116-0-0-5-47b5cb1617\" (UID: \"289b9ff808f926c57430b52e98893a62\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:03.581760 kubelet[2689]: I1009 01:04:03.581434 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45a58fe30cafbc86f088d5b5a82b51f9-kubeconfig\") pod \"kube-scheduler-ci-4116-0-0-5-47b5cb1617\" (UID: \"45a58fe30cafbc86f088d5b5a82b51f9\") " pod="kube-system/kube-scheduler-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:03.581760 kubelet[2689]: I1009 01:04:03.581497 2689 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01d5afc4a5082c9a734ba87c39d8fd43-ca-certs\") pod \"kube-apiserver-ci-4116-0-0-5-47b5cb1617\" (UID: \"01d5afc4a5082c9a734ba87c39d8fd43\") " pod="kube-system/kube-apiserver-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:03.585858 kubelet[2689]: E1009 01:04:03.585815 2689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.46.183.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116-0-0-5-47b5cb1617?timeout=10s\": dial tcp 78.46.183.65:6443: connect: connection refused" interval="400ms" Oct 9 01:04:03.685007 kubelet[2689]: I1009 01:04:03.684308 2689 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:03.685007 kubelet[2689]: E1009 01:04:03.684751 2689 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.46.183.65:6443/api/v1/nodes\": dial tcp 78.46.183.65:6443: connect: connection refused" node="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:03.828332 containerd[1589]: time="2024-10-09T01:04:03.828219871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4116-0-0-5-47b5cb1617,Uid:289b9ff808f926c57430b52e98893a62,Namespace:kube-system,Attempt:0,}" Oct 9 01:04:03.828997 containerd[1589]: time="2024-10-09T01:04:03.828239111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4116-0-0-5-47b5cb1617,Uid:01d5afc4a5082c9a734ba87c39d8fd43,Namespace:kube-system,Attempt:0,}" Oct 9 01:04:03.831636 containerd[1589]: time="2024-10-09T01:04:03.831517842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4116-0-0-5-47b5cb1617,Uid:45a58fe30cafbc86f088d5b5a82b51f9,Namespace:kube-system,Attempt:0,}" Oct 9 01:04:03.890135 update_engine[1569]: I20241009 01:04:03.890058 1569 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 9 01:04:03.890135 update_engine[1569]: I20241009 01:04:03.890120 1569 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 9 01:04:03.890602 update_engine[1569]: I20241009 01:04:03.890368 1569 prefs.cc:52] aleph-version 
not present in /var/lib/update_engine/prefs Oct 9 01:04:03.890996 update_engine[1569]: I20241009 01:04:03.890820 1569 omaha_request_params.cc:62] Current group set to alpha Oct 9 01:04:03.890996 update_engine[1569]: I20241009 01:04:03.890954 1569 update_attempter.cc:499] Already updated boot flags. Skipping. Oct 9 01:04:03.890996 update_engine[1569]: I20241009 01:04:03.890966 1569 update_attempter.cc:643] Scheduling an action processor start. Oct 9 01:04:03.890996 update_engine[1569]: I20241009 01:04:03.890985 1569 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 9 01:04:03.891149 update_engine[1569]: I20241009 01:04:03.891019 1569 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Oct 9 01:04:03.891149 update_engine[1569]: I20241009 01:04:03.891083 1569 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 9 01:04:03.891149 update_engine[1569]: I20241009 01:04:03.891093 1569 omaha_request_action.cc:272] Request: Oct 9 01:04:03.891149 update_engine[1569]: Oct 9 01:04:03.891149 update_engine[1569]: Oct 9 01:04:03.891149 update_engine[1569]: Oct 9 01:04:03.891149 update_engine[1569]: Oct 9 01:04:03.891149 update_engine[1569]: Oct 9 01:04:03.891149 update_engine[1569]: Oct 9 01:04:03.891149 update_engine[1569]: Oct 9 01:04:03.891149 update_engine[1569]: Oct 9 01:04:03.891149 update_engine[1569]: I20241009 01:04:03.891100 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 9 01:04:03.893449 update_engine[1569]: I20241009 01:04:03.892985 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 9 01:04:03.893449 update_engine[1569]: I20241009 01:04:03.893375 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 9 01:04:03.893563 locksmithd[1613]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 9 01:04:03.894245 update_engine[1569]: E20241009 01:04:03.894177 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 9 01:04:03.894368 update_engine[1569]: I20241009 01:04:03.894297 1569 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Oct 9 01:04:03.986399 kubelet[2689]: E1009 01:04:03.986325 2689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.46.183.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116-0-0-5-47b5cb1617?timeout=10s\": dial tcp 78.46.183.65:6443: connect: connection refused" interval="800ms" Oct 9 01:04:04.088115 kubelet[2689]: I1009 01:04:04.087988 2689 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:04.089960 kubelet[2689]: E1009 01:04:04.089891 2689 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.46.183.65:6443/api/v1/nodes\": dial tcp 78.46.183.65:6443: connect: connection refused" node="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:04.269235 kubelet[2689]: W1009 01:04:04.269192 2689 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://78.46.183.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.46.183.65:6443: connect: connection refused Oct 9 01:04:04.269359 kubelet[2689]: E1009 01:04:04.269249 2689 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://78.46.183.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 78.46.183.65:6443: connect: connection refused Oct 9 01:04:04.359119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1622484674.mount: Deactivated successfully. Oct 9 01:04:04.367862 containerd[1589]: time="2024-10-09T01:04:04.367011950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:04:04.368972 containerd[1589]: time="2024-10-09T01:04:04.368152554Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:04:04.369259 containerd[1589]: time="2024-10-09T01:04:04.369203957Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Oct 9 01:04:04.370086 containerd[1589]: time="2024-10-09T01:04:04.370042880Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 01:04:04.371844 containerd[1589]: time="2024-10-09T01:04:04.370827763Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:04:04.371844 containerd[1589]: time="2024-10-09T01:04:04.371803966Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 01:04:04.372086 containerd[1589]: time="2024-10-09T01:04:04.372060127Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:04:04.375566 containerd[1589]: time="2024-10-09T01:04:04.375529098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:04:04.377394 containerd[1589]: time="2024-10-09T01:04:04.377358624Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 545.747862ms" Oct 9 01:04:04.381387 containerd[1589]: time="2024-10-09T01:04:04.381336997Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 552.728445ms" Oct 9 01:04:04.383223 containerd[1589]: time="2024-10-09T01:04:04.383188083Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 554.154289ms" Oct 9 01:04:04.444297 kubelet[2689]: W1009 01:04:04.444216 2689 
reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://78.46.183.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116-0-0-5-47b5cb1617&limit=500&resourceVersion=0": dial tcp 78.46.183.65:6443: connect: connection refused Oct 9 01:04:04.444888 kubelet[2689]: E1009 01:04:04.444829 2689 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://78.46.183.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116-0-0-5-47b5cb1617&limit=500&resourceVersion=0": dial tcp 78.46.183.65:6443: connect: connection refused Oct 9 01:04:04.534623 containerd[1589]: time="2024-10-09T01:04:04.534514575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:04:04.534899 containerd[1589]: time="2024-10-09T01:04:04.534798416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:04:04.535208 containerd[1589]: time="2024-10-09T01:04:04.535037777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:04.535208 containerd[1589]: time="2024-10-09T01:04:04.535148617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:04.535387 containerd[1589]: time="2024-10-09T01:04:04.535146577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:04:04.535387 containerd[1589]: time="2024-10-09T01:04:04.535282258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:04:04.535387 containerd[1589]: time="2024-10-09T01:04:04.535305098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:04.536099 containerd[1589]: time="2024-10-09T01:04:04.536016540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:04.536099 containerd[1589]: time="2024-10-09T01:04:04.535887539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:04:04.536367 containerd[1589]: time="2024-10-09T01:04:04.536330461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:04:04.536472 containerd[1589]: time="2024-10-09T01:04:04.536446021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:04.536806 containerd[1589]: time="2024-10-09T01:04:04.536760382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:04.626400 containerd[1589]: time="2024-10-09T01:04:04.625681592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4116-0-0-5-47b5cb1617,Uid:45a58fe30cafbc86f088d5b5a82b51f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad637f41cc628643ef9745d8c48bc52df872bef99ecef3329ee8576c6d5a1def\"" Oct 9 01:04:04.630043 containerd[1589]: time="2024-10-09T01:04:04.629534764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4116-0-0-5-47b5cb1617,Uid:01d5afc4a5082c9a734ba87c39d8fd43,Namespace:kube-system,Attempt:0,} returns sandbox id \"526184118f5e054b24770fa0752a1f61466b9586adf29646b90de4e5fa7dc639\"" Oct 9 01:04:04.633476 containerd[1589]: time="2024-10-09T01:04:04.633431937Z" level=info msg="CreateContainer within sandbox \"ad637f41cc628643ef9745d8c48bc52df872bef99ecef3329ee8576c6d5a1def\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 01:04:04.634694 containerd[1589]: time="2024-10-09T01:04:04.634587820Z" level=info msg="CreateContainer within sandbox \"526184118f5e054b24770fa0752a1f61466b9586adf29646b90de4e5fa7dc639\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 01:04:04.639571 containerd[1589]: time="2024-10-09T01:04:04.639449436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4116-0-0-5-47b5cb1617,Uid:289b9ff808f926c57430b52e98893a62,Namespace:kube-system,Attempt:0,} returns sandbox id \"368e04bb3329d22a52df79d2cab7ae0889d7fe14746fa3e0bb7edfa533df67fc\"" Oct 9 01:04:04.642729 containerd[1589]: time="2024-10-09T01:04:04.642691287Z" level=info msg="CreateContainer within sandbox \"368e04bb3329d22a52df79d2cab7ae0889d7fe14746fa3e0bb7edfa533df67fc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 01:04:04.652703 containerd[1589]: time="2024-10-09T01:04:04.652659719Z" level=info msg="CreateContainer within sandbox \"ad637f41cc628643ef9745d8c48bc52df872bef99ecef3329ee8576c6d5a1def\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d643b4bdcf975543ddd95575d9a34c24b8817ec1ae5dbca5b249d70f37fd6934\"" Oct 9 01:04:04.654379 containerd[1589]: time="2024-10-09T01:04:04.653896403Z" level=info msg="StartContainer for \"d643b4bdcf975543ddd95575d9a34c24b8817ec1ae5dbca5b249d70f37fd6934\"" Oct 9 01:04:04.660308 containerd[1589]: time="2024-10-09T01:04:04.660267824Z" level=info msg="CreateContainer within sandbox \"368e04bb3329d22a52df79d2cab7ae0889d7fe14746fa3e0bb7edfa533df67fc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c01dc2a5313c93a1d29cca8e28b79034967b95b23d9155f6dc835ff6ebdcfeba\"" Oct 9 01:04:04.661453 containerd[1589]: time="2024-10-09T01:04:04.661361108Z" level=info msg="StartContainer for \"c01dc2a5313c93a1d29cca8e28b79034967b95b23d9155f6dc835ff6ebdcfeba\"" Oct 9 01:04:04.663274 containerd[1589]: time="2024-10-09T01:04:04.663226954Z" level=info msg="CreateContainer within sandbox \"526184118f5e054b24770fa0752a1f61466b9586adf29646b90de4e5fa7dc639\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e8d48f7bc8ae947edbe9f73403356d6080e471fee4b12a8d6e1930979547651e\"" Oct 9 01:04:04.664511 containerd[1589]: time="2024-10-09T01:04:04.663658555Z" level=info msg="StartContainer for \"e8d48f7bc8ae947edbe9f73403356d6080e471fee4b12a8d6e1930979547651e\"" Oct 9 01:04:04.693773 kubelet[2689]: W1009 01:04:04.693715 2689 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://78.46.183.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.46.183.65:6443: connect: connection refused Oct 9 01:04:04.694308 kubelet[2689]: E1009 01:04:04.694065 2689 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://78.46.183.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 78.46.183.65:6443: connect: connection refused Oct 9 01:04:04.698044 kubelet[2689]: W1009 01:04:04.698003 2689 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://78.46.183.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.46.183.65:6443: connect: connection refused Oct 9 01:04:04.698403 kubelet[2689]: E1009 01:04:04.698386 2689 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://78.46.183.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 78.46.183.65:6443: connect: connection refused Oct 9 01:04:04.754459 containerd[1589]: time="2024-10-09T01:04:04.754414890Z" level=info msg="StartContainer for \"c01dc2a5313c93a1d29cca8e28b79034967b95b23d9155f6dc835ff6ebdcfeba\" returns successfully" Oct 9 01:04:04.777821 containerd[1589]: time="2024-10-09T01:04:04.777780166Z" level=info msg="StartContainer for \"e8d48f7bc8ae947edbe9f73403356d6080e471fee4b12a8d6e1930979547651e\" returns successfully" Oct 9 01:04:04.788294 kubelet[2689]: E1009 01:04:04.788262 2689 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://78.46.183.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116-0-0-5-47b5cb1617?timeout=10s\": dial tcp 78.46.183.65:6443: connect: connection refused" interval="1.6s" Oct 9 01:04:04.799719 containerd[1589]: time="2024-10-09T01:04:04.799677837Z" level=info msg="StartContainer for \"d643b4bdcf975543ddd95575d9a34c24b8817ec1ae5dbca5b249d70f37fd6934\" returns successfully" Oct 9 01:04:04.894685 kubelet[2689]: I1009 01:04:04.894502 2689 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:04.896369 kubelet[2689]: E1009 01:04:04.896346 2689 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://78.46.183.65:6443/api/v1/nodes\": dial tcp 78.46.183.65:6443: connect: connection refused" node="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:06.499982 kubelet[2689]: I1009 01:04:06.499188 2689 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:07.812778 kubelet[2689]: E1009 01:04:07.812735 2689 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4116-0-0-5-47b5cb1617\" not found" node="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:07.815656 kubelet[2689]: I1009 01:04:07.815326 2689 kubelet_node_status.go:76] "Successfully registered node" node="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:07.861043 kubelet[2689]: E1009 01:04:07.861007 2689 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4116-0-0-5-47b5cb1617.17fca33dec8c2e4b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4116-0-0-5-47b5cb1617,UID:ci-4116-0-0-5-47b5cb1617,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4116-0-0-5-47b5cb1617,},FirstTimestamp:2024-10-09 01:04:03.360370251 +0000 UTC m=+0.876791550,LastTimestamp:2024-10-09 01:04:03.360370251 +0000 UTC m=+0.876791550,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4116-0-0-5-47b5cb1617,}" Oct 9 01:04:07.924471 kubelet[2689]: E1009 01:04:07.924289 2689 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4116-0-0-5-47b5cb1617.17fca33defa559be default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4116-0-0-5-47b5cb1617,UID:ci-4116-0-0-5-47b5cb1617,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4116-0-0-5-47b5cb1617 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4116-0-0-5-47b5cb1617,},FirstTimestamp:2024-10-09 01:04:03.412351422 +0000 UTC m=+0.928772721,LastTimestamp:2024-10-09 01:04:03.412351422 +0000 UTC m=+0.928772721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4116-0-0-5-47b5cb1617,}" Oct 9 01:04:07.944505 kubelet[2689]: E1009 01:04:07.943848 2689 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4116-0-0-5-47b5cb1617\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:08.360120 kubelet[2689]: I1009 01:04:08.360044 2689 apiserver.go:52] "Watching apiserver" Oct 9 01:04:08.380056 kubelet[2689]: I1009 01:04:08.379964 2689 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 01:04:10.651181 systemd[1]: Reloading requested from client PID 2964 ('systemctl') (unit session-7.scope)... Oct 9 01:04:10.651515 systemd[1]: Reloading... Oct 9 01:04:10.756955 zram_generator::config[3012]: No configuration found. Oct 9 01:04:10.859675 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:04:10.927871 systemd[1]: Reloading finished in 275 ms. Oct 9 01:04:10.969294 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:04:10.983469 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 01:04:10.984377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:04:10.993840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:04:11.113068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:04:11.125398 (kubelet)[3058]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 01:04:11.198651 kubelet[3058]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
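
Everything from 01:04:03 to 01:04:07 above is the kubelet starting before its own static kube-apiserver pod is reachable: each request to https://78.46.183.65:6443 fails with "connect: connection refused", and the lease controller's retry interval doubles after every failure (interval="200ms", then "400ms", "800ms" and "1.6s") until the apiserver container started at 01:04:04 begins answering and the node registers at 01:04:07. The sketch below only illustrates that doubling-with-a-cap retry pattern; it is an editor's illustration of the behaviour visible in the log, not the kubelet's actual client-go code, and the /healthz probe URL, cap and try count are assumptions:

    import time
    import urllib.error
    import urllib.request

    def wait_for_apiserver(url="https://78.46.183.65:6443/healthz",
                           initial=0.2, cap=7.0, max_tries=20):
        """Retry with a doubling delay (0.2s, 0.4s, 0.8s, ...) until the URL answers."""
        delay = initial
        for attempt in range(1, max_tries + 1):
            try:
                with urllib.request.urlopen(url, timeout=2) as resp:
                    print(f"attempt {attempt}: HTTP {resp.status}")
                    return True
            except (urllib.error.URLError, OSError) as err:
                # Connection refused, TLS errors, timeouts, etc. all land here.
                print(f"attempt {attempt}: {err}; retrying in {delay:.1f}s")
                time.sleep(delay)
                delay = min(delay * 2, cap)
        return False

    if __name__ == "__main__":
        wait_for_apiserver()
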
Oct 9 01:04:11.198651 kubelet[3058]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 01:04:11.198651 kubelet[3058]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:04:11.198651 kubelet[3058]: I1009 01:04:11.196676 3058 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 01:04:11.210025 kubelet[3058]: I1009 01:04:11.209115 3058 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 01:04:11.210217 kubelet[3058]: I1009 01:04:11.210195 3058 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 01:04:11.210649 kubelet[3058]: I1009 01:04:11.210507 3058 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 01:04:11.212366 kubelet[3058]: I1009 01:04:11.212333 3058 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 01:04:11.214703 kubelet[3058]: I1009 01:04:11.214567 3058 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:04:11.221516 kubelet[3058]: I1009 01:04:11.221475 3058 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 01:04:11.222476 kubelet[3058]: I1009 01:04:11.222163 3058 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 01:04:11.222476 kubelet[3058]: I1009 01:04:11.222326 3058 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 01:04:11.222476 kubelet[3058]: I1009 01:04:11.222344 3058 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 01:04:11.222476 kubelet[3058]: I1009 01:04:11.222354 3058 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 01:04:11.222476 kubelet[3058]: I1009 
01:04:11.222383 3058 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:04:11.222716 kubelet[3058]: I1009 01:04:11.222701 3058 kubelet.go:396] "Attempting to sync node with API server" Oct 9 01:04:11.223488 kubelet[3058]: I1009 01:04:11.223474 3058 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 01:04:11.223581 kubelet[3058]: I1009 01:04:11.223572 3058 kubelet.go:312] "Adding apiserver pod source" Oct 9 01:04:11.223651 kubelet[3058]: I1009 01:04:11.223643 3058 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 01:04:11.227039 kubelet[3058]: I1009 01:04:11.226322 3058 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 01:04:11.227039 kubelet[3058]: I1009 01:04:11.226616 3058 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 01:04:11.227267 kubelet[3058]: I1009 01:04:11.227251 3058 server.go:1256] "Started kubelet" Oct 9 01:04:11.230138 kubelet[3058]: I1009 01:04:11.230118 3058 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 01:04:11.232185 kubelet[3058]: I1009 01:04:11.232159 3058 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 01:04:11.237323 kubelet[3058]: I1009 01:04:11.237301 3058 server.go:461] "Adding debug handlers to kubelet server" Oct 9 01:04:11.239723 kubelet[3058]: I1009 01:04:11.239703 3058 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 01:04:11.240006 kubelet[3058]: I1009 01:04:11.239991 3058 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 01:04:11.242630 kubelet[3058]: I1009 01:04:11.242600 3058 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 01:04:11.244164 kubelet[3058]: I1009 01:04:11.244145 3058 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 01:04:11.244422 kubelet[3058]: I1009 01:04:11.244410 3058 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 01:04:11.260182 kubelet[3058]: I1009 01:04:11.260155 3058 factory.go:221] Registration of the systemd container factory successfully Oct 9 01:04:11.264950 kubelet[3058]: I1009 01:04:11.264071 3058 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 01:04:11.279867 kubelet[3058]: I1009 01:04:11.279836 3058 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 01:04:11.281758 kubelet[3058]: I1009 01:04:11.281735 3058 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 01:04:11.281857 kubelet[3058]: I1009 01:04:11.281848 3058 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 01:04:11.281914 kubelet[3058]: I1009 01:04:11.281906 3058 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 01:04:11.282062 kubelet[3058]: E1009 01:04:11.282050 3058 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 01:04:11.307776 kubelet[3058]: I1009 01:04:11.307720 3058 factory.go:221] Registration of the containerd container factory successfully Oct 9 01:04:11.309590 kubelet[3058]: E1009 01:04:11.309120 3058 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 01:04:11.349114 kubelet[3058]: I1009 01:04:11.348999 3058 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:11.358773 kubelet[3058]: I1009 01:04:11.358638 3058 kubelet_node_status.go:112] "Node was previously registered" node="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:11.360141 kubelet[3058]: I1009 01:04:11.359994 3058 kubelet_node_status.go:76] "Successfully registered node" node="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:11.385618 kubelet[3058]: E1009 01:04:11.385552 3058 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 01:04:11.396890 kubelet[3058]: I1009 01:04:11.396803 3058 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 01:04:11.396890 kubelet[3058]: I1009 01:04:11.396860 3058 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 01:04:11.397149 kubelet[3058]: I1009 01:04:11.396962 3058 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:04:11.397430 kubelet[3058]: I1009 01:04:11.397327 3058 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 01:04:11.397430 kubelet[3058]: I1009 01:04:11.397368 3058 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 01:04:11.397430 kubelet[3058]: I1009 01:04:11.397376 3058 policy_none.go:49] "None policy: Start" Oct 9 01:04:11.398407 kubelet[3058]: I1009 01:04:11.398358 3058 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 01:04:11.398595 kubelet[3058]: I1009 01:04:11.398379 3058 state_mem.go:35] "Initializing new in-memory state store" Oct 9 01:04:11.398802 kubelet[3058]: I1009 01:04:11.398720 3058 state_mem.go:75] "Updated machine memory state" Oct 9 01:04:11.400178 kubelet[3058]: I1009 01:04:11.400163 3058 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 01:04:11.401343 kubelet[3058]: I1009 01:04:11.401158 3058 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 01:04:11.589839 kubelet[3058]: I1009 01:04:11.586620 3058 topology_manager.go:215] "Topology Admit Handler" podUID="01d5afc4a5082c9a734ba87c39d8fd43" podNamespace="kube-system" podName="kube-apiserver-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:11.589839 kubelet[3058]: I1009 01:04:11.586735 3058 topology_manager.go:215] "Topology Admit Handler" podUID="289b9ff808f926c57430b52e98893a62" podNamespace="kube-system" podName="kube-controller-manager-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:11.589839 kubelet[3058]: I1009 01:04:11.586854 3058 topology_manager.go:215] "Topology Admit Handler" podUID="45a58fe30cafbc86f088d5b5a82b51f9" podNamespace="kube-system" podName="kube-scheduler-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:11.647187 kubelet[3058]: I1009 01:04:11.647154 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01d5afc4a5082c9a734ba87c39d8fd43-ca-certs\") pod \"kube-apiserver-ci-4116-0-0-5-47b5cb1617\" (UID: \"01d5afc4a5082c9a734ba87c39d8fd43\") " pod="kube-system/kube-apiserver-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:11.647405 kubelet[3058]: I1009 01:04:11.647393 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01d5afc4a5082c9a734ba87c39d8fd43-k8s-certs\") pod \"kube-apiserver-ci-4116-0-0-5-47b5cb1617\" (UID: 
\"01d5afc4a5082c9a734ba87c39d8fd43\") " pod="kube-system/kube-apiserver-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:11.647537 kubelet[3058]: I1009 01:04:11.647526 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01d5afc4a5082c9a734ba87c39d8fd43-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4116-0-0-5-47b5cb1617\" (UID: \"01d5afc4a5082c9a734ba87c39d8fd43\") " pod="kube-system/kube-apiserver-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:11.647641 kubelet[3058]: I1009 01:04:11.647630 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/289b9ff808f926c57430b52e98893a62-k8s-certs\") pod \"kube-controller-manager-ci-4116-0-0-5-47b5cb1617\" (UID: \"289b9ff808f926c57430b52e98893a62\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:11.647727 kubelet[3058]: I1009 01:04:11.647718 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45a58fe30cafbc86f088d5b5a82b51f9-kubeconfig\") pod \"kube-scheduler-ci-4116-0-0-5-47b5cb1617\" (UID: \"45a58fe30cafbc86f088d5b5a82b51f9\") " pod="kube-system/kube-scheduler-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:11.647839 kubelet[3058]: I1009 01:04:11.647830 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/289b9ff808f926c57430b52e98893a62-ca-certs\") pod \"kube-controller-manager-ci-4116-0-0-5-47b5cb1617\" (UID: \"289b9ff808f926c57430b52e98893a62\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:11.647956 kubelet[3058]: I1009 01:04:11.647942 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/289b9ff808f926c57430b52e98893a62-flexvolume-dir\") pod \"kube-controller-manager-ci-4116-0-0-5-47b5cb1617\" (UID: \"289b9ff808f926c57430b52e98893a62\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:11.648062 kubelet[3058]: I1009 01:04:11.648052 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/289b9ff808f926c57430b52e98893a62-kubeconfig\") pod \"kube-controller-manager-ci-4116-0-0-5-47b5cb1617\" (UID: \"289b9ff808f926c57430b52e98893a62\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:11.648225 kubelet[3058]: I1009 01:04:11.648213 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/289b9ff808f926c57430b52e98893a62-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4116-0-0-5-47b5cb1617\" (UID: \"289b9ff808f926c57430b52e98893a62\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:12.234943 kubelet[3058]: I1009 01:04:12.234846 3058 apiserver.go:52] "Watching apiserver" Oct 9 01:04:12.245158 kubelet[3058]: I1009 01:04:12.245091 3058 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 01:04:12.401957 kubelet[3058]: I1009 01:04:12.400694 3058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-ci-4116-0-0-5-47b5cb1617" podStartSLOduration=1.4006460889999999 podStartE2EDuration="1.400646089s" podCreationTimestamp="2024-10-09 01:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:04:12.376068056 +0000 UTC m=+1.246695891" watchObservedRunningTime="2024-10-09 01:04:12.400646089 +0000 UTC m=+1.271273924" Oct 9 01:04:12.429694 kubelet[3058]: I1009 01:04:12.429632 3058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4116-0-0-5-47b5cb1617" podStartSLOduration=1.429594895 podStartE2EDuration="1.429594895s" podCreationTimestamp="2024-10-09 01:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:04:12.404019939 +0000 UTC m=+1.274647774" watchObservedRunningTime="2024-10-09 01:04:12.429594895 +0000 UTC m=+1.300222730" Oct 9 01:04:12.430128 kubelet[3058]: I1009 01:04:12.430001 3058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4116-0-0-5-47b5cb1617" podStartSLOduration=1.429976056 podStartE2EDuration="1.429976056s" podCreationTimestamp="2024-10-09 01:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:04:12.429391815 +0000 UTC m=+1.300019650" watchObservedRunningTime="2024-10-09 01:04:12.429976056 +0000 UTC m=+1.300603891" Oct 9 01:04:13.888131 update_engine[1569]: I20241009 01:04:13.888033 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 9 01:04:13.888612 update_engine[1569]: I20241009 01:04:13.888403 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 9 01:04:13.888746 update_engine[1569]: I20241009 01:04:13.888705 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 9 01:04:13.889653 update_engine[1569]: E20241009 01:04:13.889598 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 9 01:04:13.889737 update_engine[1569]: I20241009 01:04:13.889690 1569 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Oct 9 01:04:16.108816 sudo[2077]: pam_unix(sudo:session): session closed for user root Oct 9 01:04:16.270358 sshd[2073]: pam_unix(sshd:session): session closed for user core Oct 9 01:04:16.275789 systemd[1]: sshd@7-78.46.183.65:22-147.75.109.163:50802.service: Deactivated successfully. Oct 9 01:04:16.283239 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 01:04:16.285348 systemd-logind[1565]: Session 7 logged out. Waiting for processes to exit. Oct 9 01:04:16.286360 systemd-logind[1565]: Removed session 7. Oct 9 01:04:23.417788 kubelet[3058]: I1009 01:04:23.417338 3058 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 01:04:23.419118 containerd[1589]: time="2024-10-09T01:04:23.419074163Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 9 01:04:23.420061 kubelet[3058]: I1009 01:04:23.420028 3058 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 01:04:23.887137 update_engine[1569]: I20241009 01:04:23.887029 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 9 01:04:23.887757 update_engine[1569]: I20241009 01:04:23.887396 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 9 01:04:23.887757 update_engine[1569]: I20241009 01:04:23.887662 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 9 01:04:23.888581 update_engine[1569]: E20241009 01:04:23.888479 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 9 01:04:23.888670 update_engine[1569]: I20241009 01:04:23.888609 1569 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Oct 9 01:04:24.318900 kubelet[3058]: I1009 01:04:24.318579 3058 topology_manager.go:215] "Topology Admit Handler" podUID="f9943364-70fb-48ec-b9b9-1f7b48f54db8" podNamespace="kube-system" podName="kube-proxy-pzq95" Oct 9 01:04:24.428673 kubelet[3058]: I1009 01:04:24.428618 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr255\" (UniqueName: \"kubernetes.io/projected/f9943364-70fb-48ec-b9b9-1f7b48f54db8-kube-api-access-qr255\") pod \"kube-proxy-pzq95\" (UID: \"f9943364-70fb-48ec-b9b9-1f7b48f54db8\") " pod="kube-system/kube-proxy-pzq95" Oct 9 01:04:24.429394 kubelet[3058]: I1009 01:04:24.428701 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9943364-70fb-48ec-b9b9-1f7b48f54db8-xtables-lock\") pod \"kube-proxy-pzq95\" (UID: \"f9943364-70fb-48ec-b9b9-1f7b48f54db8\") " pod="kube-system/kube-proxy-pzq95" Oct 9 01:04:24.429394 kubelet[3058]: I1009 01:04:24.428746 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9943364-70fb-48ec-b9b9-1f7b48f54db8-lib-modules\") pod \"kube-proxy-pzq95\" (UID: \"f9943364-70fb-48ec-b9b9-1f7b48f54db8\") " pod="kube-system/kube-proxy-pzq95" Oct 9 01:04:24.429394 kubelet[3058]: I1009 01:04:24.428823 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f9943364-70fb-48ec-b9b9-1f7b48f54db8-kube-proxy\") pod \"kube-proxy-pzq95\" (UID: \"f9943364-70fb-48ec-b9b9-1f7b48f54db8\") " pod="kube-system/kube-proxy-pzq95" Oct 9 01:04:24.519360 kubelet[3058]: I1009 01:04:24.519238 3058 topology_manager.go:215] "Topology Admit Handler" podUID="d7a33b2d-aafe-45d5-af24-8ee9153b5f65" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-ppmrf" Oct 9 01:04:24.626661 containerd[1589]: time="2024-10-09T01:04:24.625991547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pzq95,Uid:f9943364-70fb-48ec-b9b9-1f7b48f54db8,Namespace:kube-system,Attempt:0,}" Oct 9 01:04:24.630291 kubelet[3058]: I1009 01:04:24.630146 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kz8m\" (UniqueName: \"kubernetes.io/projected/d7a33b2d-aafe-45d5-af24-8ee9153b5f65-kube-api-access-8kz8m\") pod \"tigera-operator-5d56685c77-ppmrf\" (UID: \"d7a33b2d-aafe-45d5-af24-8ee9153b5f65\") " pod="tigera-operator/tigera-operator-5d56685c77-ppmrf" Oct 9 01:04:24.630291 kubelet[3058]: I1009 01:04:24.630214 
3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d7a33b2d-aafe-45d5-af24-8ee9153b5f65-var-lib-calico\") pod \"tigera-operator-5d56685c77-ppmrf\" (UID: \"d7a33b2d-aafe-45d5-af24-8ee9153b5f65\") " pod="tigera-operator/tigera-operator-5d56685c77-ppmrf" Oct 9 01:04:24.653507 containerd[1589]: time="2024-10-09T01:04:24.653392140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:04:24.653670 containerd[1589]: time="2024-10-09T01:04:24.653493460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:04:24.653670 containerd[1589]: time="2024-10-09T01:04:24.653591181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:24.653821 containerd[1589]: time="2024-10-09T01:04:24.653752141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:24.691117 containerd[1589]: time="2024-10-09T01:04:24.691006600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pzq95,Uid:f9943364-70fb-48ec-b9b9-1f7b48f54db8,Namespace:kube-system,Attempt:0,} returns sandbox id \"69587dbfa16569b0912a59c9391e414db910cdd7eea0e04371c847da74dbd72a\"" Oct 9 01:04:24.694762 containerd[1589]: time="2024-10-09T01:04:24.694662490Z" level=info msg="CreateContainer within sandbox \"69587dbfa16569b0912a59c9391e414db910cdd7eea0e04371c847da74dbd72a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 01:04:24.712048 containerd[1589]: time="2024-10-09T01:04:24.711995776Z" level=info msg="CreateContainer within sandbox \"69587dbfa16569b0912a59c9391e414db910cdd7eea0e04371c847da74dbd72a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6bdead244db52484374028a44cfde6a364bd6e9744b134578d26ac4f190f9d9c\"" Oct 9 01:04:24.713993 containerd[1589]: time="2024-10-09T01:04:24.712814618Z" level=info msg="StartContainer for \"6bdead244db52484374028a44cfde6a364bd6e9744b134578d26ac4f190f9d9c\"" Oct 9 01:04:24.782035 containerd[1589]: time="2024-10-09T01:04:24.781986882Z" level=info msg="StartContainer for \"6bdead244db52484374028a44cfde6a364bd6e9744b134578d26ac4f190f9d9c\" returns successfully" Oct 9 01:04:24.825734 containerd[1589]: time="2024-10-09T01:04:24.825338078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-ppmrf,Uid:d7a33b2d-aafe-45d5-af24-8ee9153b5f65,Namespace:tigera-operator,Attempt:0,}" Oct 9 01:04:24.858000 containerd[1589]: time="2024-10-09T01:04:24.856080039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:04:24.858000 containerd[1589]: time="2024-10-09T01:04:24.856135959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:04:24.858000 containerd[1589]: time="2024-10-09T01:04:24.856146920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:24.858000 containerd[1589]: time="2024-10-09T01:04:24.856240000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:24.931247 containerd[1589]: time="2024-10-09T01:04:24.931125999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-ppmrf,Uid:d7a33b2d-aafe-45d5-af24-8ee9153b5f65,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7f79b8feb1e0b686a729f96ea5bc0e1f79b3bca1d4c3a750c2e1295b4a992186\"" Oct 9 01:04:24.934256 containerd[1589]: time="2024-10-09T01:04:24.934224687Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 9 01:04:26.453430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4067532941.mount: Deactivated successfully. Oct 9 01:04:26.787222 containerd[1589]: time="2024-10-09T01:04:26.786957444Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:26.788088 containerd[1589]: time="2024-10-09T01:04:26.787838766Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485927" Oct 9 01:04:26.788843 containerd[1589]: time="2024-10-09T01:04:26.788805689Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:26.791326 containerd[1589]: time="2024-10-09T01:04:26.791288775Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:26.792489 containerd[1589]: time="2024-10-09T01:04:26.792263578Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 1.857546289s" Oct 9 01:04:26.792489 containerd[1589]: time="2024-10-09T01:04:26.792294698Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\"" Oct 9 01:04:26.795271 containerd[1589]: time="2024-10-09T01:04:26.795152265Z" level=info msg="CreateContainer within sandbox \"7f79b8feb1e0b686a729f96ea5bc0e1f79b3bca1d4c3a750c2e1295b4a992186\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 9 01:04:26.806709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount731549091.mount: Deactivated successfully. 
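
The PullImage / ImageCreate / "Pulled image ... with image id ... repo digest ..." sequence above is kubelet driving containerd over CRI. As a rough sketch only (talking to the containerd Go client directly rather than through CRI, and assuming the default socket path), an equivalent pull of the tigera-operator image looks like this:

```go
// Minimal sketch: pull the same image with the containerd client and print
// the resolved digest, mirroring the ImageCreate/Pulled lines above.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.34.3", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
```

Because it targets the k8s.io namespace, such a client sees the same content store that the kubelet/containerd records above are writing to.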
Oct 9 01:04:26.808850 containerd[1589]: time="2024-10-09T01:04:26.808794301Z" level=info msg="CreateContainer within sandbox \"7f79b8feb1e0b686a729f96ea5bc0e1f79b3bca1d4c3a750c2e1295b4a992186\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"eb2a0ce78f8e0b70ad07d33e3bcdea1f76c3cd6f98ef64ff5316e4639d81fd76\"" Oct 9 01:04:26.810741 containerd[1589]: time="2024-10-09T01:04:26.810668826Z" level=info msg="StartContainer for \"eb2a0ce78f8e0b70ad07d33e3bcdea1f76c3cd6f98ef64ff5316e4639d81fd76\"" Oct 9 01:04:26.866500 containerd[1589]: time="2024-10-09T01:04:26.865915811Z" level=info msg="StartContainer for \"eb2a0ce78f8e0b70ad07d33e3bcdea1f76c3cd6f98ef64ff5316e4639d81fd76\" returns successfully" Oct 9 01:04:27.398153 kubelet[3058]: I1009 01:04:27.396742 3058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pzq95" podStartSLOduration=3.396662953 podStartE2EDuration="3.396662953s" podCreationTimestamp="2024-10-09 01:04:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:04:25.387260125 +0000 UTC m=+14.257887920" watchObservedRunningTime="2024-10-09 01:04:27.396662953 +0000 UTC m=+16.267290828" Oct 9 01:04:30.805045 kubelet[3058]: I1009 01:04:30.805004 3058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-ppmrf" podStartSLOduration=4.944752939 podStartE2EDuration="6.804958155s" podCreationTimestamp="2024-10-09 01:04:24 +0000 UTC" firstStartedPulling="2024-10-09 01:04:24.932675403 +0000 UTC m=+13.803303198" lastFinishedPulling="2024-10-09 01:04:26.792880579 +0000 UTC m=+15.663508414" observedRunningTime="2024-10-09 01:04:27.398186157 +0000 UTC m=+16.268814072" watchObservedRunningTime="2024-10-09 01:04:30.804958155 +0000 UTC m=+19.675585990" Oct 9 01:04:30.805556 kubelet[3058]: I1009 01:04:30.805120 3058 topology_manager.go:215] "Topology Admit Handler" podUID="3cb1d62a-2af6-46cf-8dec-35d079684d8d" podNamespace="calico-system" podName="calico-typha-6b45c46b5d-cfkrf" Oct 9 01:04:30.870051 kubelet[3058]: I1009 01:04:30.870012 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3cb1d62a-2af6-46cf-8dec-35d079684d8d-typha-certs\") pod \"calico-typha-6b45c46b5d-cfkrf\" (UID: \"3cb1d62a-2af6-46cf-8dec-35d079684d8d\") " pod="calico-system/calico-typha-6b45c46b5d-cfkrf" Oct 9 01:04:30.870185 kubelet[3058]: I1009 01:04:30.870067 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cb1d62a-2af6-46cf-8dec-35d079684d8d-tigera-ca-bundle\") pod \"calico-typha-6b45c46b5d-cfkrf\" (UID: \"3cb1d62a-2af6-46cf-8dec-35d079684d8d\") " pod="calico-system/calico-typha-6b45c46b5d-cfkrf" Oct 9 01:04:30.870185 kubelet[3058]: I1009 01:04:30.870093 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvs9d\" (UniqueName: \"kubernetes.io/projected/3cb1d62a-2af6-46cf-8dec-35d079684d8d-kube-api-access-pvs9d\") pod \"calico-typha-6b45c46b5d-cfkrf\" (UID: \"3cb1d62a-2af6-46cf-8dec-35d079684d8d\") " pod="calico-system/calico-typha-6b45c46b5d-cfkrf" Oct 9 01:04:30.918452 kubelet[3058]: I1009 01:04:30.918399 3058 topology_manager.go:215] "Topology Admit Handler" podUID="79ac5b64-e35a-447e-b1d0-fc6e770c592c" 
podNamespace="calico-system" podName="calico-node-gm4ng" Oct 9 01:04:31.041755 kubelet[3058]: I1009 01:04:31.041615 3058 topology_manager.go:215] "Topology Admit Handler" podUID="962372e4-80c6-4e39-87f0-b400601741aa" podNamespace="calico-system" podName="csi-node-driver-8w6fh" Oct 9 01:04:31.044071 kubelet[3058]: E1009 01:04:31.043129 3058 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8w6fh" podUID="962372e4-80c6-4e39-87f0-b400601741aa" Oct 9 01:04:31.076254 kubelet[3058]: I1009 01:04:31.076126 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/79ac5b64-e35a-447e-b1d0-fc6e770c592c-var-run-calico\") pod \"calico-node-gm4ng\" (UID: \"79ac5b64-e35a-447e-b1d0-fc6e770c592c\") " pod="calico-system/calico-node-gm4ng" Oct 9 01:04:31.076254 kubelet[3058]: I1009 01:04:31.076177 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/79ac5b64-e35a-447e-b1d0-fc6e770c592c-flexvol-driver-host\") pod \"calico-node-gm4ng\" (UID: \"79ac5b64-e35a-447e-b1d0-fc6e770c592c\") " pod="calico-system/calico-node-gm4ng" Oct 9 01:04:31.076254 kubelet[3058]: I1009 01:04:31.076205 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79ac5b64-e35a-447e-b1d0-fc6e770c592c-lib-modules\") pod \"calico-node-gm4ng\" (UID: \"79ac5b64-e35a-447e-b1d0-fc6e770c592c\") " pod="calico-system/calico-node-gm4ng" Oct 9 01:04:31.076254 kubelet[3058]: I1009 01:04:31.076229 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79ac5b64-e35a-447e-b1d0-fc6e770c592c-tigera-ca-bundle\") pod \"calico-node-gm4ng\" (UID: \"79ac5b64-e35a-447e-b1d0-fc6e770c592c\") " pod="calico-system/calico-node-gm4ng" Oct 9 01:04:31.076254 kubelet[3058]: I1009 01:04:31.076257 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/79ac5b64-e35a-447e-b1d0-fc6e770c592c-cni-log-dir\") pod \"calico-node-gm4ng\" (UID: \"79ac5b64-e35a-447e-b1d0-fc6e770c592c\") " pod="calico-system/calico-node-gm4ng" Oct 9 01:04:31.076503 kubelet[3058]: I1009 01:04:31.076281 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/79ac5b64-e35a-447e-b1d0-fc6e770c592c-node-certs\") pod \"calico-node-gm4ng\" (UID: \"79ac5b64-e35a-447e-b1d0-fc6e770c592c\") " pod="calico-system/calico-node-gm4ng" Oct 9 01:04:31.076503 kubelet[3058]: I1009 01:04:31.076307 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79ac5b64-e35a-447e-b1d0-fc6e770c592c-xtables-lock\") pod \"calico-node-gm4ng\" (UID: \"79ac5b64-e35a-447e-b1d0-fc6e770c592c\") " pod="calico-system/calico-node-gm4ng" Oct 9 01:04:31.076503 kubelet[3058]: I1009 01:04:31.076359 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/79ac5b64-e35a-447e-b1d0-fc6e770c592c-var-lib-calico\") pod \"calico-node-gm4ng\" (UID: \"79ac5b64-e35a-447e-b1d0-fc6e770c592c\") " pod="calico-system/calico-node-gm4ng" Oct 9 01:04:31.076503 kubelet[3058]: I1009 01:04:31.076386 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/79ac5b64-e35a-447e-b1d0-fc6e770c592c-policysync\") pod \"calico-node-gm4ng\" (UID: \"79ac5b64-e35a-447e-b1d0-fc6e770c592c\") " pod="calico-system/calico-node-gm4ng" Oct 9 01:04:31.076503 kubelet[3058]: I1009 01:04:31.076414 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/79ac5b64-e35a-447e-b1d0-fc6e770c592c-cni-net-dir\") pod \"calico-node-gm4ng\" (UID: \"79ac5b64-e35a-447e-b1d0-fc6e770c592c\") " pod="calico-system/calico-node-gm4ng" Oct 9 01:04:31.076853 kubelet[3058]: I1009 01:04:31.076439 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snwzp\" (UniqueName: \"kubernetes.io/projected/79ac5b64-e35a-447e-b1d0-fc6e770c592c-kube-api-access-snwzp\") pod \"calico-node-gm4ng\" (UID: \"79ac5b64-e35a-447e-b1d0-fc6e770c592c\") " pod="calico-system/calico-node-gm4ng" Oct 9 01:04:31.076853 kubelet[3058]: I1009 01:04:31.076463 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/79ac5b64-e35a-447e-b1d0-fc6e770c592c-cni-bin-dir\") pod \"calico-node-gm4ng\" (UID: \"79ac5b64-e35a-447e-b1d0-fc6e770c592c\") " pod="calico-system/calico-node-gm4ng" Oct 9 01:04:31.117741 containerd[1589]: time="2024-10-09T01:04:31.117675949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b45c46b5d-cfkrf,Uid:3cb1d62a-2af6-46cf-8dec-35d079684d8d,Namespace:calico-system,Attempt:0,}" Oct 9 01:04:31.163353 containerd[1589]: time="2024-10-09T01:04:31.162900183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:04:31.163353 containerd[1589]: time="2024-10-09T01:04:31.162990823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:04:31.163353 containerd[1589]: time="2024-10-09T01:04:31.163007183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:31.163353 containerd[1589]: time="2024-10-09T01:04:31.163106664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:31.177712 kubelet[3058]: I1009 01:04:31.177676 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzrps\" (UniqueName: \"kubernetes.io/projected/962372e4-80c6-4e39-87f0-b400601741aa-kube-api-access-dzrps\") pod \"csi-node-driver-8w6fh\" (UID: \"962372e4-80c6-4e39-87f0-b400601741aa\") " pod="calico-system/csi-node-driver-8w6fh" Oct 9 01:04:31.177848 kubelet[3058]: I1009 01:04:31.177755 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/962372e4-80c6-4e39-87f0-b400601741aa-varrun\") pod \"csi-node-driver-8w6fh\" (UID: \"962372e4-80c6-4e39-87f0-b400601741aa\") " pod="calico-system/csi-node-driver-8w6fh" Oct 9 01:04:31.177848 kubelet[3058]: I1009 01:04:31.177820 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/962372e4-80c6-4e39-87f0-b400601741aa-socket-dir\") pod \"csi-node-driver-8w6fh\" (UID: \"962372e4-80c6-4e39-87f0-b400601741aa\") " pod="calico-system/csi-node-driver-8w6fh" Oct 9 01:04:31.177908 kubelet[3058]: I1009 01:04:31.177901 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/962372e4-80c6-4e39-87f0-b400601741aa-kubelet-dir\") pod \"csi-node-driver-8w6fh\" (UID: \"962372e4-80c6-4e39-87f0-b400601741aa\") " pod="calico-system/csi-node-driver-8w6fh" Oct 9 01:04:31.178287 kubelet[3058]: I1009 01:04:31.177947 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/962372e4-80c6-4e39-87f0-b400601741aa-registration-dir\") pod \"csi-node-driver-8w6fh\" (UID: \"962372e4-80c6-4e39-87f0-b400601741aa\") " pod="calico-system/csi-node-driver-8w6fh" Oct 9 01:04:31.186068 kubelet[3058]: E1009 01:04:31.185589 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.186068 kubelet[3058]: W1009 01:04:31.186070 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.187885 kubelet[3058]: E1009 01:04:31.186449 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.188423 kubelet[3058]: E1009 01:04:31.188397 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.188480 kubelet[3058]: W1009 01:04:31.188424 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.188480 kubelet[3058]: E1009 01:04:31.188451 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:04:31.190741 kubelet[3058]: E1009 01:04:31.190712 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.190741 kubelet[3058]: W1009 01:04:31.190736 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.190861 kubelet[3058]: E1009 01:04:31.190757 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.208310 kubelet[3058]: E1009 01:04:31.208249 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.208310 kubelet[3058]: W1009 01:04:31.208272 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.210372 kubelet[3058]: E1009 01:04:31.208408 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.222794 containerd[1589]: time="2024-10-09T01:04:31.222719414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gm4ng,Uid:79ac5b64-e35a-447e-b1d0-fc6e770c592c,Namespace:calico-system,Attempt:0,}" Oct 9 01:04:31.270403 containerd[1589]: time="2024-10-09T01:04:31.270315455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:04:31.270569 containerd[1589]: time="2024-10-09T01:04:31.270396735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:04:31.270569 containerd[1589]: time="2024-10-09T01:04:31.270414215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:31.270569 containerd[1589]: time="2024-10-09T01:04:31.270514935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:31.279665 kubelet[3058]: E1009 01:04:31.279624 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.279665 kubelet[3058]: W1009 01:04:31.279644 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.279665 kubelet[3058]: E1009 01:04:31.279666 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:04:31.280864 kubelet[3058]: E1009 01:04:31.280777 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.280864 kubelet[3058]: W1009 01:04:31.280795 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.280864 kubelet[3058]: E1009 01:04:31.280816 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.281291 kubelet[3058]: E1009 01:04:31.281271 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.284101 kubelet[3058]: W1009 01:04:31.281302 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.284101 kubelet[3058]: E1009 01:04:31.281437 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.284101 kubelet[3058]: E1009 01:04:31.281657 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.284101 kubelet[3058]: W1009 01:04:31.281668 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.284101 kubelet[3058]: E1009 01:04:31.281873 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.284101 kubelet[3058]: W1009 01:04:31.281882 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.284101 kubelet[3058]: E1009 01:04:31.281895 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.284101 kubelet[3058]: E1009 01:04:31.282094 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.284101 kubelet[3058]: W1009 01:04:31.282102 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.284101 kubelet[3058]: E1009 01:04:31.282131 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:04:31.284835 kubelet[3058]: E1009 01:04:31.282282 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.284835 kubelet[3058]: W1009 01:04:31.282290 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.284835 kubelet[3058]: E1009 01:04:31.282300 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.284835 kubelet[3058]: E1009 01:04:31.282494 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.284835 kubelet[3058]: W1009 01:04:31.282516 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.284835 kubelet[3058]: E1009 01:04:31.282529 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.284835 kubelet[3058]: E1009 01:04:31.282723 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.284835 kubelet[3058]: W1009 01:04:31.282732 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.284835 kubelet[3058]: E1009 01:04:31.282743 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.284835 kubelet[3058]: E1009 01:04:31.282962 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.285101 kubelet[3058]: W1009 01:04:31.282971 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.285101 kubelet[3058]: E1009 01:04:31.282981 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.285101 kubelet[3058]: E1009 01:04:31.283242 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.285101 kubelet[3058]: W1009 01:04:31.283253 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.285101 kubelet[3058]: E1009 01:04:31.283282 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:04:31.285101 kubelet[3058]: E1009 01:04:31.283442 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.285101 kubelet[3058]: W1009 01:04:31.283450 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.285101 kubelet[3058]: E1009 01:04:31.283461 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.285101 kubelet[3058]: E1009 01:04:31.283611 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.285101 kubelet[3058]: W1009 01:04:31.283619 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.285319 kubelet[3058]: E1009 01:04:31.283629 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.285319 kubelet[3058]: E1009 01:04:31.283860 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.285319 kubelet[3058]: W1009 01:04:31.283874 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.285319 kubelet[3058]: E1009 01:04:31.283886 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.285319 kubelet[3058]: E1009 01:04:31.283908 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.287039 kubelet[3058]: E1009 01:04:31.286787 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.287039 kubelet[3058]: W1009 01:04:31.286813 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.287039 kubelet[3058]: E1009 01:04:31.286837 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:04:31.290007 containerd[1589]: time="2024-10-09T01:04:31.289959344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b45c46b5d-cfkrf,Uid:3cb1d62a-2af6-46cf-8dec-35d079684d8d,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd1cc756df9f61d25145ba461567e126abcfdb08d5cc86570a0bfe5842753b54\"" Oct 9 01:04:31.291558 kubelet[3058]: E1009 01:04:31.291536 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.291558 kubelet[3058]: W1009 01:04:31.291557 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.291887 kubelet[3058]: E1009 01:04:31.291639 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.297190 kubelet[3058]: E1009 01:04:31.297032 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.297190 kubelet[3058]: W1009 01:04:31.297056 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.297448 kubelet[3058]: E1009 01:04:31.297347 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.297448 kubelet[3058]: W1009 01:04:31.297356 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.297581 kubelet[3058]: E1009 01:04:31.297520 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.297581 kubelet[3058]: E1009 01:04:31.297550 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.298097 kubelet[3058]: E1009 01:04:31.297918 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.298097 kubelet[3058]: W1009 01:04:31.297940 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.298637 kubelet[3058]: E1009 01:04:31.298483 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:04:31.298637 kubelet[3058]: E1009 01:04:31.298579 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.298637 kubelet[3058]: W1009 01:04:31.298587 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.298637 kubelet[3058]: E1009 01:04:31.298602 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.300528 kubelet[3058]: E1009 01:04:31.300204 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.300528 kubelet[3058]: W1009 01:04:31.300218 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.300834 kubelet[3058]: E1009 01:04:31.300697 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.300834 kubelet[3058]: W1009 01:04:31.300709 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.301218 kubelet[3058]: E1009 01:04:31.301125 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.301655 kubelet[3058]: E1009 01:04:31.301306 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.301880 kubelet[3058]: E1009 01:04:31.301825 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.301880 kubelet[3058]: W1009 01:04:31.301836 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.302298 kubelet[3058]: E1009 01:04:31.302233 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.304191 kubelet[3058]: E1009 01:04:31.303822 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.304191 kubelet[3058]: W1009 01:04:31.303837 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.304309 containerd[1589]: time="2024-10-09T01:04:31.304050580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 01:04:31.305941 kubelet[3058]: E1009 01:04:31.305591 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:04:31.306851 kubelet[3058]: E1009 01:04:31.306631 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.306851 kubelet[3058]: W1009 01:04:31.306820 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.307319 kubelet[3058]: E1009 01:04:31.306915 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.328937 kubelet[3058]: E1009 01:04:31.327054 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:31.329523 kubelet[3058]: W1009 01:04:31.329498 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:31.329970 kubelet[3058]: E1009 01:04:31.329622 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:31.376258 containerd[1589]: time="2024-10-09T01:04:31.376185362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gm4ng,Uid:79ac5b64-e35a-447e-b1d0-fc6e770c592c,Namespace:calico-system,Attempt:0,} returns sandbox id \"a1d9cf9f8172d542a11772a537c35578423c3f1afaf0ae366c0e376edac4f0d1\"" Oct 9 01:04:31.985366 systemd[1]: run-containerd-runc-k8s.io-cd1cc756df9f61d25145ba461567e126abcfdb08d5cc86570a0bfe5842753b54-runc.taYjN0.mount: Deactivated successfully. 
Oct 9 01:04:33.286176 kubelet[3058]: E1009 01:04:33.285445 3058 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8w6fh" podUID="962372e4-80c6-4e39-87f0-b400601741aa" Oct 9 01:04:33.696199 containerd[1589]: time="2024-10-09T01:04:33.695628826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:33.699179 containerd[1589]: time="2024-10-09T01:04:33.699120515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Oct 9 01:04:33.704283 containerd[1589]: time="2024-10-09T01:04:33.704104207Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:33.709327 containerd[1589]: time="2024-10-09T01:04:33.708746539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:33.712024 containerd[1589]: time="2024-10-09T01:04:33.709862581Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 2.405768721s" Oct 9 01:04:33.712024 containerd[1589]: time="2024-10-09T01:04:33.710204142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Oct 9 01:04:33.713034 containerd[1589]: time="2024-10-09T01:04:33.713008389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 01:04:33.735624 containerd[1589]: time="2024-10-09T01:04:33.735590486Z" level=info msg="CreateContainer within sandbox \"cd1cc756df9f61d25145ba461567e126abcfdb08d5cc86570a0bfe5842753b54\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 01:04:33.756970 containerd[1589]: time="2024-10-09T01:04:33.756895219Z" level=info msg="CreateContainer within sandbox \"cd1cc756df9f61d25145ba461567e126abcfdb08d5cc86570a0bfe5842753b54\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"37ca49de4c98649cf2b1d059f08972e2b5e0539537c8370a0ce5d4625e5d01fc\"" Oct 9 01:04:33.758736 containerd[1589]: time="2024-10-09T01:04:33.758699983Z" level=info msg="StartContainer for \"37ca49de4c98649cf2b1d059f08972e2b5e0539537c8370a0ce5d4625e5d01fc\"" Oct 9 01:04:33.827954 containerd[1589]: time="2024-10-09T01:04:33.827598395Z" level=info msg="StartContainer for \"37ca49de4c98649cf2b1d059f08972e2b5e0539537c8370a0ce5d4625e5d01fc\" returns successfully" Oct 9 01:04:33.889100 update_engine[1569]: I20241009 01:04:33.889000 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 9 01:04:33.889437 update_engine[1569]: I20241009 01:04:33.889254 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 9 01:04:33.889437 update_engine[1569]: I20241009 01:04:33.889423 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 
seconds. Oct 9 01:04:33.891996 update_engine[1569]: E20241009 01:04:33.891957 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 9 01:04:33.892096 update_engine[1569]: I20241009 01:04:33.892022 1569 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Oct 9 01:04:33.892096 update_engine[1569]: I20241009 01:04:33.892030 1569 omaha_request_action.cc:617] Omaha request response: Oct 9 01:04:33.892143 update_engine[1569]: E20241009 01:04:33.892117 1569 omaha_request_action.cc:636] Omaha request network transfer failed. Oct 9 01:04:33.892143 update_engine[1569]: I20241009 01:04:33.892134 1569 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Oct 9 01:04:33.892143 update_engine[1569]: I20241009 01:04:33.892139 1569 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 9 01:04:33.892205 update_engine[1569]: I20241009 01:04:33.892144 1569 update_attempter.cc:306] Processing Done. Oct 9 01:04:33.892205 update_engine[1569]: E20241009 01:04:33.892156 1569 update_attempter.cc:619] Update failed. Oct 9 01:04:33.892205 update_engine[1569]: I20241009 01:04:33.892161 1569 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Oct 9 01:04:33.892205 update_engine[1569]: I20241009 01:04:33.892165 1569 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Oct 9 01:04:33.892205 update_engine[1569]: I20241009 01:04:33.892172 1569 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Oct 9 01:04:33.892356 update_engine[1569]: I20241009 01:04:33.892236 1569 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 9 01:04:33.892356 update_engine[1569]: I20241009 01:04:33.892259 1569 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 9 01:04:33.892356 update_engine[1569]: I20241009 01:04:33.892263 1569 omaha_request_action.cc:272] Request: Oct 9 01:04:33.892356 update_engine[1569]: Oct 9 01:04:33.892356 update_engine[1569]: Oct 9 01:04:33.892356 update_engine[1569]: Oct 9 01:04:33.892356 update_engine[1569]: Oct 9 01:04:33.892356 update_engine[1569]: Oct 9 01:04:33.892356 update_engine[1569]: Oct 9 01:04:33.892356 update_engine[1569]: I20241009 01:04:33.892269 1569 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 9 01:04:33.892884 locksmithd[1613]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Oct 9 01:04:33.893916 update_engine[1569]: I20241009 01:04:33.893409 1569 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 9 01:04:33.893916 update_engine[1569]: I20241009 01:04:33.893594 1569 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 9 01:04:33.894624 update_engine[1569]: E20241009 01:04:33.894580 1569 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 9 01:04:33.894672 update_engine[1569]: I20241009 01:04:33.894639 1569 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Oct 9 01:04:33.894672 update_engine[1569]: I20241009 01:04:33.894647 1569 omaha_request_action.cc:617] Omaha request response: Oct 9 01:04:33.894672 update_engine[1569]: I20241009 01:04:33.894654 1569 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 9 01:04:33.894672 update_engine[1569]: I20241009 01:04:33.894658 1569 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 9 01:04:33.894672 update_engine[1569]: I20241009 01:04:33.894663 1569 update_attempter.cc:306] Processing Done. Oct 9 01:04:33.894672 update_engine[1569]: I20241009 01:04:33.894670 1569 update_attempter.cc:310] Error event sent. Oct 9 01:04:33.895262 update_engine[1569]: I20241009 01:04:33.894677 1569 update_check_scheduler.cc:74] Next update check in 45m57s Oct 9 01:04:33.895291 locksmithd[1613]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 9 01:04:34.499439 kubelet[3058]: E1009 01:04:34.499281 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.499439 kubelet[3058]: W1009 01:04:34.499305 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.499439 kubelet[3058]: E1009 01:04:34.499331 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.500362 kubelet[3058]: E1009 01:04:34.500316 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.500362 kubelet[3058]: W1009 01:04:34.500347 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.500362 kubelet[3058]: E1009 01:04:34.500365 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.501684 kubelet[3058]: E1009 01:04:34.501637 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.501684 kubelet[3058]: W1009 01:04:34.501660 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.501684 kubelet[3058]: E1009 01:04:34.501687 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:04:34.502347 kubelet[3058]: E1009 01:04:34.502133 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.502347 kubelet[3058]: W1009 01:04:34.502323 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.502957 kubelet[3058]: E1009 01:04:34.502883 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.504379 kubelet[3058]: E1009 01:04:34.504218 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.504379 kubelet[3058]: W1009 01:04:34.504240 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.504379 kubelet[3058]: E1009 01:04:34.504257 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.504987 kubelet[3058]: E1009 01:04:34.504799 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.504987 kubelet[3058]: W1009 01:04:34.504819 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.504987 kubelet[3058]: E1009 01:04:34.504834 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.505343 kubelet[3058]: E1009 01:04:34.505148 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.505343 kubelet[3058]: W1009 01:04:34.505179 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.505343 kubelet[3058]: E1009 01:04:34.505195 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.505445 kubelet[3058]: E1009 01:04:34.505413 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.505445 kubelet[3058]: W1009 01:04:34.505422 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.505445 kubelet[3058]: E1009 01:04:34.505433 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:04:34.505772 kubelet[3058]: E1009 01:04:34.505755 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.505772 kubelet[3058]: W1009 01:04:34.505771 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.505891 kubelet[3058]: E1009 01:04:34.505784 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.507330 kubelet[3058]: E1009 01:04:34.507305 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.507330 kubelet[3058]: W1009 01:04:34.507330 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.507600 kubelet[3058]: E1009 01:04:34.507349 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.508224 kubelet[3058]: E1009 01:04:34.508173 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.508224 kubelet[3058]: W1009 01:04:34.508193 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.508224 kubelet[3058]: E1009 01:04:34.508208 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.508789 kubelet[3058]: E1009 01:04:34.508473 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.508789 kubelet[3058]: W1009 01:04:34.508485 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.508789 kubelet[3058]: E1009 01:04:34.508501 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.509093 kubelet[3058]: E1009 01:04:34.509078 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.509093 kubelet[3058]: W1009 01:04:34.509092 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.509183 kubelet[3058]: E1009 01:04:34.509105 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:04:34.509645 kubelet[3058]: E1009 01:04:34.509235 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.509645 kubelet[3058]: W1009 01:04:34.509247 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.509645 kubelet[3058]: E1009 01:04:34.509257 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.509645 kubelet[3058]: E1009 01:04:34.509388 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.509645 kubelet[3058]: W1009 01:04:34.509395 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.509645 kubelet[3058]: E1009 01:04:34.509404 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.509645 kubelet[3058]: E1009 01:04:34.509628 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.509645 kubelet[3058]: W1009 01:04:34.509649 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.510561 kubelet[3058]: E1009 01:04:34.509660 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.510561 kubelet[3058]: E1009 01:04:34.509867 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.510561 kubelet[3058]: W1009 01:04:34.509874 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.510561 kubelet[3058]: E1009 01:04:34.509895 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.510561 kubelet[3058]: E1009 01:04:34.510123 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.510561 kubelet[3058]: W1009 01:04:34.510133 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.510561 kubelet[3058]: E1009 01:04:34.510147 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:04:34.510561 kubelet[3058]: E1009 01:04:34.510302 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.510561 kubelet[3058]: W1009 01:04:34.510310 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.510561 kubelet[3058]: E1009 01:04:34.510328 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.512083 kubelet[3058]: E1009 01:04:34.510461 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.512083 kubelet[3058]: W1009 01:04:34.510468 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.512083 kubelet[3058]: E1009 01:04:34.510485 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.512083 kubelet[3058]: E1009 01:04:34.510722 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.512083 kubelet[3058]: W1009 01:04:34.510731 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.512083 kubelet[3058]: E1009 01:04:34.510785 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.512083 kubelet[3058]: E1009 01:04:34.511127 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.512083 kubelet[3058]: W1009 01:04:34.511135 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.512083 kubelet[3058]: E1009 01:04:34.511213 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.512083 kubelet[3058]: E1009 01:04:34.511296 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.512312 kubelet[3058]: W1009 01:04:34.511303 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.512312 kubelet[3058]: E1009 01:04:34.511420 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:04:34.512312 kubelet[3058]: E1009 01:04:34.511596 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.512312 kubelet[3058]: W1009 01:04:34.511604 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.512312 kubelet[3058]: E1009 01:04:34.511619 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.512312 kubelet[3058]: E1009 01:04:34.511779 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.512312 kubelet[3058]: W1009 01:04:34.511786 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.512312 kubelet[3058]: E1009 01:04:34.511805 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.512312 kubelet[3058]: E1009 01:04:34.511955 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.512312 kubelet[3058]: W1009 01:04:34.511964 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.512554 kubelet[3058]: E1009 01:04:34.511983 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.512554 kubelet[3058]: E1009 01:04:34.512317 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.512554 kubelet[3058]: W1009 01:04:34.512326 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.512554 kubelet[3058]: E1009 01:04:34.512351 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.514582 kubelet[3058]: E1009 01:04:34.512703 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.514582 kubelet[3058]: W1009 01:04:34.512720 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.514582 kubelet[3058]: E1009 01:04:34.512798 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:04:34.514582 kubelet[3058]: E1009 01:04:34.512900 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.514582 kubelet[3058]: W1009 01:04:34.512906 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.514582 kubelet[3058]: E1009 01:04:34.512917 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.514582 kubelet[3058]: E1009 01:04:34.513129 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.514582 kubelet[3058]: W1009 01:04:34.513137 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.514582 kubelet[3058]: E1009 01:04:34.513146 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.514582 kubelet[3058]: E1009 01:04:34.513274 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.515277 kubelet[3058]: W1009 01:04:34.513280 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.515277 kubelet[3058]: E1009 01:04:34.513288 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.515277 kubelet[3058]: E1009 01:04:34.513422 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.515277 kubelet[3058]: W1009 01:04:34.513429 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.515277 kubelet[3058]: E1009 01:04:34.513440 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:34.515277 kubelet[3058]: E1009 01:04:34.514034 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:34.515277 kubelet[3058]: W1009 01:04:34.514044 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:34.515277 kubelet[3058]: E1009 01:04:34.514063 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:04:34.716732 systemd[1]: run-containerd-runc-k8s.io-37ca49de4c98649cf2b1d059f08972e2b5e0539537c8370a0ce5d4625e5d01fc-runc.loUxPw.mount: Deactivated successfully. Oct 9 01:04:35.282879 kubelet[3058]: E1009 01:04:35.282792 3058 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8w6fh" podUID="962372e4-80c6-4e39-87f0-b400601741aa" Oct 9 01:04:35.339538 containerd[1589]: time="2024-10-09T01:04:35.339178221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:35.343016 containerd[1589]: time="2024-10-09T01:04:35.342959310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Oct 9 01:04:35.343992 containerd[1589]: time="2024-10-09T01:04:35.343959912Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:35.349459 containerd[1589]: time="2024-10-09T01:04:35.349348486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:35.350482 containerd[1589]: time="2024-10-09T01:04:35.350089447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.636764897s" Oct 9 01:04:35.350482 containerd[1589]: time="2024-10-09T01:04:35.350124927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Oct 9 01:04:35.364059 containerd[1589]: time="2024-10-09T01:04:35.364016482Z" level=info msg="CreateContainer within sandbox \"a1d9cf9f8172d542a11772a537c35578423c3f1afaf0ae366c0e376edac4f0d1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 01:04:35.384899 containerd[1589]: time="2024-10-09T01:04:35.384688893Z" level=info msg="CreateContainer within sandbox \"a1d9cf9f8172d542a11772a537c35578423c3f1afaf0ae366c0e376edac4f0d1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4b080343d9423d6d7bc2b1d3236478d56dbc6ae20a137c0715896804dbed394b\"" Oct 9 01:04:35.385646 containerd[1589]: time="2024-10-09T01:04:35.385602295Z" level=info msg="StartContainer for \"4b080343d9423d6d7bc2b1d3236478d56dbc6ae20a137c0715896804dbed394b\"" Oct 9 01:04:35.412303 kubelet[3058]: I1009 01:04:35.412123 3058 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:04:35.416720 kubelet[3058]: E1009 01:04:35.416671 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.416720 kubelet[3058]: W1009 01:04:35.416710 3058 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.416720 kubelet[3058]: E1009 01:04:35.416731 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.417174 kubelet[3058]: E1009 01:04:35.417158 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.417174 kubelet[3058]: W1009 01:04:35.417173 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.417372 kubelet[3058]: E1009 01:04:35.417187 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.417409 kubelet[3058]: E1009 01:04:35.417373 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.417409 kubelet[3058]: W1009 01:04:35.417382 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.417409 kubelet[3058]: E1009 01:04:35.417393 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.417669 kubelet[3058]: E1009 01:04:35.417606 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.417669 kubelet[3058]: W1009 01:04:35.417619 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.417669 kubelet[3058]: E1009 01:04:35.417653 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.417949 kubelet[3058]: E1009 01:04:35.417908 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.417949 kubelet[3058]: W1009 01:04:35.417918 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.417949 kubelet[3058]: E1009 01:04:35.417948 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:04:35.418283 kubelet[3058]: E1009 01:04:35.418138 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.418283 kubelet[3058]: W1009 01:04:35.418150 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.418283 kubelet[3058]: E1009 01:04:35.418162 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.418417 kubelet[3058]: E1009 01:04:35.418374 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.418417 kubelet[3058]: W1009 01:04:35.418390 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.418417 kubelet[3058]: E1009 01:04:35.418401 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.418681 kubelet[3058]: E1009 01:04:35.418596 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.418681 kubelet[3058]: W1009 01:04:35.418624 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.418681 kubelet[3058]: E1009 01:04:35.418637 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.418901 kubelet[3058]: E1009 01:04:35.418853 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.418901 kubelet[3058]: W1009 01:04:35.418862 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.418901 kubelet[3058]: E1009 01:04:35.418875 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.419142 kubelet[3058]: E1009 01:04:35.419129 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.419142 kubelet[3058]: W1009 01:04:35.419141 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.419230 kubelet[3058]: E1009 01:04:35.419152 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:04:35.419486 kubelet[3058]: E1009 01:04:35.419435 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.419486 kubelet[3058]: W1009 01:04:35.419464 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.419554 kubelet[3058]: E1009 01:04:35.419492 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.419686 kubelet[3058]: E1009 01:04:35.419674 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.419686 kubelet[3058]: W1009 01:04:35.419686 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.419757 kubelet[3058]: E1009 01:04:35.419699 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.420055 kubelet[3058]: E1009 01:04:35.420040 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.420055 kubelet[3058]: W1009 01:04:35.420053 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.420055 kubelet[3058]: E1009 01:04:35.420064 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.420636 kubelet[3058]: E1009 01:04:35.420283 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.420636 kubelet[3058]: W1009 01:04:35.420310 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.420636 kubelet[3058]: E1009 01:04:35.420323 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.420636 kubelet[3058]: E1009 01:04:35.420532 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.420636 kubelet[3058]: W1009 01:04:35.420542 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.420636 kubelet[3058]: E1009 01:04:35.420554 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:04:35.420856 kubelet[3058]: E1009 01:04:35.420776 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.420856 kubelet[3058]: W1009 01:04:35.420785 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.420856 kubelet[3058]: E1009 01:04:35.420795 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.421045 kubelet[3058]: E1009 01:04:35.420997 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.421045 kubelet[3058]: W1009 01:04:35.421036 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.421141 kubelet[3058]: E1009 01:04:35.421057 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.421267 kubelet[3058]: E1009 01:04:35.421250 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.421267 kubelet[3058]: W1009 01:04:35.421263 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.421267 kubelet[3058]: E1009 01:04:35.421278 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.421454 kubelet[3058]: E1009 01:04:35.421442 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.421454 kubelet[3058]: W1009 01:04:35.421452 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.421698 kubelet[3058]: E1009 01:04:35.421473 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.421698 kubelet[3058]: E1009 01:04:35.421622 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.421698 kubelet[3058]: W1009 01:04:35.421630 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.421698 kubelet[3058]: E1009 01:04:35.421645 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:04:35.421837 kubelet[3058]: E1009 01:04:35.421766 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.421837 kubelet[3058]: W1009 01:04:35.421772 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.421837 kubelet[3058]: E1009 01:04:35.421790 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.421984 kubelet[3058]: E1009 01:04:35.421971 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.421984 kubelet[3058]: W1009 01:04:35.421981 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.422059 kubelet[3058]: E1009 01:04:35.422000 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.422430 kubelet[3058]: E1009 01:04:35.422215 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.422430 kubelet[3058]: W1009 01:04:35.422230 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.422430 kubelet[3058]: E1009 01:04:35.422242 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.422609 kubelet[3058]: E1009 01:04:35.422437 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.422609 kubelet[3058]: W1009 01:04:35.422446 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.422609 kubelet[3058]: E1009 01:04:35.422457 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.424303 kubelet[3058]: E1009 01:04:35.423904 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.424303 kubelet[3058]: W1009 01:04:35.423936 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.424303 kubelet[3058]: E1009 01:04:35.423950 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:04:35.424303 kubelet[3058]: E1009 01:04:35.424152 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.424303 kubelet[3058]: W1009 01:04:35.424161 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.425817 kubelet[3058]: E1009 01:04:35.424625 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.425817 kubelet[3058]: E1009 01:04:35.424764 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.425817 kubelet[3058]: W1009 01:04:35.424775 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.425817 kubelet[3058]: E1009 01:04:35.424789 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.436666 kubelet[3058]: E1009 01:04:35.436315 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.436817 kubelet[3058]: W1009 01:04:35.436799 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.436893 kubelet[3058]: E1009 01:04:35.436883 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.437232 kubelet[3058]: E1009 01:04:35.437218 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.437322 kubelet[3058]: W1009 01:04:35.437310 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.437379 kubelet[3058]: E1009 01:04:35.437371 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.437817 kubelet[3058]: E1009 01:04:35.437800 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.438156 kubelet[3058]: W1009 01:04:35.438133 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.439030 kubelet[3058]: E1009 01:04:35.438829 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:04:35.439254 kubelet[3058]: E1009 01:04:35.439241 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.439343 kubelet[3058]: W1009 01:04:35.439330 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.439410 kubelet[3058]: E1009 01:04:35.439400 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.440322 kubelet[3058]: E1009 01:04:35.440306 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.440421 kubelet[3058]: W1009 01:04:35.440399 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.440480 kubelet[3058]: E1009 01:04:35.440471 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.443572 kubelet[3058]: E1009 01:04:35.443550 3058 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:04:35.443695 kubelet[3058]: W1009 01:04:35.443681 3058 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:04:35.443962 kubelet[3058]: E1009 01:04:35.443745 3058 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:04:35.473322 containerd[1589]: time="2024-10-09T01:04:35.473277071Z" level=info msg="StartContainer for \"4b080343d9423d6d7bc2b1d3236478d56dbc6ae20a137c0715896804dbed394b\" returns successfully" Oct 9 01:04:35.529720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b080343d9423d6d7bc2b1d3236478d56dbc6ae20a137c0715896804dbed394b-rootfs.mount: Deactivated successfully. 
Oct 9 01:04:35.624315 containerd[1589]: time="2024-10-09T01:04:35.624244403Z" level=info msg="shim disconnected" id=4b080343d9423d6d7bc2b1d3236478d56dbc6ae20a137c0715896804dbed394b namespace=k8s.io Oct 9 01:04:35.625972 containerd[1589]: time="2024-10-09T01:04:35.624704284Z" level=warning msg="cleaning up after shim disconnected" id=4b080343d9423d6d7bc2b1d3236478d56dbc6ae20a137c0715896804dbed394b namespace=k8s.io Oct 9 01:04:35.625972 containerd[1589]: time="2024-10-09T01:04:35.624734884Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:04:36.420497 containerd[1589]: time="2024-10-09T01:04:36.419248916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 01:04:36.456977 kubelet[3058]: I1009 01:04:36.456942 3058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6b45c46b5d-cfkrf" podStartSLOduration=4.043855268 podStartE2EDuration="6.455633725s" podCreationTimestamp="2024-10-09 01:04:30 +0000 UTC" firstStartedPulling="2024-10-09 01:04:31.299377488 +0000 UTC m=+20.170005283" lastFinishedPulling="2024-10-09 01:04:33.711155905 +0000 UTC m=+22.581783740" observedRunningTime="2024-10-09 01:04:34.435288345 +0000 UTC m=+23.305916180" watchObservedRunningTime="2024-10-09 01:04:36.455633725 +0000 UTC m=+25.326261560" Oct 9 01:04:37.283100 kubelet[3058]: E1009 01:04:37.282782 3058 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8w6fh" podUID="962372e4-80c6-4e39-87f0-b400601741aa" Oct 9 01:04:39.286263 kubelet[3058]: E1009 01:04:39.285085 3058 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8w6fh" podUID="962372e4-80c6-4e39-87f0-b400601741aa" Oct 9 01:04:40.587058 containerd[1589]: time="2024-10-09T01:04:40.586938809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:40.588777 containerd[1589]: time="2024-10-09T01:04:40.588157172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Oct 9 01:04:40.588777 containerd[1589]: time="2024-10-09T01:04:40.588561173Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:40.590773 containerd[1589]: time="2024-10-09T01:04:40.590705458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:40.593690 containerd[1589]: time="2024-10-09T01:04:40.591692180Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 4.171098741s" Oct 9 01:04:40.593690 containerd[1589]: time="2024-10-09T01:04:40.591728101Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Oct 9 01:04:40.596147 containerd[1589]: time="2024-10-09T01:04:40.596110071Z" level=info msg="CreateContainer within sandbox \"a1d9cf9f8172d542a11772a537c35578423c3f1afaf0ae366c0e376edac4f0d1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 9 01:04:40.611886 containerd[1589]: time="2024-10-09T01:04:40.611842109Z" level=info msg="CreateContainer within sandbox \"a1d9cf9f8172d542a11772a537c35578423c3f1afaf0ae366c0e376edac4f0d1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d0cdd049127e9a4a9f1d35f110e0b1c9dd16a7478a7d501307e7da765a3cfabd\"" Oct 9 01:04:40.614194 containerd[1589]: time="2024-10-09T01:04:40.614127314Z" level=info msg="StartContainer for \"d0cdd049127e9a4a9f1d35f110e0b1c9dd16a7478a7d501307e7da765a3cfabd\"" Oct 9 01:04:40.678574 containerd[1589]: time="2024-10-09T01:04:40.678461988Z" level=info msg="StartContainer for \"d0cdd049127e9a4a9f1d35f110e0b1c9dd16a7478a7d501307e7da765a3cfabd\" returns successfully" Oct 9 01:04:41.129695 containerd[1589]: time="2024-10-09T01:04:41.129636627Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 01:04:41.139618 kubelet[3058]: I1009 01:04:41.139556 3058 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 01:04:41.162892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0cdd049127e9a4a9f1d35f110e0b1c9dd16a7478a7d501307e7da765a3cfabd-rootfs.mount: Deactivated successfully. 
Oct 9 01:04:41.177959 kubelet[3058]: I1009 01:04:41.172114 3058 topology_manager.go:215] "Topology Admit Handler" podUID="7aafddbc-ac54-42e6-b889-af97eef6990a" podNamespace="kube-system" podName="coredns-76f75df574-9p5pp" Oct 9 01:04:41.180889 kubelet[3058]: I1009 01:04:41.180272 3058 topology_manager.go:215] "Topology Admit Handler" podUID="50252bd4-6374-4684-812d-2492dbd150ac" podNamespace="kube-system" podName="coredns-76f75df574-9ssv4" Oct 9 01:04:41.180889 kubelet[3058]: I1009 01:04:41.180449 3058 topology_manager.go:215] "Topology Admit Handler" podUID="bc7f5ff4-df0c-49c4-ac5a-fd523d493fc3" podNamespace="calico-system" podName="calico-kube-controllers-77fc9fbf75-p4vjf" Oct 9 01:04:41.268183 kubelet[3058]: I1009 01:04:41.268112 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wplq6\" (UniqueName: \"kubernetes.io/projected/bc7f5ff4-df0c-49c4-ac5a-fd523d493fc3-kube-api-access-wplq6\") pod \"calico-kube-controllers-77fc9fbf75-p4vjf\" (UID: \"bc7f5ff4-df0c-49c4-ac5a-fd523d493fc3\") " pod="calico-system/calico-kube-controllers-77fc9fbf75-p4vjf" Oct 9 01:04:41.268183 kubelet[3058]: I1009 01:04:41.268163 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aafddbc-ac54-42e6-b889-af97eef6990a-config-volume\") pod \"coredns-76f75df574-9p5pp\" (UID: \"7aafddbc-ac54-42e6-b889-af97eef6990a\") " pod="kube-system/coredns-76f75df574-9p5pp" Oct 9 01:04:41.268183 kubelet[3058]: I1009 01:04:41.268190 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50252bd4-6374-4684-812d-2492dbd150ac-config-volume\") pod \"coredns-76f75df574-9ssv4\" (UID: \"50252bd4-6374-4684-812d-2492dbd150ac\") " pod="kube-system/coredns-76f75df574-9ssv4" Oct 9 01:04:41.269377 kubelet[3058]: I1009 01:04:41.268210 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8wbp\" (UniqueName: \"kubernetes.io/projected/50252bd4-6374-4684-812d-2492dbd150ac-kube-api-access-g8wbp\") pod \"coredns-76f75df574-9ssv4\" (UID: \"50252bd4-6374-4684-812d-2492dbd150ac\") " pod="kube-system/coredns-76f75df574-9ssv4" Oct 9 01:04:41.269377 kubelet[3058]: I1009 01:04:41.268232 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc7f5ff4-df0c-49c4-ac5a-fd523d493fc3-tigera-ca-bundle\") pod \"calico-kube-controllers-77fc9fbf75-p4vjf\" (UID: \"bc7f5ff4-df0c-49c4-ac5a-fd523d493fc3\") " pod="calico-system/calico-kube-controllers-77fc9fbf75-p4vjf" Oct 9 01:04:41.269377 kubelet[3058]: I1009 01:04:41.268258 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l9lw\" (UniqueName: \"kubernetes.io/projected/7aafddbc-ac54-42e6-b889-af97eef6990a-kube-api-access-7l9lw\") pod \"coredns-76f75df574-9p5pp\" (UID: \"7aafddbc-ac54-42e6-b889-af97eef6990a\") " pod="kube-system/coredns-76f75df574-9p5pp" Oct 9 01:04:41.277092 containerd[1589]: time="2024-10-09T01:04:41.277013418Z" level=info msg="shim disconnected" id=d0cdd049127e9a4a9f1d35f110e0b1c9dd16a7478a7d501307e7da765a3cfabd namespace=k8s.io Oct 9 01:04:41.277092 containerd[1589]: time="2024-10-09T01:04:41.277086179Z" level=warning msg="cleaning up after shim disconnected" 
id=d0cdd049127e9a4a9f1d35f110e0b1c9dd16a7478a7d501307e7da765a3cfabd namespace=k8s.io Oct 9 01:04:41.277408 containerd[1589]: time="2024-10-09T01:04:41.277094619Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:04:41.289272 containerd[1589]: time="2024-10-09T01:04:41.289226888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8w6fh,Uid:962372e4-80c6-4e39-87f0-b400601741aa,Namespace:calico-system,Attempt:0,}" Oct 9 01:04:41.434402 containerd[1589]: time="2024-10-09T01:04:41.433344711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 9 01:04:41.455657 containerd[1589]: time="2024-10-09T01:04:41.455496124Z" level=error msg="Failed to destroy network for sandbox \"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:04:41.458213 containerd[1589]: time="2024-10-09T01:04:41.458170330Z" level=error msg="encountered an error cleaning up failed sandbox \"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:04:41.458453 containerd[1589]: time="2024-10-09T01:04:41.458344130Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8w6fh,Uid:962372e4-80c6-4e39-87f0-b400601741aa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:04:41.459722 kubelet[3058]: E1009 01:04:41.458666 3058 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:04:41.459722 kubelet[3058]: E1009 01:04:41.458727 3058 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8w6fh" Oct 9 01:04:41.459722 kubelet[3058]: E1009 01:04:41.458750 3058 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8w6fh" Oct 9 01:04:41.459879 kubelet[3058]: E1009 01:04:41.458807 3058 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-8w6fh_calico-system(962372e4-80c6-4e39-87f0-b400601741aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8w6fh_calico-system(962372e4-80c6-4e39-87f0-b400601741aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8w6fh" podUID="962372e4-80c6-4e39-87f0-b400601741aa" Oct 9 01:04:41.499601 containerd[1589]: time="2024-10-09T01:04:41.499563069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9p5pp,Uid:7aafddbc-ac54-42e6-b889-af97eef6990a,Namespace:kube-system,Attempt:0,}" Oct 9 01:04:41.508131 containerd[1589]: time="2024-10-09T01:04:41.508090569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9ssv4,Uid:50252bd4-6374-4684-812d-2492dbd150ac,Namespace:kube-system,Attempt:0,}" Oct 9 01:04:41.509106 containerd[1589]: time="2024-10-09T01:04:41.509076811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77fc9fbf75-p4vjf,Uid:bc7f5ff4-df0c-49c4-ac5a-fd523d493fc3,Namespace:calico-system,Attempt:0,}" Oct 9 01:04:41.605167 containerd[1589]: time="2024-10-09T01:04:41.605054520Z" level=error msg="Failed to destroy network for sandbox \"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:04:41.608531 containerd[1589]: time="2024-10-09T01:04:41.607559686Z" level=error msg="encountered an error cleaning up failed sandbox \"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:04:41.613817 containerd[1589]: time="2024-10-09T01:04:41.613776541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9ssv4,Uid:50252bd4-6374-4684-812d-2492dbd150ac,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:04:41.614186 kubelet[3058]: E1009 01:04:41.614150 3058 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:04:41.614281 kubelet[3058]: E1009 01:04:41.614209 3058 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-9ssv4" Oct 9 01:04:41.614281 kubelet[3058]: E1009 01:04:41.614229 3058 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-9ssv4" Oct 9 01:04:41.614281 kubelet[3058]: E1009 01:04:41.614279 3058 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-9ssv4_kube-system(50252bd4-6374-4684-812d-2492dbd150ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-9ssv4_kube-system(50252bd4-6374-4684-812d-2492dbd150ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-9ssv4" podUID="50252bd4-6374-4684-812d-2492dbd150ac" Oct 9 01:04:41.618281 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc-shm.mount: Deactivated successfully. Oct 9 01:04:41.624708 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc-shm.mount: Deactivated successfully. 
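Every sandbox failure above reduces to the same precondition: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico/node writes once it is running, and at this point the calico-node container has not started yet (its image is still being pulled). A minimal sketch of that check, assuming only what the error text itself states:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	// calico/node writes this file once it is up; the CNI plugin refuses to
	// do any add/delete work until it exists.
	if _, err := os.Stat(nodenameFile); err != nil {
		fmt.Printf("stat %s: %v: check that the calico/node container is running and has mounted /var/lib/calico/\n", nodenameFile, err)
		os.Exit(1)
	}
	name, _ := os.ReadFile(nodenameFile)
	fmt.Printf("calico/node is up; nodename=%q\n", string(name))
}
```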
Oct 9 01:04:41.628466 containerd[1589]: time="2024-10-09T01:04:41.628418136Z" level=error msg="Failed to destroy network for sandbox \"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:04:41.631546 containerd[1589]: time="2024-10-09T01:04:41.631413943Z" level=error msg="encountered an error cleaning up failed sandbox \"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:04:41.631712 containerd[1589]: time="2024-10-09T01:04:41.631690384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9p5pp,Uid:7aafddbc-ac54-42e6-b889-af97eef6990a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:04:41.632358 kubelet[3058]: E1009 01:04:41.632331 3058 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:04:41.632802 kubelet[3058]: E1009 01:04:41.632690 3058 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-9p5pp" Oct 9 01:04:41.632945 kubelet[3058]: E1009 01:04:41.632914 3058 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-9p5pp" Oct 9 01:04:41.633175 kubelet[3058]: E1009 01:04:41.633150 3058 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-9p5pp_kube-system(7aafddbc-ac54-42e6-b889-af97eef6990a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-9p5pp_kube-system(7aafddbc-ac54-42e6-b889-af97eef6990a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-76f75df574-9p5pp" podUID="7aafddbc-ac54-42e6-b889-af97eef6990a" Oct 9 01:04:41.635132 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795-shm.mount: Deactivated successfully. Oct 9 01:04:41.648966 containerd[1589]: time="2024-10-09T01:04:41.648849344Z" level=error msg="Failed to destroy network for sandbox \"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:04:41.651245 containerd[1589]: time="2024-10-09T01:04:41.650040107Z" level=error msg="encountered an error cleaning up failed sandbox \"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:04:41.651245 containerd[1589]: time="2024-10-09T01:04:41.650116467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77fc9fbf75-p4vjf,Uid:bc7f5ff4-df0c-49c4-ac5a-fd523d493fc3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:04:41.651385 kubelet[3058]: E1009 01:04:41.651071 3058 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:04:41.651385 kubelet[3058]: E1009 01:04:41.651123 3058 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77fc9fbf75-p4vjf" Oct 9 01:04:41.651385 kubelet[3058]: E1009 01:04:41.651141 3058 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77fc9fbf75-p4vjf" Oct 9 01:04:41.651488 kubelet[3058]: E1009 01:04:41.651199 3058 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77fc9fbf75-p4vjf_calico-system(bc7f5ff4-df0c-49c4-ac5a-fd523d493fc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77fc9fbf75-p4vjf_calico-system(bc7f5ff4-df0c-49c4-ac5a-fd523d493fc3)\\\": 
rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77fc9fbf75-p4vjf" podUID="bc7f5ff4-df0c-49c4-ac5a-fd523d493fc3" Oct 9 01:04:41.651797 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad-shm.mount: Deactivated successfully. Oct 9 01:04:42.435082 kubelet[3058]: I1009 01:04:42.434703 3058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Oct 9 01:04:42.437123 containerd[1589]: time="2024-10-09T01:04:42.436325175Z" level=info msg="StopPodSandbox for \"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\"" Oct 9 01:04:42.437123 containerd[1589]: time="2024-10-09T01:04:42.436757056Z" level=info msg="Ensure that sandbox 2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad in task-service has been cleanup successfully" Oct 9 01:04:42.443274 kubelet[3058]: I1009 01:04:42.443018 3058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Oct 9 01:04:42.445530 containerd[1589]: time="2024-10-09T01:04:42.445464877Z" level=info msg="StopPodSandbox for \"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\"" Oct 9 01:04:42.446234 containerd[1589]: time="2024-10-09T01:04:42.445707758Z" level=info msg="Ensure that sandbox fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc in task-service has been cleanup successfully" Oct 9 01:04:42.449948 kubelet[3058]: I1009 01:04:42.449865 3058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Oct 9 01:04:42.451519 containerd[1589]: time="2024-10-09T01:04:42.451353451Z" level=info msg="StopPodSandbox for \"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\"" Oct 9 01:04:42.451898 containerd[1589]: time="2024-10-09T01:04:42.451851092Z" level=info msg="Ensure that sandbox f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc in task-service has been cleanup successfully" Oct 9 01:04:42.456952 kubelet[3058]: I1009 01:04:42.456462 3058 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Oct 9 01:04:42.460841 containerd[1589]: time="2024-10-09T01:04:42.460799153Z" level=info msg="StopPodSandbox for \"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\"" Oct 9 01:04:42.462567 containerd[1589]: time="2024-10-09T01:04:42.462526357Z" level=info msg="Ensure that sandbox 691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795 in task-service has been cleanup successfully" Oct 9 01:04:42.502786 containerd[1589]: time="2024-10-09T01:04:42.502743693Z" level=error msg="StopPodSandbox for \"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\" failed" error="failed to destroy network for sandbox \"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Oct 9 01:04:42.503501 kubelet[3058]: E1009 01:04:42.503144 3058 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Oct 9 01:04:42.503501 kubelet[3058]: E1009 01:04:42.503236 3058 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc"} Oct 9 01:04:42.503501 kubelet[3058]: E1009 01:04:42.503271 3058 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"962372e4-80c6-4e39-87f0-b400601741aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:04:42.503501 kubelet[3058]: E1009 01:04:42.503309 3058 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"962372e4-80c6-4e39-87f0-b400601741aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8w6fh" podUID="962372e4-80c6-4e39-87f0-b400601741aa" Oct 9 01:04:42.507225 containerd[1589]: time="2024-10-09T01:04:42.507170303Z" level=error msg="StopPodSandbox for \"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\" failed" error="failed to destroy network for sandbox \"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:04:42.507540 kubelet[3058]: E1009 01:04:42.507508 3058 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Oct 9 01:04:42.507625 kubelet[3058]: E1009 01:04:42.507610 3058 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad"} Oct 9 01:04:42.507960 kubelet[3058]: E1009 01:04:42.507932 3058 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bc7f5ff4-df0c-49c4-ac5a-fd523d493fc3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:04:42.508039 kubelet[3058]: E1009 01:04:42.507986 3058 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bc7f5ff4-df0c-49c4-ac5a-fd523d493fc3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77fc9fbf75-p4vjf" podUID="bc7f5ff4-df0c-49c4-ac5a-fd523d493fc3" Oct 9 01:04:42.514402 containerd[1589]: time="2024-10-09T01:04:42.514346400Z" level=error msg="StopPodSandbox for \"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\" failed" error="failed to destroy network for sandbox \"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:04:42.514674 kubelet[3058]: E1009 01:04:42.514636 3058 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Oct 9 01:04:42.514739 kubelet[3058]: E1009 01:04:42.514684 3058 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc"} Oct 9 01:04:42.514739 kubelet[3058]: E1009 01:04:42.514719 3058 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"50252bd4-6374-4684-812d-2492dbd150ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:04:42.514818 kubelet[3058]: E1009 01:04:42.514750 3058 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"50252bd4-6374-4684-812d-2492dbd150ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-9ssv4" podUID="50252bd4-6374-4684-812d-2492dbd150ac" Oct 9 01:04:42.518984 containerd[1589]: time="2024-10-09T01:04:42.518936531Z" level=error msg="StopPodSandbox for 
\"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\" failed" error="failed to destroy network for sandbox \"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:04:42.519259 kubelet[3058]: E1009 01:04:42.519232 3058 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Oct 9 01:04:42.519310 kubelet[3058]: E1009 01:04:42.519281 3058 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795"} Oct 9 01:04:42.519335 kubelet[3058]: E1009 01:04:42.519315 3058 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7aafddbc-ac54-42e6-b889-af97eef6990a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:04:42.519387 kubelet[3058]: E1009 01:04:42.519343 3058 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7aafddbc-ac54-42e6-b889-af97eef6990a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-9p5pp" podUID="7aafddbc-ac54-42e6-b889-af97eef6990a" Oct 9 01:04:47.157067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1550023259.mount: Deactivated successfully. 
Oct 9 01:04:47.182315 containerd[1589]: time="2024-10-09T01:04:47.181484220Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:47.183029 containerd[1589]: time="2024-10-09T01:04:47.182564982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Oct 9 01:04:47.183613 containerd[1589]: time="2024-10-09T01:04:47.183576984Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:47.188958 containerd[1589]: time="2024-10-09T01:04:47.188884317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:47.190918 containerd[1589]: time="2024-10-09T01:04:47.190849761Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 5.756297327s" Oct 9 01:04:47.190918 containerd[1589]: time="2024-10-09T01:04:47.190899121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Oct 9 01:04:47.217404 containerd[1589]: time="2024-10-09T01:04:47.217262142Z" level=info msg="CreateContainer within sandbox \"a1d9cf9f8172d542a11772a537c35578423c3f1afaf0ae366c0e376edac4f0d1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 01:04:47.235399 containerd[1589]: time="2024-10-09T01:04:47.235075984Z" level=info msg="CreateContainer within sandbox \"a1d9cf9f8172d542a11772a537c35578423c3f1afaf0ae366c0e376edac4f0d1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bd8f6452c4a68eeac9a42e926966238c1781ff262579265a83681de95b8b60c2\"" Oct 9 01:04:47.237996 containerd[1589]: time="2024-10-09T01:04:47.237641710Z" level=info msg="StartContainer for \"bd8f6452c4a68eeac9a42e926966238c1781ff262579265a83681de95b8b60c2\"" Oct 9 01:04:47.309896 containerd[1589]: time="2024-10-09T01:04:47.309835077Z" level=info msg="StartContainer for \"bd8f6452c4a68eeac9a42e926966238c1781ff262579265a83681de95b8b60c2\" returns successfully" Oct 9 01:04:47.476948 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 01:04:47.477066 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 9 01:04:47.493553 kubelet[3058]: I1009 01:04:47.493032 3058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-gm4ng" podStartSLOduration=1.679462265 podStartE2EDuration="17.492984301s" podCreationTimestamp="2024-10-09 01:04:30 +0000 UTC" firstStartedPulling="2024-10-09 01:04:31.378599408 +0000 UTC m=+20.249227203" lastFinishedPulling="2024-10-09 01:04:47.192121404 +0000 UTC m=+36.062749239" observedRunningTime="2024-10-09 01:04:47.492267539 +0000 UTC m=+36.362895374" watchObservedRunningTime="2024-10-09 01:04:47.492984301 +0000 UTC m=+36.363612136" Oct 9 01:04:48.477083 kubelet[3058]: I1009 01:04:48.477013 3058 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:04:51.658789 kubelet[3058]: I1009 01:04:51.658514 3058 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:04:54.287093 containerd[1589]: time="2024-10-09T01:04:54.286160831Z" level=info msg="StopPodSandbox for \"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\"" Oct 9 01:04:54.287093 containerd[1589]: time="2024-10-09T01:04:54.286211671Z" level=info msg="StopPodSandbox for \"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\"" Oct 9 01:04:54.500672 containerd[1589]: 2024-10-09 01:04:54.392 [INFO][4323] k8s.go 608: Cleaning up netns ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Oct 9 01:04:54.500672 containerd[1589]: 2024-10-09 01:04:54.392 [INFO][4323] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" iface="eth0" netns="/var/run/netns/cni-b028de65-c448-fc40-8c3c-41b67b0c2189" Oct 9 01:04:54.500672 containerd[1589]: 2024-10-09 01:04:54.393 [INFO][4323] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" iface="eth0" netns="/var/run/netns/cni-b028de65-c448-fc40-8c3c-41b67b0c2189" Oct 9 01:04:54.500672 containerd[1589]: 2024-10-09 01:04:54.394 [INFO][4323] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" iface="eth0" netns="/var/run/netns/cni-b028de65-c448-fc40-8c3c-41b67b0c2189" Oct 9 01:04:54.500672 containerd[1589]: 2024-10-09 01:04:54.394 [INFO][4323] k8s.go 615: Releasing IP address(es) ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Oct 9 01:04:54.500672 containerd[1589]: 2024-10-09 01:04:54.394 [INFO][4323] utils.go 188: Calico CNI releasing IP address ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Oct 9 01:04:54.500672 containerd[1589]: 2024-10-09 01:04:54.472 [INFO][4344] ipam_plugin.go 417: Releasing address using handleID ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" HandleID="k8s-pod-network.2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Workload="ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0" Oct 9 01:04:54.500672 containerd[1589]: 2024-10-09 01:04:54.473 [INFO][4344] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:04:54.500672 containerd[1589]: 2024-10-09 01:04:54.473 [INFO][4344] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:04:54.500672 containerd[1589]: 2024-10-09 01:04:54.487 [WARNING][4344] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" HandleID="k8s-pod-network.2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Workload="ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0" Oct 9 01:04:54.500672 containerd[1589]: 2024-10-09 01:04:54.487 [INFO][4344] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" HandleID="k8s-pod-network.2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Workload="ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0" Oct 9 01:04:54.500672 containerd[1589]: 2024-10-09 01:04:54.491 [INFO][4344] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:04:54.500672 containerd[1589]: 2024-10-09 01:04:54.496 [INFO][4323] k8s.go 621: Teardown processing complete. ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Oct 9 01:04:54.503580 containerd[1589]: time="2024-10-09T01:04:54.503382743Z" level=info msg="TearDown network for sandbox \"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\" successfully" Oct 9 01:04:54.503580 containerd[1589]: time="2024-10-09T01:04:54.503419104Z" level=info msg="StopPodSandbox for \"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\" returns successfully" Oct 9 01:04:54.504010 systemd[1]: run-netns-cni\x2db028de65\x2dc448\x2dfc40\x2d8c3c\x2d41b67b0c2189.mount: Deactivated successfully. Oct 9 01:04:54.508095 containerd[1589]: time="2024-10-09T01:04:54.508062049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77fc9fbf75-p4vjf,Uid:bc7f5ff4-df0c-49c4-ac5a-fd523d493fc3,Namespace:calico-system,Attempt:1,}" Oct 9 01:04:54.524177 containerd[1589]: 2024-10-09 01:04:54.382 [INFO][4319] k8s.go 608: Cleaning up netns ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Oct 9 01:04:54.524177 containerd[1589]: 2024-10-09 01:04:54.382 [INFO][4319] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" iface="eth0" netns="/var/run/netns/cni-02780bae-9100-54c3-9c8a-8a16e6c6eb06" Oct 9 01:04:54.524177 containerd[1589]: 2024-10-09 01:04:54.382 [INFO][4319] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" iface="eth0" netns="/var/run/netns/cni-02780bae-9100-54c3-9c8a-8a16e6c6eb06" Oct 9 01:04:54.524177 containerd[1589]: 2024-10-09 01:04:54.383 [INFO][4319] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" iface="eth0" netns="/var/run/netns/cni-02780bae-9100-54c3-9c8a-8a16e6c6eb06" Oct 9 01:04:54.524177 containerd[1589]: 2024-10-09 01:04:54.383 [INFO][4319] k8s.go 615: Releasing IP address(es) ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Oct 9 01:04:54.524177 containerd[1589]: 2024-10-09 01:04:54.383 [INFO][4319] utils.go 188: Calico CNI releasing IP address ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Oct 9 01:04:54.524177 containerd[1589]: 2024-10-09 01:04:54.472 [INFO][4343] ipam_plugin.go 417: Releasing address using handleID ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" HandleID="k8s-pod-network.691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0" Oct 9 01:04:54.524177 containerd[1589]: 2024-10-09 01:04:54.474 [INFO][4343] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:04:54.524177 containerd[1589]: 2024-10-09 01:04:54.491 [INFO][4343] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:04:54.524177 containerd[1589]: 2024-10-09 01:04:54.514 [WARNING][4343] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" HandleID="k8s-pod-network.691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0" Oct 9 01:04:54.524177 containerd[1589]: 2024-10-09 01:04:54.514 [INFO][4343] ipam_plugin.go 445: Releasing address using workloadID ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" HandleID="k8s-pod-network.691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0" Oct 9 01:04:54.524177 containerd[1589]: 2024-10-09 01:04:54.516 [INFO][4343] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:04:54.524177 containerd[1589]: 2024-10-09 01:04:54.519 [INFO][4319] k8s.go 621: Teardown processing complete. ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Oct 9 01:04:54.531885 systemd[1]: run-netns-cni\x2d02780bae\x2d9100\x2d54c3\x2d9c8a\x2d8a16e6c6eb06.mount: Deactivated successfully. 
Oct 9 01:04:54.534300 containerd[1589]: time="2024-10-09T01:04:54.534244913Z" level=info msg="TearDown network for sandbox \"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\" successfully" Oct 9 01:04:54.534804 containerd[1589]: time="2024-10-09T01:04:54.534375754Z" level=info msg="StopPodSandbox for \"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\" returns successfully" Oct 9 01:04:54.535559 containerd[1589]: time="2024-10-09T01:04:54.535522880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9p5pp,Uid:7aafddbc-ac54-42e6-b889-af97eef6990a,Namespace:kube-system,Attempt:1,}" Oct 9 01:04:54.695142 systemd-networkd[1255]: cali9fa114acc1a: Link UP Oct 9 01:04:54.697311 systemd-networkd[1255]: cali9fa114acc1a: Gained carrier Oct 9 01:04:54.726956 containerd[1589]: 2024-10-09 01:04:54.564 [INFO][4357] utils.go 100: File /var/lib/calico/mtu does not exist Oct 9 01:04:54.726956 containerd[1589]: 2024-10-09 01:04:54.581 [INFO][4357] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0 calico-kube-controllers-77fc9fbf75- calico-system bc7f5ff4-df0c-49c4-ac5a-fd523d493fc3 710 0 2024-10-09 01:04:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77fc9fbf75 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4116-0-0-5-47b5cb1617 calico-kube-controllers-77fc9fbf75-p4vjf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9fa114acc1a [] []}} ContainerID="1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e" Namespace="calico-system" Pod="calico-kube-controllers-77fc9fbf75-p4vjf" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-" Oct 9 01:04:54.726956 containerd[1589]: 2024-10-09 01:04:54.581 [INFO][4357] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e" Namespace="calico-system" Pod="calico-kube-controllers-77fc9fbf75-p4vjf" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0" Oct 9 01:04:54.726956 containerd[1589]: 2024-10-09 01:04:54.625 [INFO][4380] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e" HandleID="k8s-pod-network.1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e" Workload="ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0" Oct 9 01:04:54.726956 containerd[1589]: 2024-10-09 01:04:54.637 [INFO][4380] ipam_plugin.go 270: Auto assigning IP ContainerID="1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e" HandleID="k8s-pod-network.1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e" Workload="ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003180a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4116-0-0-5-47b5cb1617", "pod":"calico-kube-controllers-77fc9fbf75-p4vjf", "timestamp":"2024-10-09 01:04:54.625202452 +0000 UTC"}, Hostname:"ci-4116-0-0-5-47b5cb1617", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:04:54.726956 containerd[1589]: 2024-10-09 01:04:54.637 [INFO][4380] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:04:54.726956 containerd[1589]: 2024-10-09 01:04:54.637 [INFO][4380] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:04:54.726956 containerd[1589]: 2024-10-09 01:04:54.637 [INFO][4380] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-5-47b5cb1617' Oct 9 01:04:54.726956 containerd[1589]: 2024-10-09 01:04:54.639 [INFO][4380] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:54.726956 containerd[1589]: 2024-10-09 01:04:54.645 [INFO][4380] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:54.726956 containerd[1589]: 2024-10-09 01:04:54.652 [INFO][4380] ipam.go 489: Trying affinity for 192.168.22.0/26 host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:54.726956 containerd[1589]: 2024-10-09 01:04:54.657 [INFO][4380] ipam.go 155: Attempting to load block cidr=192.168.22.0/26 host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:54.726956 containerd[1589]: 2024-10-09 01:04:54.659 [INFO][4380] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:54.726956 containerd[1589]: 2024-10-09 01:04:54.660 [INFO][4380] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:54.726956 containerd[1589]: 2024-10-09 01:04:54.661 [INFO][4380] ipam.go 1685: Creating new handle: k8s-pod-network.1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e Oct 9 01:04:54.726956 containerd[1589]: 2024-10-09 01:04:54.670 [INFO][4380] ipam.go 1203: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:54.726956 containerd[1589]: 2024-10-09 01:04:54.676 [INFO][4380] ipam.go 1216: Successfully claimed IPs: [192.168.22.1/26] block=192.168.22.0/26 handle="k8s-pod-network.1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:54.726956 containerd[1589]: 2024-10-09 01:04:54.676 [INFO][4380] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.22.1/26] handle="k8s-pod-network.1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:54.726956 containerd[1589]: 2024-10-09 01:04:54.676 [INFO][4380] ipam_plugin.go 379: Released host-wide IPAM lock. 
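The IPAM lines above show the node confirming an affinity for the block 192.168.22.0/26 and assigning 192.168.22.1 out of it as the first workload address. As a worked illustration of the block arithmetic only (not Calico's allocator), a /26 covers 64 addresses:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.22.0/26")
	size := 1 << (32 - block.Bits()) // 2^(32-26) = 64 addresses in the block
	first := block.Addr().Next()     // .0 is the block base; .1 is the first address handed out here
	fmt.Printf("block %s: %d addresses, first assignment %s/32\n", block, size, first)
	fmt.Println("192.168.22.1 inside block:", block.Contains(netip.MustParseAddr("192.168.22.1")))
}
```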
Oct 9 01:04:54.726956 containerd[1589]: 2024-10-09 01:04:54.676 [INFO][4380] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.22.1/26] IPv6=[] ContainerID="1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e" HandleID="k8s-pod-network.1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e" Workload="ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0" Oct 9 01:04:54.727495 containerd[1589]: 2024-10-09 01:04:54.679 [INFO][4357] k8s.go 386: Populated endpoint ContainerID="1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e" Namespace="calico-system" Pod="calico-kube-controllers-77fc9fbf75-p4vjf" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0", GenerateName:"calico-kube-controllers-77fc9fbf75-", Namespace:"calico-system", SelfLink:"", UID:"bc7f5ff4-df0c-49c4-ac5a-fd523d493fc3", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77fc9fbf75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-5-47b5cb1617", ContainerID:"", Pod:"calico-kube-controllers-77fc9fbf75-p4vjf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9fa114acc1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:04:54.727495 containerd[1589]: 2024-10-09 01:04:54.679 [INFO][4357] k8s.go 387: Calico CNI using IPs: [192.168.22.1/32] ContainerID="1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e" Namespace="calico-system" Pod="calico-kube-controllers-77fc9fbf75-p4vjf" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0" Oct 9 01:04:54.727495 containerd[1589]: 2024-10-09 01:04:54.679 [INFO][4357] dataplane_linux.go 68: Setting the host side veth name to cali9fa114acc1a ContainerID="1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e" Namespace="calico-system" Pod="calico-kube-controllers-77fc9fbf75-p4vjf" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0" Oct 9 01:04:54.727495 containerd[1589]: 2024-10-09 01:04:54.694 [INFO][4357] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e" Namespace="calico-system" Pod="calico-kube-controllers-77fc9fbf75-p4vjf" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0" Oct 9 01:04:54.727495 containerd[1589]: 2024-10-09 01:04:54.697 [INFO][4357] 
k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e" Namespace="calico-system" Pod="calico-kube-controllers-77fc9fbf75-p4vjf" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0", GenerateName:"calico-kube-controllers-77fc9fbf75-", Namespace:"calico-system", SelfLink:"", UID:"bc7f5ff4-df0c-49c4-ac5a-fd523d493fc3", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77fc9fbf75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-5-47b5cb1617", ContainerID:"1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e", Pod:"calico-kube-controllers-77fc9fbf75-p4vjf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9fa114acc1a", MAC:"52:e2:2b:12:1f:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:04:54.727495 containerd[1589]: 2024-10-09 01:04:54.713 [INFO][4357] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e" Namespace="calico-system" Pod="calico-kube-controllers-77fc9fbf75-p4vjf" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0" Oct 9 01:04:54.765117 systemd-networkd[1255]: cali71c582822fb: Link UP Oct 9 01:04:54.765323 systemd-networkd[1255]: cali71c582822fb: Gained carrier Oct 9 01:04:54.803196 containerd[1589]: 2024-10-09 01:04:54.588 [INFO][4367] utils.go 100: File /var/lib/calico/mtu does not exist Oct 9 01:04:54.803196 containerd[1589]: 2024-10-09 01:04:54.608 [INFO][4367] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0 coredns-76f75df574- kube-system 7aafddbc-ac54-42e6-b889-af97eef6990a 709 0 2024-10-09 01:04:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4116-0-0-5-47b5cb1617 coredns-76f75df574-9p5pp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali71c582822fb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39" Namespace="kube-system" Pod="coredns-76f75df574-9p5pp" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-" Oct 9 01:04:54.803196 
containerd[1589]: 2024-10-09 01:04:54.608 [INFO][4367] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39" Namespace="kube-system" Pod="coredns-76f75df574-9p5pp" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0" Oct 9 01:04:54.803196 containerd[1589]: 2024-10-09 01:04:54.646 [INFO][4385] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39" HandleID="k8s-pod-network.7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0" Oct 9 01:04:54.803196 containerd[1589]: 2024-10-09 01:04:54.661 [INFO][4385] ipam_plugin.go 270: Auto assigning IP ContainerID="7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39" HandleID="k8s-pod-network.7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001169e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4116-0-0-5-47b5cb1617", "pod":"coredns-76f75df574-9p5pp", "timestamp":"2024-10-09 01:04:54.64669377 +0000 UTC"}, Hostname:"ci-4116-0-0-5-47b5cb1617", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:04:54.803196 containerd[1589]: 2024-10-09 01:04:54.661 [INFO][4385] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:04:54.803196 containerd[1589]: 2024-10-09 01:04:54.677 [INFO][4385] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
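The IPAM request above asks for one IPv4 and zero IPv6 addresses on behalf of coredns-76f75df574-9p5pp. The request shape the plugin prints can be mirrored with a small local struct; this is an illustrative stand-in whose field names and values are copied from the log line (the handle string appears a few entries later), not the real Calico IPAM type.

    package main

    import "fmt"

    // autoAssignArgs mirrors the fields printed in the log entry above
    // (Num4, Num6, HandleID, Attrs, Hostname, IntendedUse). It is a local
    // illustration only, not the actual Calico IPAM type.
    type autoAssignArgs struct {
        Num4, Num6  int
        HandleID    string
        Attrs       map[string]string
        Hostname    string
        IntendedUse string
    }

    func main() {
        args := autoAssignArgs{
            Num4:     1, // one IPv4 address for the pod's eth0
            Num6:     0, // no IPv6 pool in use here
            HandleID: "k8s-pod-network.7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39",
            Attrs: map[string]string{
                "namespace": "kube-system",
                "node":      "ci-4116-0-0-5-47b5cb1617",
                "pod":       "coredns-76f75df574-9p5pp",
                "timestamp": "2024-10-09 01:04:54.64669377 +0000 UTC",
            },
            Hostname:    "ci-4116-0-0-5-47b5cb1617",
            IntendedUse: "Workload",
        }
        fmt.Printf("%+v\n", args)
    }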
Oct 9 01:04:54.803196 containerd[1589]: 2024-10-09 01:04:54.677 [INFO][4385] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-5-47b5cb1617' Oct 9 01:04:54.803196 containerd[1589]: 2024-10-09 01:04:54.680 [INFO][4385] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:54.803196 containerd[1589]: 2024-10-09 01:04:54.688 [INFO][4385] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:54.803196 containerd[1589]: 2024-10-09 01:04:54.697 [INFO][4385] ipam.go 489: Trying affinity for 192.168.22.0/26 host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:54.803196 containerd[1589]: 2024-10-09 01:04:54.701 [INFO][4385] ipam.go 155: Attempting to load block cidr=192.168.22.0/26 host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:54.803196 containerd[1589]: 2024-10-09 01:04:54.706 [INFO][4385] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:54.803196 containerd[1589]: 2024-10-09 01:04:54.706 [INFO][4385] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:54.803196 containerd[1589]: 2024-10-09 01:04:54.708 [INFO][4385] ipam.go 1685: Creating new handle: k8s-pod-network.7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39 Oct 9 01:04:54.803196 containerd[1589]: 2024-10-09 01:04:54.715 [INFO][4385] ipam.go 1203: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:54.803196 containerd[1589]: 2024-10-09 01:04:54.738 [INFO][4385] ipam.go 1216: Successfully claimed IPs: [192.168.22.2/26] block=192.168.22.0/26 handle="k8s-pod-network.7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:54.803196 containerd[1589]: 2024-10-09 01:04:54.738 [INFO][4385] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.22.2/26] handle="k8s-pod-network.7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:54.803196 containerd[1589]: 2024-10-09 01:04:54.738 [INFO][4385] ipam_plugin.go 379: Released host-wide IPAM lock. 
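The allocation steps above (acquire the host-wide IPAM lock, confirm the node's affinity for block 192.168.22.0/26, assign from the block, write the block back, release the lock) amount to claiming the lowest unused address in an affine /26. The following is a toy sketch of that idea, not Calico's actual allocator; the block CIDR and the already-claimed .1 are taken from the log, everything else is invented.

    package main

    import (
        "errors"
        "fmt"
        "net"
        "sync"
    )

    // toyBlock models a /26 IPAM block with per-index "in use" flags. It is a
    // simplified stand-in for the block loaded for 192.168.22.0/26.
    type toyBlock struct {
        mu    sync.Mutex // stands in for the host-wide IPAM lock
        cidr  *net.IPNet
        inUse [64]bool // a /26 holds 64 addresses
    }

    // assignNext claims the lowest free address, mirroring the
    // "Attempting to assign 1 addresses from block" step in the log.
    func (b *toyBlock) assignNext() (net.IP, error) {
        b.mu.Lock()
        defer b.mu.Unlock() // "Released host-wide IPAM lock."

        base := b.cidr.IP.To4()
        for i := 1; i < len(b.inUse); i++ { // skip .0, the network address
            if !b.inUse[i] {
                b.inUse[i] = true
                return net.IPv4(base[0], base[1], base[2], base[3]+byte(i)), nil
            }
        }
        return nil, errors.New("block exhausted")
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.22.0/26")
        block := &toyBlock{cidr: cidr}
        block.inUse[1] = true // .1 was already claimed for calico-kube-controllers

        ip, err := block.assignNext()
        fmt.Println(ip, err) // 192.168.22.2 <nil>, matching the address claimed above
    }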
Oct 9 01:04:54.803196 containerd[1589]: 2024-10-09 01:04:54.738 [INFO][4385] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.22.2/26] IPv6=[] ContainerID="7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39" HandleID="k8s-pod-network.7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0" Oct 9 01:04:54.803757 containerd[1589]: 2024-10-09 01:04:54.748 [INFO][4367] k8s.go 386: Populated endpoint ContainerID="7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39" Namespace="kube-system" Pod="coredns-76f75df574-9p5pp" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7aafddbc-ac54-42e6-b889-af97eef6990a", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 4, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-5-47b5cb1617", ContainerID:"", Pod:"coredns-76f75df574-9p5pp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71c582822fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:04:54.803757 containerd[1589]: 2024-10-09 01:04:54.748 [INFO][4367] k8s.go 387: Calico CNI using IPs: [192.168.22.2/32] ContainerID="7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39" Namespace="kube-system" Pod="coredns-76f75df574-9p5pp" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0" Oct 9 01:04:54.803757 containerd[1589]: 2024-10-09 01:04:54.748 [INFO][4367] dataplane_linux.go 68: Setting the host side veth name to cali71c582822fb ContainerID="7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39" Namespace="kube-system" Pod="coredns-76f75df574-9p5pp" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0" Oct 9 01:04:54.803757 containerd[1589]: 2024-10-09 01:04:54.766 [INFO][4367] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39" Namespace="kube-system" Pod="coredns-76f75df574-9p5pp" 
WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0" Oct 9 01:04:54.803757 containerd[1589]: 2024-10-09 01:04:54.768 [INFO][4367] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39" Namespace="kube-system" Pod="coredns-76f75df574-9p5pp" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7aafddbc-ac54-42e6-b889-af97eef6990a", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 4, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-5-47b5cb1617", ContainerID:"7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39", Pod:"coredns-76f75df574-9p5pp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71c582822fb", MAC:"16:3e:be:68:1a:39", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:04:54.803757 containerd[1589]: 2024-10-09 01:04:54.798 [INFO][4367] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39" Namespace="kube-system" Pod="coredns-76f75df574-9p5pp" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0" Oct 9 01:04:54.813732 containerd[1589]: time="2024-10-09T01:04:54.813142524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:04:54.813732 containerd[1589]: time="2024-10-09T01:04:54.813700887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:04:54.813732 containerd[1589]: time="2024-10-09T01:04:54.813712407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:54.814065 containerd[1589]: time="2024-10-09T01:04:54.814008449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:54.836256 containerd[1589]: time="2024-10-09T01:04:54.836161491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:04:54.836837 containerd[1589]: time="2024-10-09T01:04:54.836799214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:04:54.837030 containerd[1589]: time="2024-10-09T01:04:54.837000095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:54.837219 containerd[1589]: time="2024-10-09T01:04:54.837189296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:54.875383 containerd[1589]: time="2024-10-09T01:04:54.875341866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77fc9fbf75-p4vjf,Uid:bc7f5ff4-df0c-49c4-ac5a-fd523d493fc3,Namespace:calico-system,Attempt:1,} returns sandbox id \"1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e\"" Oct 9 01:04:54.882259 containerd[1589]: time="2024-10-09T01:04:54.882206823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 01:04:54.897658 containerd[1589]: time="2024-10-09T01:04:54.897621988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9p5pp,Uid:7aafddbc-ac54-42e6-b889-af97eef6990a,Namespace:kube-system,Attempt:1,} returns sandbox id \"7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39\"" Oct 9 01:04:54.901997 containerd[1589]: time="2024-10-09T01:04:54.901959252Z" level=info msg="CreateContainer within sandbox \"7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:04:54.929468 containerd[1589]: time="2024-10-09T01:04:54.929373442Z" level=info msg="CreateContainer within sandbox \"7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c02b161c2644085ff98bb7cc52df95ed82e86612becf47736401dd198be5124e\"" Oct 9 01:04:54.933109 containerd[1589]: time="2024-10-09T01:04:54.930244607Z" level=info msg="StartContainer for \"c02b161c2644085ff98bb7cc52df95ed82e86612becf47736401dd198be5124e\"" Oct 9 01:04:54.994187 containerd[1589]: time="2024-10-09T01:04:54.991973586Z" level=info msg="StartContainer for \"c02b161c2644085ff98bb7cc52df95ed82e86612becf47736401dd198be5124e\" returns successfully" Oct 9 01:04:55.294474 containerd[1589]: time="2024-10-09T01:04:55.294356472Z" level=info msg="StopPodSandbox for \"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\"" Oct 9 01:04:55.396954 containerd[1589]: 2024-10-09 01:04:55.346 [INFO][4553] k8s.go 608: Cleaning up netns ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Oct 9 01:04:55.396954 containerd[1589]: 2024-10-09 01:04:55.347 [INFO][4553] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" iface="eth0" netns="/var/run/netns/cni-721c15c8-f319-3488-1bf7-755cd00ab972" Oct 9 01:04:55.396954 containerd[1589]: 2024-10-09 01:04:55.347 [INFO][4553] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" iface="eth0" netns="/var/run/netns/cni-721c15c8-f319-3488-1bf7-755cd00ab972" Oct 9 01:04:55.396954 containerd[1589]: 2024-10-09 01:04:55.348 [INFO][4553] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" iface="eth0" netns="/var/run/netns/cni-721c15c8-f319-3488-1bf7-755cd00ab972" Oct 9 01:04:55.396954 containerd[1589]: 2024-10-09 01:04:55.348 [INFO][4553] k8s.go 615: Releasing IP address(es) ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Oct 9 01:04:55.396954 containerd[1589]: 2024-10-09 01:04:55.348 [INFO][4553] utils.go 188: Calico CNI releasing IP address ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Oct 9 01:04:55.396954 containerd[1589]: 2024-10-09 01:04:55.377 [INFO][4560] ipam_plugin.go 417: Releasing address using handleID ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" HandleID="k8s-pod-network.fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Workload="ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0" Oct 9 01:04:55.396954 containerd[1589]: 2024-10-09 01:04:55.377 [INFO][4560] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:04:55.396954 containerd[1589]: 2024-10-09 01:04:55.377 [INFO][4560] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:04:55.396954 containerd[1589]: 2024-10-09 01:04:55.389 [WARNING][4560] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" HandleID="k8s-pod-network.fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Workload="ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0" Oct 9 01:04:55.396954 containerd[1589]: 2024-10-09 01:04:55.389 [INFO][4560] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" HandleID="k8s-pod-network.fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Workload="ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0" Oct 9 01:04:55.396954 containerd[1589]: 2024-10-09 01:04:55.391 [INFO][4560] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:04:55.396954 containerd[1589]: 2024-10-09 01:04:55.394 [INFO][4553] k8s.go 621: Teardown processing complete. ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Oct 9 01:04:55.398108 containerd[1589]: time="2024-10-09T01:04:55.397100671Z" level=info msg="TearDown network for sandbox \"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\" successfully" Oct 9 01:04:55.398108 containerd[1589]: time="2024-10-09T01:04:55.397126071Z" level=info msg="StopPodSandbox for \"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\" returns successfully" Oct 9 01:04:55.398108 containerd[1589]: time="2024-10-09T01:04:55.397781755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8w6fh,Uid:962372e4-80c6-4e39-87f0-b400601741aa,Namespace:calico-system,Attempt:1,}" Oct 9 01:04:55.514332 systemd[1]: run-netns-cni\x2d721c15c8\x2df319\x2d3488\x2d1bf7\x2d755cd00ab972.mount: Deactivated successfully. 
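The teardown above releases the sandbox's address by handle ID and, when IPAM has no record of it, logs the WARNING "Asked to release address but it doesn't exist. Ignoring" and completes the teardown anyway. Below is a minimal sketch of that idempotent-release pattern, with invented names and a placeholder handle rather than the real plugin code.

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("handle not found")

    // toyIPAM maps release handles to the addresses they own; purely illustrative.
    type toyIPAM struct {
        byHandle map[string][]string
    }

    // releaseByHandle frees whatever the handle owns. A missing handle is not an
    // error for teardown: the caller only needs the address to be gone.
    func (p *toyIPAM) releaseByHandle(handle string) error {
        if _, ok := p.byHandle[handle]; !ok {
            return errNotFound
        }
        delete(p.byHandle, handle)
        return nil
    }

    func teardown(p *toyIPAM, handle string) {
        if err := p.releaseByHandle(handle); errors.Is(err, errNotFound) {
            // Mirrors the WARNING above: the address was never assigned or was
            // already released, so there is nothing left to do.
            fmt.Println("asked to release address but it doesn't exist; ignoring")
            return
        }
        fmt.Println("released addresses for", handle)
    }

    func main() {
        p := &toyIPAM{byHandle: map[string][]string{}}
        teardown(p, "k8s-pod-network.example-handle") // placeholder handle; prints the "ignoring" path
    }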
Oct 9 01:04:55.523499 kubelet[3058]: I1009 01:04:55.523390 3058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-9p5pp" podStartSLOduration=31.523342518 podStartE2EDuration="31.523342518s" podCreationTimestamp="2024-10-09 01:04:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:04:55.516500721 +0000 UTC m=+44.387128556" watchObservedRunningTime="2024-10-09 01:04:55.523342518 +0000 UTC m=+44.393970313" Oct 9 01:04:55.644549 systemd-networkd[1255]: cali836f73fee49: Link UP Oct 9 01:04:55.645808 systemd-networkd[1255]: cali836f73fee49: Gained carrier Oct 9 01:04:55.662383 containerd[1589]: 2024-10-09 01:04:55.456 [INFO][4570] utils.go 100: File /var/lib/calico/mtu does not exist Oct 9 01:04:55.662383 containerd[1589]: 2024-10-09 01:04:55.474 [INFO][4570] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0 csi-node-driver- calico-system 962372e4-80c6-4e39-87f0-b400601741aa 725 0 2024-10-09 01:04:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4116-0-0-5-47b5cb1617 csi-node-driver-8w6fh eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali836f73fee49 [] []}} ContainerID="5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e" Namespace="calico-system" Pod="csi-node-driver-8w6fh" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-" Oct 9 01:04:55.662383 containerd[1589]: 2024-10-09 01:04:55.474 [INFO][4570] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e" Namespace="calico-system" Pod="csi-node-driver-8w6fh" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0" Oct 9 01:04:55.662383 containerd[1589]: 2024-10-09 01:04:55.587 [INFO][4585] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e" HandleID="k8s-pod-network.5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e" Workload="ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0" Oct 9 01:04:55.662383 containerd[1589]: 2024-10-09 01:04:55.602 [INFO][4585] ipam_plugin.go 270: Auto assigning IP ContainerID="5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e" HandleID="k8s-pod-network.5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e" Workload="ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000388160), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4116-0-0-5-47b5cb1617", "pod":"csi-node-driver-8w6fh", "timestamp":"2024-10-09 01:04:55.587170426 +0000 UTC"}, Hostname:"ci-4116-0-0-5-47b5cb1617", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:04:55.662383 containerd[1589]: 2024-10-09 01:04:55.602 [INFO][4585] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
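The kubelet entry above reports podStartSLOduration=31.523342518s for coredns-76f75df574-9p5pp; with the pull timestamps zeroed (no image pull window to exclude), that is simply observedRunningTime minus podCreationTimestamp. A quick check of the arithmetic using only the timestamps printed in the log:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(layout, s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

        created := mustParse(layout, "2024-10-09 01:04:24 +0000 UTC")
        running := mustParse(layout, "2024-10-09 01:04:55.523342518 +0000 UTC")

        // With no image pull interval to subtract, the startup SLO duration is
        // just the gap between pod creation and the observed running time.
        fmt.Println(running.Sub(created)) // 31.523342518s, matching podStartSLOduration
    }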
Oct 9 01:04:55.662383 containerd[1589]: 2024-10-09 01:04:55.602 [INFO][4585] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:04:55.662383 containerd[1589]: 2024-10-09 01:04:55.602 [INFO][4585] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-5-47b5cb1617' Oct 9 01:04:55.662383 containerd[1589]: 2024-10-09 01:04:55.604 [INFO][4585] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:55.662383 containerd[1589]: 2024-10-09 01:04:55.611 [INFO][4585] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:55.662383 containerd[1589]: 2024-10-09 01:04:55.618 [INFO][4585] ipam.go 489: Trying affinity for 192.168.22.0/26 host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:55.662383 containerd[1589]: 2024-10-09 01:04:55.620 [INFO][4585] ipam.go 155: Attempting to load block cidr=192.168.22.0/26 host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:55.662383 containerd[1589]: 2024-10-09 01:04:55.623 [INFO][4585] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:55.662383 containerd[1589]: 2024-10-09 01:04:55.623 [INFO][4585] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:55.662383 containerd[1589]: 2024-10-09 01:04:55.625 [INFO][4585] ipam.go 1685: Creating new handle: k8s-pod-network.5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e Oct 9 01:04:55.662383 containerd[1589]: 2024-10-09 01:04:55.629 [INFO][4585] ipam.go 1203: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:55.662383 containerd[1589]: 2024-10-09 01:04:55.637 [INFO][4585] ipam.go 1216: Successfully claimed IPs: [192.168.22.3/26] block=192.168.22.0/26 handle="k8s-pod-network.5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:55.662383 containerd[1589]: 2024-10-09 01:04:55.637 [INFO][4585] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.22.3/26] handle="k8s-pod-network.5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:55.662383 containerd[1589]: 2024-10-09 01:04:55.637 [INFO][4585] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 01:04:55.662383 containerd[1589]: 2024-10-09 01:04:55.637 [INFO][4585] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.22.3/26] IPv6=[] ContainerID="5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e" HandleID="k8s-pod-network.5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e" Workload="ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0" Oct 9 01:04:55.663353 containerd[1589]: 2024-10-09 01:04:55.640 [INFO][4570] k8s.go 386: Populated endpoint ContainerID="5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e" Namespace="calico-system" Pod="csi-node-driver-8w6fh" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"962372e4-80c6-4e39-87f0-b400601741aa", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-5-47b5cb1617", ContainerID:"", Pod:"csi-node-driver-8w6fh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.22.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali836f73fee49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:04:55.663353 containerd[1589]: 2024-10-09 01:04:55.640 [INFO][4570] k8s.go 387: Calico CNI using IPs: [192.168.22.3/32] ContainerID="5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e" Namespace="calico-system" Pod="csi-node-driver-8w6fh" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0" Oct 9 01:04:55.663353 containerd[1589]: 2024-10-09 01:04:55.640 [INFO][4570] dataplane_linux.go 68: Setting the host side veth name to cali836f73fee49 ContainerID="5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e" Namespace="calico-system" Pod="csi-node-driver-8w6fh" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0" Oct 9 01:04:55.663353 containerd[1589]: 2024-10-09 01:04:55.646 [INFO][4570] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e" Namespace="calico-system" Pod="csi-node-driver-8w6fh" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0" Oct 9 01:04:55.663353 containerd[1589]: 2024-10-09 01:04:55.646 [INFO][4570] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e" Namespace="calico-system" Pod="csi-node-driver-8w6fh" 
WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"962372e4-80c6-4e39-87f0-b400601741aa", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-5-47b5cb1617", ContainerID:"5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e", Pod:"csi-node-driver-8w6fh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.22.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali836f73fee49", MAC:"f2:74:af:f9:7a:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:04:55.663353 containerd[1589]: 2024-10-09 01:04:55.660 [INFO][4570] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e" Namespace="calico-system" Pod="csi-node-driver-8w6fh" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0" Oct 9 01:04:55.682611 containerd[1589]: time="2024-10-09T01:04:55.682511185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:04:55.682611 containerd[1589]: time="2024-10-09T01:04:55.682571865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:04:55.682994 containerd[1589]: time="2024-10-09T01:04:55.682833426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:55.683228 containerd[1589]: time="2024-10-09T01:04:55.683109868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:55.747665 containerd[1589]: time="2024-10-09T01:04:55.747618659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8w6fh,Uid:962372e4-80c6-4e39-87f0-b400601741aa,Namespace:calico-system,Attempt:1,} returns sandbox id \"5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e\"" Oct 9 01:04:55.927369 systemd-networkd[1255]: cali71c582822fb: Gained IPv6LL Oct 9 01:04:56.283623 containerd[1589]: time="2024-10-09T01:04:56.283127400Z" level=info msg="StopPodSandbox for \"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\"" Oct 9 01:04:56.403618 containerd[1589]: 2024-10-09 01:04:56.354 [INFO][4669] k8s.go 608: Cleaning up netns ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Oct 9 01:04:56.403618 containerd[1589]: 2024-10-09 01:04:56.354 [INFO][4669] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" iface="eth0" netns="/var/run/netns/cni-bbf09dd4-bf9a-aaec-2292-f163cbc806be" Oct 9 01:04:56.403618 containerd[1589]: 2024-10-09 01:04:56.355 [INFO][4669] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" iface="eth0" netns="/var/run/netns/cni-bbf09dd4-bf9a-aaec-2292-f163cbc806be" Oct 9 01:04:56.403618 containerd[1589]: 2024-10-09 01:04:56.355 [INFO][4669] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" iface="eth0" netns="/var/run/netns/cni-bbf09dd4-bf9a-aaec-2292-f163cbc806be" Oct 9 01:04:56.403618 containerd[1589]: 2024-10-09 01:04:56.355 [INFO][4669] k8s.go 615: Releasing IP address(es) ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Oct 9 01:04:56.403618 containerd[1589]: 2024-10-09 01:04:56.355 [INFO][4669] utils.go 188: Calico CNI releasing IP address ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Oct 9 01:04:56.403618 containerd[1589]: 2024-10-09 01:04:56.384 [INFO][4675] ipam_plugin.go 417: Releasing address using handleID ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" HandleID="k8s-pod-network.f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0" Oct 9 01:04:56.403618 containerd[1589]: 2024-10-09 01:04:56.385 [INFO][4675] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:04:56.403618 containerd[1589]: 2024-10-09 01:04:56.385 [INFO][4675] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:04:56.403618 containerd[1589]: 2024-10-09 01:04:56.398 [WARNING][4675] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" HandleID="k8s-pod-network.f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0" Oct 9 01:04:56.403618 containerd[1589]: 2024-10-09 01:04:56.398 [INFO][4675] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" HandleID="k8s-pod-network.f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0" Oct 9 01:04:56.403618 containerd[1589]: 2024-10-09 01:04:56.399 [INFO][4675] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:04:56.403618 containerd[1589]: 2024-10-09 01:04:56.401 [INFO][4669] k8s.go 621: Teardown processing complete. ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Oct 9 01:04:56.406160 containerd[1589]: time="2024-10-09T01:04:56.406110063Z" level=info msg="TearDown network for sandbox \"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\" successfully" Oct 9 01:04:56.406160 containerd[1589]: time="2024-10-09T01:04:56.406152583Z" level=info msg="StopPodSandbox for \"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\" returns successfully" Oct 9 01:04:56.407532 containerd[1589]: time="2024-10-09T01:04:56.407188749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9ssv4,Uid:50252bd4-6374-4684-812d-2492dbd150ac,Namespace:kube-system,Attempt:1,}" Oct 9 01:04:56.407919 systemd[1]: run-netns-cni\x2dbbf09dd4\x2dbf9a\x2daaec\x2d2292\x2df163cbc806be.mount: Deactivated successfully. Oct 9 01:04:56.603066 systemd-networkd[1255]: cali3a3310afbe8: Link UP Oct 9 01:04:56.604139 systemd-networkd[1255]: cali3a3310afbe8: Gained carrier Oct 9 01:04:56.621602 containerd[1589]: 2024-10-09 01:04:56.471 [INFO][4689] utils.go 100: File /var/lib/calico/mtu does not exist Oct 9 01:04:56.621602 containerd[1589]: 2024-10-09 01:04:56.492 [INFO][4689] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0 coredns-76f75df574- kube-system 50252bd4-6374-4684-812d-2492dbd150ac 739 0 2024-10-09 01:04:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4116-0-0-5-47b5cb1617 coredns-76f75df574-9ssv4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3a3310afbe8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b" Namespace="kube-system" Pod="coredns-76f75df574-9ssv4" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-" Oct 9 01:04:56.621602 containerd[1589]: 2024-10-09 01:04:56.492 [INFO][4689] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b" Namespace="kube-system" Pod="coredns-76f75df574-9ssv4" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0" Oct 9 01:04:56.621602 containerd[1589]: 2024-10-09 01:04:56.541 [INFO][4706] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b" 
HandleID="k8s-pod-network.1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0" Oct 9 01:04:56.621602 containerd[1589]: 2024-10-09 01:04:56.563 [INFO][4706] ipam_plugin.go 270: Auto assigning IP ContainerID="1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b" HandleID="k8s-pod-network.1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001f9780), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4116-0-0-5-47b5cb1617", "pod":"coredns-76f75df574-9ssv4", "timestamp":"2024-10-09 01:04:56.541655514 +0000 UTC"}, Hostname:"ci-4116-0-0-5-47b5cb1617", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:04:56.621602 containerd[1589]: 2024-10-09 01:04:56.563 [INFO][4706] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:04:56.621602 containerd[1589]: 2024-10-09 01:04:56.563 [INFO][4706] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:04:56.621602 containerd[1589]: 2024-10-09 01:04:56.563 [INFO][4706] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-5-47b5cb1617' Oct 9 01:04:56.621602 containerd[1589]: 2024-10-09 01:04:56.566 [INFO][4706] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:56.621602 containerd[1589]: 2024-10-09 01:04:56.572 [INFO][4706] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:56.621602 containerd[1589]: 2024-10-09 01:04:56.576 [INFO][4706] ipam.go 489: Trying affinity for 192.168.22.0/26 host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:56.621602 containerd[1589]: 2024-10-09 01:04:56.579 [INFO][4706] ipam.go 155: Attempting to load block cidr=192.168.22.0/26 host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:56.621602 containerd[1589]: 2024-10-09 01:04:56.581 [INFO][4706] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:56.621602 containerd[1589]: 2024-10-09 01:04:56.581 [INFO][4706] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:56.621602 containerd[1589]: 2024-10-09 01:04:56.583 [INFO][4706] ipam.go 1685: Creating new handle: k8s-pod-network.1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b Oct 9 01:04:56.621602 containerd[1589]: 2024-10-09 01:04:56.589 [INFO][4706] ipam.go 1203: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:56.621602 containerd[1589]: 2024-10-09 01:04:56.595 [INFO][4706] ipam.go 1216: Successfully claimed IPs: [192.168.22.4/26] block=192.168.22.0/26 handle="k8s-pod-network.1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:56.621602 containerd[1589]: 2024-10-09 01:04:56.596 [INFO][4706] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.22.4/26] 
handle="k8s-pod-network.1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:04:56.621602 containerd[1589]: 2024-10-09 01:04:56.596 [INFO][4706] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:04:56.621602 containerd[1589]: 2024-10-09 01:04:56.596 [INFO][4706] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.22.4/26] IPv6=[] ContainerID="1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b" HandleID="k8s-pod-network.1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0" Oct 9 01:04:56.622537 containerd[1589]: 2024-10-09 01:04:56.598 [INFO][4689] k8s.go 386: Populated endpoint ContainerID="1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b" Namespace="kube-system" Pod="coredns-76f75df574-9ssv4" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"50252bd4-6374-4684-812d-2492dbd150ac", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 4, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-5-47b5cb1617", ContainerID:"", Pod:"coredns-76f75df574-9ssv4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a3310afbe8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:04:56.622537 containerd[1589]: 2024-10-09 01:04:56.598 [INFO][4689] k8s.go 387: Calico CNI using IPs: [192.168.22.4/32] ContainerID="1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b" Namespace="kube-system" Pod="coredns-76f75df574-9ssv4" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0" Oct 9 01:04:56.622537 containerd[1589]: 2024-10-09 01:04:56.598 [INFO][4689] dataplane_linux.go 68: Setting the host side veth name to cali3a3310afbe8 ContainerID="1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b" Namespace="kube-system" Pod="coredns-76f75df574-9ssv4" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0" Oct 9 01:04:56.622537 containerd[1589]: 2024-10-09 01:04:56.603 [INFO][4689] 
dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b" Namespace="kube-system" Pod="coredns-76f75df574-9ssv4" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0" Oct 9 01:04:56.622537 containerd[1589]: 2024-10-09 01:04:56.603 [INFO][4689] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b" Namespace="kube-system" Pod="coredns-76f75df574-9ssv4" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"50252bd4-6374-4684-812d-2492dbd150ac", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 4, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-5-47b5cb1617", ContainerID:"1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b", Pod:"coredns-76f75df574-9ssv4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a3310afbe8", MAC:"9a:c3:fc:4b:53:e5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:04:56.622537 containerd[1589]: 2024-10-09 01:04:56.619 [INFO][4689] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b" Namespace="kube-system" Pod="coredns-76f75df574-9ssv4" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0" Oct 9 01:04:56.630179 systemd-networkd[1255]: cali9fa114acc1a: Gained IPv6LL Oct 9 01:04:56.676597 containerd[1589]: time="2024-10-09T01:04:56.676207320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:04:56.676597 containerd[1589]: time="2024-10-09T01:04:56.676546442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:04:56.677175 containerd[1589]: time="2024-10-09T01:04:56.677016765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:56.677326 containerd[1589]: time="2024-10-09T01:04:56.677268406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:04:56.729058 containerd[1589]: time="2024-10-09T01:04:56.728623443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9ssv4,Uid:50252bd4-6374-4684-812d-2492dbd150ac,Namespace:kube-system,Attempt:1,} returns sandbox id \"1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b\"" Oct 9 01:04:56.732408 containerd[1589]: time="2024-10-09T01:04:56.732368503Z" level=info msg="CreateContainer within sandbox \"1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:04:56.756795 containerd[1589]: time="2024-10-09T01:04:56.756669394Z" level=info msg="CreateContainer within sandbox \"1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6231ed09be1c0cafb25594ff1a414f37c312fef7f1b9dfe998d4cb219f99c39d\"" Oct 9 01:04:56.758226 containerd[1589]: time="2024-10-09T01:04:56.758184283Z" level=info msg="StartContainer for \"6231ed09be1c0cafb25594ff1a414f37c312fef7f1b9dfe998d4cb219f99c39d\"" Oct 9 01:04:56.828111 containerd[1589]: time="2024-10-09T01:04:56.828071060Z" level=info msg="StartContainer for \"6231ed09be1c0cafb25594ff1a414f37c312fef7f1b9dfe998d4cb219f99c39d\" returns successfully" Oct 9 01:04:57.334854 systemd-networkd[1255]: cali836f73fee49: Gained IPv6LL Oct 9 01:04:57.390326 containerd[1589]: time="2024-10-09T01:04:57.390254514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:57.392282 containerd[1589]: time="2024-10-09T01:04:57.392220485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Oct 9 01:04:57.393292 containerd[1589]: time="2024-10-09T01:04:57.393244850Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:57.397655 containerd[1589]: time="2024-10-09T01:04:57.397588753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:57.399981 containerd[1589]: time="2024-10-09T01:04:57.398974761Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 2.516713377s" Oct 9 01:04:57.399981 containerd[1589]: time="2024-10-09T01:04:57.399025041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Oct 9 01:04:57.427613 containerd[1589]: time="2024-10-09T01:04:57.427570034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 01:04:57.437233 containerd[1589]: 
time="2024-10-09T01:04:57.437186445Z" level=info msg="CreateContainer within sandbox \"1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 01:04:57.447284 containerd[1589]: time="2024-10-09T01:04:57.447115538Z" level=info msg="CreateContainer within sandbox \"1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3f6dcb7b58cc1aa86d82f93cfb62d791f9c4b338e5f984fab70739226442ebab\"" Oct 9 01:04:57.450430 containerd[1589]: time="2024-10-09T01:04:57.449126269Z" level=info msg="StartContainer for \"3f6dcb7b58cc1aa86d82f93cfb62d791f9c4b338e5f984fab70739226442ebab\"" Oct 9 01:04:57.511171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1019672231.mount: Deactivated successfully. Oct 9 01:04:57.550607 containerd[1589]: time="2024-10-09T01:04:57.550561251Z" level=info msg="StartContainer for \"3f6dcb7b58cc1aa86d82f93cfb62d791f9c4b338e5f984fab70739226442ebab\" returns successfully" Oct 9 01:04:57.578307 kubelet[3058]: I1009 01:04:57.578191 3058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-9ssv4" podStartSLOduration=33.578051958 podStartE2EDuration="33.578051958s" podCreationTimestamp="2024-10-09 01:04:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:04:57.576990513 +0000 UTC m=+46.447618428" watchObservedRunningTime="2024-10-09 01:04:57.578051958 +0000 UTC m=+46.448679793" Oct 9 01:04:58.302340 kubelet[3058]: I1009 01:04:58.302279 3058 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:04:58.358211 systemd-networkd[1255]: cali3a3310afbe8: Gained IPv6LL Oct 9 01:04:58.628560 kubelet[3058]: I1009 01:04:58.628317 3058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-77fc9fbf75-p4vjf" podStartSLOduration=25.105365855 podStartE2EDuration="27.628168545s" podCreationTimestamp="2024-10-09 01:04:31 +0000 UTC" firstStartedPulling="2024-10-09 01:04:54.878356482 +0000 UTC m=+43.748984317" lastFinishedPulling="2024-10-09 01:04:57.401159172 +0000 UTC m=+46.271787007" observedRunningTime="2024-10-09 01:04:58.571226923 +0000 UTC m=+47.441854798" watchObservedRunningTime="2024-10-09 01:04:58.628168545 +0000 UTC m=+47.498796380" Oct 9 01:04:59.025606 containerd[1589]: time="2024-10-09T01:04:59.025147329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:59.026962 containerd[1589]: time="2024-10-09T01:04:59.026650737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Oct 9 01:04:59.028534 containerd[1589]: time="2024-10-09T01:04:59.028433506Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:59.032467 containerd[1589]: time="2024-10-09T01:04:59.032397927Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:04:59.034800 containerd[1589]: time="2024-10-09T01:04:59.033133811Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 1.605521857s" Oct 9 01:04:59.034800 containerd[1589]: time="2024-10-09T01:04:59.033170611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Oct 9 01:04:59.037553 containerd[1589]: time="2024-10-09T01:04:59.037519114Z" level=info msg="CreateContainer within sandbox \"5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 01:04:59.055546 containerd[1589]: time="2024-10-09T01:04:59.055504488Z" level=info msg="CreateContainer within sandbox \"5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bc5fba3d461077d40c9245ff86cc7cd079c1461a81404775608780f0e643e4a5\"" Oct 9 01:04:59.056598 containerd[1589]: time="2024-10-09T01:04:59.056559254Z" level=info msg="StartContainer for \"bc5fba3d461077d40c9245ff86cc7cd079c1461a81404775608780f0e643e4a5\"" Oct 9 01:04:59.177622 containerd[1589]: time="2024-10-09T01:04:59.177391409Z" level=info msg="StartContainer for \"bc5fba3d461077d40c9245ff86cc7cd079c1461a81404775608780f0e643e4a5\" returns successfully" Oct 9 01:04:59.179728 containerd[1589]: time="2024-10-09T01:04:59.179640821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 01:04:59.456969 kernel: bpftool[4989]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 01:04:59.678917 systemd-networkd[1255]: vxlan.calico: Link UP Oct 9 01:04:59.679048 systemd-networkd[1255]: vxlan.calico: Gained carrier Oct 9 01:05:00.940768 containerd[1589]: time="2024-10-09T01:05:00.940588715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:00.942213 containerd[1589]: time="2024-10-09T01:05:00.942019082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Oct 9 01:05:00.943602 containerd[1589]: time="2024-10-09T01:05:00.943241248Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:00.946686 containerd[1589]: time="2024-10-09T01:05:00.946645666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:00.948324 containerd[1589]: time="2024-10-09T01:05:00.948274155Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 1.768598454s" Oct 9 01:05:00.948473 containerd[1589]: time="2024-10-09T01:05:00.948450795Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Oct 9 01:05:00.953011 containerd[1589]: time="2024-10-09T01:05:00.952967419Z" level=info msg="CreateContainer within sandbox \"5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 9 01:05:00.971430 containerd[1589]: time="2024-10-09T01:05:00.971370635Z" level=info msg="CreateContainer within sandbox \"5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f7c836862bad0fef5eaf2087137be43f8855cbf8673d340169d071f0e0167e8f\"" Oct 9 01:05:00.977025 containerd[1589]: time="2024-10-09T01:05:00.975899179Z" level=info msg="StartContainer for \"f7c836862bad0fef5eaf2087137be43f8855cbf8673d340169d071f0e0167e8f\"" Oct 9 01:05:01.048752 containerd[1589]: time="2024-10-09T01:05:01.048707356Z" level=info msg="StartContainer for \"f7c836862bad0fef5eaf2087137be43f8855cbf8673d340169d071f0e0167e8f\" returns successfully" Oct 9 01:05:01.447558 kubelet[3058]: I1009 01:05:01.447351 3058 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 9 01:05:01.447558 kubelet[3058]: I1009 01:05:01.447404 3058 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 9 01:05:01.495758 systemd-networkd[1255]: vxlan.calico: Gained IPv6LL Oct 9 01:05:11.281131 containerd[1589]: time="2024-10-09T01:05:11.281084846Z" level=info msg="StopPodSandbox for \"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\"" Oct 9 01:05:11.422466 containerd[1589]: 2024-10-09 01:05:11.340 [WARNING][5141] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7aafddbc-ac54-42e6-b889-af97eef6990a", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 4, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-5-47b5cb1617", ContainerID:"7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39", Pod:"coredns-76f75df574-9p5pp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71c582822fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:05:11.422466 containerd[1589]: 2024-10-09 01:05:11.340 [INFO][5141] k8s.go 608: Cleaning up netns ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Oct 9 01:05:11.422466 containerd[1589]: 2024-10-09 01:05:11.340 [INFO][5141] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" iface="eth0" netns="" Oct 9 01:05:11.422466 containerd[1589]: 2024-10-09 01:05:11.340 [INFO][5141] k8s.go 615: Releasing IP address(es) ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Oct 9 01:05:11.422466 containerd[1589]: 2024-10-09 01:05:11.340 [INFO][5141] utils.go 188: Calico CNI releasing IP address ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Oct 9 01:05:11.422466 containerd[1589]: 2024-10-09 01:05:11.403 [INFO][5149] ipam_plugin.go 417: Releasing address using handleID ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" HandleID="k8s-pod-network.691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0" Oct 9 01:05:11.422466 containerd[1589]: 2024-10-09 01:05:11.403 [INFO][5149] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:05:11.422466 containerd[1589]: 2024-10-09 01:05:11.403 [INFO][5149] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:05:11.422466 containerd[1589]: 2024-10-09 01:05:11.415 [WARNING][5149] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" HandleID="k8s-pod-network.691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0" Oct 9 01:05:11.422466 containerd[1589]: 2024-10-09 01:05:11.415 [INFO][5149] ipam_plugin.go 445: Releasing address using workloadID ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" HandleID="k8s-pod-network.691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0" Oct 9 01:05:11.422466 containerd[1589]: 2024-10-09 01:05:11.418 [INFO][5149] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:05:11.422466 containerd[1589]: 2024-10-09 01:05:11.420 [INFO][5141] k8s.go 621: Teardown processing complete. ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Oct 9 01:05:11.422466 containerd[1589]: time="2024-10-09T01:05:11.422172398Z" level=info msg="TearDown network for sandbox \"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\" successfully" Oct 9 01:05:11.422466 containerd[1589]: time="2024-10-09T01:05:11.422194598Z" level=info msg="StopPodSandbox for \"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\" returns successfully" Oct 9 01:05:11.424201 containerd[1589]: time="2024-10-09T01:05:11.422761161Z" level=info msg="RemovePodSandbox for \"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\"" Oct 9 01:05:11.424201 containerd[1589]: time="2024-10-09T01:05:11.422788841Z" level=info msg="Forcibly stopping sandbox \"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\"" Oct 9 01:05:11.521017 containerd[1589]: 2024-10-09 01:05:11.469 [WARNING][5168] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7aafddbc-ac54-42e6-b889-af97eef6990a", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 4, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-5-47b5cb1617", ContainerID:"7aac8eb95a3abda3f4b80d577be4ad239c37b3adf7034437f26a3f0094372c39", Pod:"coredns-76f75df574-9p5pp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71c582822fb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:05:11.521017 containerd[1589]: 2024-10-09 01:05:11.470 [INFO][5168] k8s.go 608: Cleaning up netns ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Oct 9 01:05:11.521017 containerd[1589]: 2024-10-09 01:05:11.470 [INFO][5168] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" iface="eth0" netns="" Oct 9 01:05:11.521017 containerd[1589]: 2024-10-09 01:05:11.470 [INFO][5168] k8s.go 615: Releasing IP address(es) ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Oct 9 01:05:11.521017 containerd[1589]: 2024-10-09 01:05:11.470 [INFO][5168] utils.go 188: Calico CNI releasing IP address ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Oct 9 01:05:11.521017 containerd[1589]: 2024-10-09 01:05:11.496 [INFO][5174] ipam_plugin.go 417: Releasing address using handleID ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" HandleID="k8s-pod-network.691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0" Oct 9 01:05:11.521017 containerd[1589]: 2024-10-09 01:05:11.497 [INFO][5174] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:05:11.521017 containerd[1589]: 2024-10-09 01:05:11.497 [INFO][5174] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:05:11.521017 containerd[1589]: 2024-10-09 01:05:11.513 [WARNING][5174] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" HandleID="k8s-pod-network.691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0" Oct 9 01:05:11.521017 containerd[1589]: 2024-10-09 01:05:11.514 [INFO][5174] ipam_plugin.go 445: Releasing address using workloadID ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" HandleID="k8s-pod-network.691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9p5pp-eth0" Oct 9 01:05:11.521017 containerd[1589]: 2024-10-09 01:05:11.516 [INFO][5174] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:05:11.521017 containerd[1589]: 2024-10-09 01:05:11.519 [INFO][5168] k8s.go 621: Teardown processing complete. ContainerID="691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795" Oct 9 01:05:11.521825 containerd[1589]: time="2024-10-09T01:05:11.521565191Z" level=info msg="TearDown network for sandbox \"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\" successfully" Oct 9 01:05:11.527642 containerd[1589]: time="2024-10-09T01:05:11.527601140Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 01:05:11.528125 containerd[1589]: time="2024-10-09T01:05:11.528003902Z" level=info msg="RemovePodSandbox \"691699a27e2d553b7f0670fa17f14a4e7cf30f87c815fedaf52f1923f508f795\" returns successfully" Oct 9 01:05:11.530575 containerd[1589]: time="2024-10-09T01:05:11.530545674Z" level=info msg="StopPodSandbox for \"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\"" Oct 9 01:05:11.542224 systemd[1]: run-containerd-runc-k8s.io-3f6dcb7b58cc1aa86d82f93cfb62d791f9c4b338e5f984fab70739226442ebab-runc.fNllO0.mount: Deactivated successfully. Oct 9 01:05:11.648976 containerd[1589]: 2024-10-09 01:05:11.587 [WARNING][5208] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"50252bd4-6374-4684-812d-2492dbd150ac", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 4, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-5-47b5cb1617", ContainerID:"1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b", Pod:"coredns-76f75df574-9ssv4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a3310afbe8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:05:11.648976 containerd[1589]: 2024-10-09 01:05:11.588 [INFO][5208] k8s.go 608: Cleaning up netns ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Oct 9 01:05:11.648976 containerd[1589]: 2024-10-09 01:05:11.588 [INFO][5208] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" iface="eth0" netns="" Oct 9 01:05:11.648976 containerd[1589]: 2024-10-09 01:05:11.588 [INFO][5208] k8s.go 615: Releasing IP address(es) ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Oct 9 01:05:11.648976 containerd[1589]: 2024-10-09 01:05:11.588 [INFO][5208] utils.go 188: Calico CNI releasing IP address ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Oct 9 01:05:11.648976 containerd[1589]: 2024-10-09 01:05:11.627 [INFO][5217] ipam_plugin.go 417: Releasing address using handleID ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" HandleID="k8s-pod-network.f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0" Oct 9 01:05:11.648976 containerd[1589]: 2024-10-09 01:05:11.627 [INFO][5217] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:05:11.648976 containerd[1589]: 2024-10-09 01:05:11.627 [INFO][5217] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:05:11.648976 containerd[1589]: 2024-10-09 01:05:11.638 [WARNING][5217] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" HandleID="k8s-pod-network.f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0" Oct 9 01:05:11.648976 containerd[1589]: 2024-10-09 01:05:11.638 [INFO][5217] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" HandleID="k8s-pod-network.f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0" Oct 9 01:05:11.648976 containerd[1589]: 2024-10-09 01:05:11.641 [INFO][5217] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:05:11.648976 containerd[1589]: 2024-10-09 01:05:11.645 [INFO][5208] k8s.go 621: Teardown processing complete. ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Oct 9 01:05:11.650432 containerd[1589]: time="2024-10-09T01:05:11.649053919Z" level=info msg="TearDown network for sandbox \"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\" successfully" Oct 9 01:05:11.650432 containerd[1589]: time="2024-10-09T01:05:11.649088919Z" level=info msg="StopPodSandbox for \"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\" returns successfully" Oct 9 01:05:11.667956 containerd[1589]: time="2024-10-09T01:05:11.649828522Z" level=info msg="RemovePodSandbox for \"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\"" Oct 9 01:05:11.667956 containerd[1589]: time="2024-10-09T01:05:11.667890288Z" level=info msg="Forcibly stopping sandbox \"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\"" Oct 9 01:05:11.751491 containerd[1589]: 2024-10-09 01:05:11.709 [WARNING][5235] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"50252bd4-6374-4684-812d-2492dbd150ac", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 4, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-5-47b5cb1617", ContainerID:"1fca956cdc8263d89458e25f5318d3866c33c1bef54ae1e7cce1d35564d2fe5b", Pod:"coredns-76f75df574-9ssv4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.22.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a3310afbe8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:05:11.751491 containerd[1589]: 2024-10-09 01:05:11.709 [INFO][5235] k8s.go 608: Cleaning up netns ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Oct 9 01:05:11.751491 containerd[1589]: 2024-10-09 01:05:11.709 [INFO][5235] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" iface="eth0" netns="" Oct 9 01:05:11.751491 containerd[1589]: 2024-10-09 01:05:11.709 [INFO][5235] k8s.go 615: Releasing IP address(es) ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Oct 9 01:05:11.751491 containerd[1589]: 2024-10-09 01:05:11.709 [INFO][5235] utils.go 188: Calico CNI releasing IP address ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Oct 9 01:05:11.751491 containerd[1589]: 2024-10-09 01:05:11.734 [INFO][5241] ipam_plugin.go 417: Releasing address using handleID ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" HandleID="k8s-pod-network.f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0" Oct 9 01:05:11.751491 containerd[1589]: 2024-10-09 01:05:11.734 [INFO][5241] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:05:11.751491 containerd[1589]: 2024-10-09 01:05:11.734 [INFO][5241] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:05:11.751491 containerd[1589]: 2024-10-09 01:05:11.743 [WARNING][5241] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" HandleID="k8s-pod-network.f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0" Oct 9 01:05:11.751491 containerd[1589]: 2024-10-09 01:05:11.744 [INFO][5241] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" HandleID="k8s-pod-network.f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Workload="ci--4116--0--0--5--47b5cb1617-k8s-coredns--76f75df574--9ssv4-eth0" Oct 9 01:05:11.751491 containerd[1589]: 2024-10-09 01:05:11.747 [INFO][5241] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:05:11.751491 containerd[1589]: 2024-10-09 01:05:11.749 [INFO][5235] k8s.go 621: Teardown processing complete. ContainerID="f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc" Oct 9 01:05:11.753223 containerd[1589]: time="2024-10-09T01:05:11.752080969Z" level=info msg="TearDown network for sandbox \"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\" successfully" Oct 9 01:05:11.759896 containerd[1589]: time="2024-10-09T01:05:11.759844126Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 01:05:11.760143 containerd[1589]: time="2024-10-09T01:05:11.760121368Z" level=info msg="RemovePodSandbox \"f68cfd58e8060a76d6316166a894ab13cc94b797df4b37fecd33ce8aebedd9cc\" returns successfully" Oct 9 01:05:11.760722 containerd[1589]: time="2024-10-09T01:05:11.760694410Z" level=info msg="StopPodSandbox for \"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\"" Oct 9 01:05:11.845522 containerd[1589]: 2024-10-09 01:05:11.802 [WARNING][5259] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"962372e4-80c6-4e39-87f0-b400601741aa", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-5-47b5cb1617", ContainerID:"5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e", Pod:"csi-node-driver-8w6fh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.22.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali836f73fee49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:05:11.845522 containerd[1589]: 2024-10-09 01:05:11.803 [INFO][5259] k8s.go 608: Cleaning up netns ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Oct 9 01:05:11.845522 containerd[1589]: 2024-10-09 01:05:11.803 [INFO][5259] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" iface="eth0" netns="" Oct 9 01:05:11.845522 containerd[1589]: 2024-10-09 01:05:11.803 [INFO][5259] k8s.go 615: Releasing IP address(es) ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Oct 9 01:05:11.845522 containerd[1589]: 2024-10-09 01:05:11.803 [INFO][5259] utils.go 188: Calico CNI releasing IP address ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Oct 9 01:05:11.845522 containerd[1589]: 2024-10-09 01:05:11.822 [INFO][5265] ipam_plugin.go 417: Releasing address using handleID ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" HandleID="k8s-pod-network.fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Workload="ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0" Oct 9 01:05:11.845522 containerd[1589]: 2024-10-09 01:05:11.822 [INFO][5265] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:05:11.845522 containerd[1589]: 2024-10-09 01:05:11.822 [INFO][5265] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:05:11.845522 containerd[1589]: 2024-10-09 01:05:11.837 [WARNING][5265] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" HandleID="k8s-pod-network.fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Workload="ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0" Oct 9 01:05:11.845522 containerd[1589]: 2024-10-09 01:05:11.837 [INFO][5265] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" HandleID="k8s-pod-network.fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Workload="ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0" Oct 9 01:05:11.845522 containerd[1589]: 2024-10-09 01:05:11.841 [INFO][5265] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:05:11.845522 containerd[1589]: 2024-10-09 01:05:11.844 [INFO][5259] k8s.go 621: Teardown processing complete. ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Oct 9 01:05:11.846076 containerd[1589]: time="2024-10-09T01:05:11.845562735Z" level=info msg="TearDown network for sandbox \"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\" successfully" Oct 9 01:05:11.846076 containerd[1589]: time="2024-10-09T01:05:11.845585695Z" level=info msg="StopPodSandbox for \"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\" returns successfully" Oct 9 01:05:11.846398 containerd[1589]: time="2024-10-09T01:05:11.846378459Z" level=info msg="RemovePodSandbox for \"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\"" Oct 9 01:05:11.846442 containerd[1589]: time="2024-10-09T01:05:11.846413979Z" level=info msg="Forcibly stopping sandbox \"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\"" Oct 9 01:05:11.926759 containerd[1589]: 2024-10-09 01:05:11.886 [WARNING][5283] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"962372e4-80c6-4e39-87f0-b400601741aa", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-5-47b5cb1617", ContainerID:"5744e2fa9165c19b6cfb8bc8c1a1f0739c9255ccde2776f8fc3b45711516480e", Pod:"csi-node-driver-8w6fh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.22.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali836f73fee49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:05:11.926759 containerd[1589]: 2024-10-09 01:05:11.887 [INFO][5283] k8s.go 608: Cleaning up netns ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Oct 9 01:05:11.926759 containerd[1589]: 2024-10-09 01:05:11.887 [INFO][5283] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" iface="eth0" netns="" Oct 9 01:05:11.926759 containerd[1589]: 2024-10-09 01:05:11.887 [INFO][5283] k8s.go 615: Releasing IP address(es) ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Oct 9 01:05:11.926759 containerd[1589]: 2024-10-09 01:05:11.887 [INFO][5283] utils.go 188: Calico CNI releasing IP address ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Oct 9 01:05:11.926759 containerd[1589]: 2024-10-09 01:05:11.908 [INFO][5289] ipam_plugin.go 417: Releasing address using handleID ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" HandleID="k8s-pod-network.fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Workload="ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0" Oct 9 01:05:11.926759 containerd[1589]: 2024-10-09 01:05:11.908 [INFO][5289] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:05:11.926759 containerd[1589]: 2024-10-09 01:05:11.909 [INFO][5289] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:05:11.926759 containerd[1589]: 2024-10-09 01:05:11.920 [WARNING][5289] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" HandleID="k8s-pod-network.fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Workload="ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0" Oct 9 01:05:11.926759 containerd[1589]: 2024-10-09 01:05:11.920 [INFO][5289] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" HandleID="k8s-pod-network.fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Workload="ci--4116--0--0--5--47b5cb1617-k8s-csi--node--driver--8w6fh-eth0" Oct 9 01:05:11.926759 containerd[1589]: 2024-10-09 01:05:11.922 [INFO][5289] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:05:11.926759 containerd[1589]: 2024-10-09 01:05:11.924 [INFO][5283] k8s.go 621: Teardown processing complete. ContainerID="fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc" Oct 9 01:05:11.927371 containerd[1589]: time="2024-10-09T01:05:11.926806962Z" level=info msg="TearDown network for sandbox \"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\" successfully" Oct 9 01:05:11.930960 containerd[1589]: time="2024-10-09T01:05:11.930900941Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 01:05:11.931357 containerd[1589]: time="2024-10-09T01:05:11.930988142Z" level=info msg="RemovePodSandbox \"fdb4a023342fb3abaf0a562cff6e947460e85a644da50b5abcc5c7074a196dbc\" returns successfully" Oct 9 01:05:11.931559 containerd[1589]: time="2024-10-09T01:05:11.931536824Z" level=info msg="StopPodSandbox for \"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\"" Oct 9 01:05:12.008991 containerd[1589]: 2024-10-09 01:05:11.969 [WARNING][5307] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0", GenerateName:"calico-kube-controllers-77fc9fbf75-", Namespace:"calico-system", SelfLink:"", UID:"bc7f5ff4-df0c-49c4-ac5a-fd523d493fc3", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77fc9fbf75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-5-47b5cb1617", ContainerID:"1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e", Pod:"calico-kube-controllers-77fc9fbf75-p4vjf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9fa114acc1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:05:12.008991 containerd[1589]: 2024-10-09 01:05:11.970 [INFO][5307] k8s.go 608: Cleaning up netns ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Oct 9 01:05:12.008991 containerd[1589]: 2024-10-09 01:05:11.970 [INFO][5307] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" iface="eth0" netns="" Oct 9 01:05:12.008991 containerd[1589]: 2024-10-09 01:05:11.970 [INFO][5307] k8s.go 615: Releasing IP address(es) ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Oct 9 01:05:12.008991 containerd[1589]: 2024-10-09 01:05:11.970 [INFO][5307] utils.go 188: Calico CNI releasing IP address ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Oct 9 01:05:12.008991 containerd[1589]: 2024-10-09 01:05:11.991 [INFO][5313] ipam_plugin.go 417: Releasing address using handleID ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" HandleID="k8s-pod-network.2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Workload="ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0" Oct 9 01:05:12.008991 containerd[1589]: 2024-10-09 01:05:11.991 [INFO][5313] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:05:12.008991 containerd[1589]: 2024-10-09 01:05:11.992 [INFO][5313] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:05:12.008991 containerd[1589]: 2024-10-09 01:05:12.003 [WARNING][5313] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" HandleID="k8s-pod-network.2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Workload="ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0" Oct 9 01:05:12.008991 containerd[1589]: 2024-10-09 01:05:12.003 [INFO][5313] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" HandleID="k8s-pod-network.2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Workload="ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0" Oct 9 01:05:12.008991 containerd[1589]: 2024-10-09 01:05:12.005 [INFO][5313] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:05:12.008991 containerd[1589]: 2024-10-09 01:05:12.007 [INFO][5307] k8s.go 621: Teardown processing complete. ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Oct 9 01:05:12.009995 containerd[1589]: time="2024-10-09T01:05:12.009021273Z" level=info msg="TearDown network for sandbox \"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\" successfully" Oct 9 01:05:12.009995 containerd[1589]: time="2024-10-09T01:05:12.009044153Z" level=info msg="StopPodSandbox for \"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\" returns successfully" Oct 9 01:05:12.009995 containerd[1589]: time="2024-10-09T01:05:12.009583716Z" level=info msg="RemovePodSandbox for \"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\"" Oct 9 01:05:12.009995 containerd[1589]: time="2024-10-09T01:05:12.009609916Z" level=info msg="Forcibly stopping sandbox \"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\"" Oct 9 01:05:12.078795 containerd[1589]: 2024-10-09 01:05:12.044 [WARNING][5331] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0", GenerateName:"calico-kube-controllers-77fc9fbf75-", Namespace:"calico-system", SelfLink:"", UID:"bc7f5ff4-df0c-49c4-ac5a-fd523d493fc3", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 4, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77fc9fbf75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-5-47b5cb1617", ContainerID:"1a12c3f886f6278c78e21525983db8b75336dfa446a25cf2c0fc0ba29b56500e", Pod:"calico-kube-controllers-77fc9fbf75-p4vjf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.22.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9fa114acc1a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:05:12.078795 containerd[1589]: 2024-10-09 01:05:12.044 [INFO][5331] k8s.go 608: Cleaning up netns ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Oct 9 01:05:12.078795 containerd[1589]: 2024-10-09 01:05:12.044 [INFO][5331] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" iface="eth0" netns="" Oct 9 01:05:12.078795 containerd[1589]: 2024-10-09 01:05:12.045 [INFO][5331] k8s.go 615: Releasing IP address(es) ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Oct 9 01:05:12.078795 containerd[1589]: 2024-10-09 01:05:12.045 [INFO][5331] utils.go 188: Calico CNI releasing IP address ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Oct 9 01:05:12.078795 containerd[1589]: 2024-10-09 01:05:12.064 [INFO][5337] ipam_plugin.go 417: Releasing address using handleID ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" HandleID="k8s-pod-network.2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Workload="ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0" Oct 9 01:05:12.078795 containerd[1589]: 2024-10-09 01:05:12.064 [INFO][5337] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:05:12.078795 containerd[1589]: 2024-10-09 01:05:12.064 [INFO][5337] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:05:12.078795 containerd[1589]: 2024-10-09 01:05:12.074 [WARNING][5337] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" HandleID="k8s-pod-network.2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Workload="ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0" Oct 9 01:05:12.078795 containerd[1589]: 2024-10-09 01:05:12.074 [INFO][5337] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" HandleID="k8s-pod-network.2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Workload="ci--4116--0--0--5--47b5cb1617-k8s-calico--kube--controllers--77fc9fbf75--p4vjf-eth0" Oct 9 01:05:12.078795 containerd[1589]: 2024-10-09 01:05:12.076 [INFO][5337] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:05:12.078795 containerd[1589]: 2024-10-09 01:05:12.077 [INFO][5331] k8s.go 621: Teardown processing complete. ContainerID="2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad" Oct 9 01:05:12.079433 containerd[1589]: time="2024-10-09T01:05:12.078830403Z" level=info msg="TearDown network for sandbox \"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\" successfully" Oct 9 01:05:12.086313 containerd[1589]: time="2024-10-09T01:05:12.086236438Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 01:05:12.086662 containerd[1589]: time="2024-10-09T01:05:12.086486719Z" level=info msg="RemovePodSandbox \"2cd452a161675ef561e586c6790e7c4a62460ebe9ee867554f42a78cf4758fad\" returns successfully" Oct 9 01:05:21.762128 kubelet[3058]: I1009 01:05:21.762051 3058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-8w6fh" podStartSLOduration=45.561724454 podStartE2EDuration="50.761950027s" podCreationTimestamp="2024-10-09 01:04:31 +0000 UTC" firstStartedPulling="2024-10-09 01:04:55.749332308 +0000 UTC m=+44.619960143" lastFinishedPulling="2024-10-09 01:05:00.949557841 +0000 UTC m=+49.820185716" observedRunningTime="2024-10-09 01:05:01.588891347 +0000 UTC m=+50.459519182" watchObservedRunningTime="2024-10-09 01:05:21.761950027 +0000 UTC m=+70.632577902" Oct 9 01:05:24.955612 kubelet[3058]: I1009 01:05:24.954252 3058 topology_manager.go:215] "Topology Admit Handler" podUID="016b2e01-7d93-49d6-ae2f-9f9a99b5bb4e" podNamespace="calico-apiserver" podName="calico-apiserver-7f7f68f8df-q8nfg" Oct 9 01:05:24.959970 kubelet[3058]: I1009 01:05:24.958044 3058 topology_manager.go:215] "Topology Admit Handler" podUID="0558a615-c757-4f16-b986-426af76bd164" podNamespace="calico-apiserver" podName="calico-apiserver-7f7f68f8df-fb5jk" Oct 9 01:05:25.102039 kubelet[3058]: I1009 01:05:25.101854 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c8gn\" (UniqueName: \"kubernetes.io/projected/0558a615-c757-4f16-b986-426af76bd164-kube-api-access-9c8gn\") pod \"calico-apiserver-7f7f68f8df-fb5jk\" (UID: \"0558a615-c757-4f16-b986-426af76bd164\") " pod="calico-apiserver/calico-apiserver-7f7f68f8df-fb5jk" Oct 9 01:05:25.102737 kubelet[3058]: I1009 01:05:25.102626 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0558a615-c757-4f16-b986-426af76bd164-calico-apiserver-certs\") pod 
\"calico-apiserver-7f7f68f8df-fb5jk\" (UID: \"0558a615-c757-4f16-b986-426af76bd164\") " pod="calico-apiserver/calico-apiserver-7f7f68f8df-fb5jk" Oct 9 01:05:25.103378 kubelet[3058]: I1009 01:05:25.103356 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-927s8\" (UniqueName: \"kubernetes.io/projected/016b2e01-7d93-49d6-ae2f-9f9a99b5bb4e-kube-api-access-927s8\") pod \"calico-apiserver-7f7f68f8df-q8nfg\" (UID: \"016b2e01-7d93-49d6-ae2f-9f9a99b5bb4e\") " pod="calico-apiserver/calico-apiserver-7f7f68f8df-q8nfg" Oct 9 01:05:25.103518 kubelet[3058]: I1009 01:05:25.103506 3058 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/016b2e01-7d93-49d6-ae2f-9f9a99b5bb4e-calico-apiserver-certs\") pod \"calico-apiserver-7f7f68f8df-q8nfg\" (UID: \"016b2e01-7d93-49d6-ae2f-9f9a99b5bb4e\") " pod="calico-apiserver/calico-apiserver-7f7f68f8df-q8nfg" Oct 9 01:05:25.205772 kubelet[3058]: E1009 01:05:25.204348 3058 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 01:05:25.205772 kubelet[3058]: E1009 01:05:25.204537 3058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/016b2e01-7d93-49d6-ae2f-9f9a99b5bb4e-calico-apiserver-certs podName:016b2e01-7d93-49d6-ae2f-9f9a99b5bb4e nodeName:}" failed. No retries permitted until 2024-10-09 01:05:25.704444503 +0000 UTC m=+74.575072338 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/016b2e01-7d93-49d6-ae2f-9f9a99b5bb4e-calico-apiserver-certs") pod "calico-apiserver-7f7f68f8df-q8nfg" (UID: "016b2e01-7d93-49d6-ae2f-9f9a99b5bb4e") : secret "calico-apiserver-certs" not found Oct 9 01:05:25.205772 kubelet[3058]: E1009 01:05:25.204688 3058 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 9 01:05:25.205772 kubelet[3058]: E1009 01:05:25.204894 3058 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0558a615-c757-4f16-b986-426af76bd164-calico-apiserver-certs podName:0558a615-c757-4f16-b986-426af76bd164 nodeName:}" failed. No retries permitted until 2024-10-09 01:05:25.704853745 +0000 UTC m=+74.575481580 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/0558a615-c757-4f16-b986-426af76bd164-calico-apiserver-certs") pod "calico-apiserver-7f7f68f8df-fb5jk" (UID: "0558a615-c757-4f16-b986-426af76bd164") : secret "calico-apiserver-certs" not found Oct 9 01:05:25.865088 containerd[1589]: time="2024-10-09T01:05:25.864549537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7f68f8df-fb5jk,Uid:0558a615-c757-4f16-b986-426af76bd164,Namespace:calico-apiserver,Attempt:0,}" Oct 9 01:05:25.869382 containerd[1589]: time="2024-10-09T01:05:25.869295038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7f68f8df-q8nfg,Uid:016b2e01-7d93-49d6-ae2f-9f9a99b5bb4e,Namespace:calico-apiserver,Attempt:0,}" Oct 9 01:05:26.060434 systemd-networkd[1255]: calic4231c0467a: Link UP Oct 9 01:05:26.060734 systemd-networkd[1255]: calic4231c0467a: Gained carrier Oct 9 01:05:26.090066 containerd[1589]: 2024-10-09 01:05:25.949 [INFO][5391] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--fb5jk-eth0 calico-apiserver-7f7f68f8df- calico-apiserver 0558a615-c757-4f16-b986-426af76bd164 887 0 2024-10-09 01:05:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f7f68f8df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4116-0-0-5-47b5cb1617 calico-apiserver-7f7f68f8df-fb5jk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic4231c0467a [] []}} ContainerID="8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121" Namespace="calico-apiserver" Pod="calico-apiserver-7f7f68f8df-fb5jk" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--fb5jk-" Oct 9 01:05:26.090066 containerd[1589]: 2024-10-09 01:05:25.949 [INFO][5391] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121" Namespace="calico-apiserver" Pod="calico-apiserver-7f7f68f8df-fb5jk" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--fb5jk-eth0" Oct 9 01:05:26.090066 containerd[1589]: 2024-10-09 01:05:25.993 [INFO][5422] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121" HandleID="k8s-pod-network.8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121" Workload="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--fb5jk-eth0" Oct 9 01:05:26.090066 containerd[1589]: 2024-10-09 01:05:26.013 [INFO][5422] ipam_plugin.go 270: Auto assigning IP ContainerID="8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121" HandleID="k8s-pod-network.8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121" Workload="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--fb5jk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002631f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4116-0-0-5-47b5cb1617", "pod":"calico-apiserver-7f7f68f8df-fb5jk", "timestamp":"2024-10-09 01:05:25.99322149 +0000 UTC"}, Hostname:"ci-4116-0-0-5-47b5cb1617", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:05:26.090066 containerd[1589]: 2024-10-09 01:05:26.013 [INFO][5422] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:05:26.090066 containerd[1589]: 2024-10-09 01:05:26.013 [INFO][5422] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:05:26.090066 containerd[1589]: 2024-10-09 01:05:26.014 [INFO][5422] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-5-47b5cb1617' Oct 9 01:05:26.090066 containerd[1589]: 2024-10-09 01:05:26.016 [INFO][5422] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:05:26.090066 containerd[1589]: 2024-10-09 01:05:26.024 [INFO][5422] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:05:26.090066 containerd[1589]: 2024-10-09 01:05:26.030 [INFO][5422] ipam.go 489: Trying affinity for 192.168.22.0/26 host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:05:26.090066 containerd[1589]: 2024-10-09 01:05:26.032 [INFO][5422] ipam.go 155: Attempting to load block cidr=192.168.22.0/26 host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:05:26.090066 containerd[1589]: 2024-10-09 01:05:26.037 [INFO][5422] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:05:26.090066 containerd[1589]: 2024-10-09 01:05:26.037 [INFO][5422] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:05:26.090066 containerd[1589]: 2024-10-09 01:05:26.039 [INFO][5422] ipam.go 1685: Creating new handle: k8s-pod-network.8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121 Oct 9 01:05:26.090066 containerd[1589]: 2024-10-09 01:05:26.044 [INFO][5422] ipam.go 1203: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:05:26.090066 containerd[1589]: 2024-10-09 01:05:26.051 [INFO][5422] ipam.go 1216: Successfully claimed IPs: [192.168.22.5/26] block=192.168.22.0/26 handle="k8s-pod-network.8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:05:26.090066 containerd[1589]: 2024-10-09 01:05:26.051 [INFO][5422] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.22.5/26] handle="k8s-pod-network.8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:05:26.090066 containerd[1589]: 2024-10-09 01:05:26.051 [INFO][5422] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 01:05:26.090066 containerd[1589]: 2024-10-09 01:05:26.052 [INFO][5422] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.22.5/26] IPv6=[] ContainerID="8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121" HandleID="k8s-pod-network.8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121" Workload="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--fb5jk-eth0" Oct 9 01:05:26.090885 containerd[1589]: 2024-10-09 01:05:26.055 [INFO][5391] k8s.go 386: Populated endpoint ContainerID="8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121" Namespace="calico-apiserver" Pod="calico-apiserver-7f7f68f8df-fb5jk" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--fb5jk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--fb5jk-eth0", GenerateName:"calico-apiserver-7f7f68f8df-", Namespace:"calico-apiserver", SelfLink:"", UID:"0558a615-c757-4f16-b986-426af76bd164", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 5, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7f68f8df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-5-47b5cb1617", ContainerID:"", Pod:"calico-apiserver-7f7f68f8df-fb5jk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic4231c0467a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:05:26.090885 containerd[1589]: 2024-10-09 01:05:26.055 [INFO][5391] k8s.go 387: Calico CNI using IPs: [192.168.22.5/32] ContainerID="8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121" Namespace="calico-apiserver" Pod="calico-apiserver-7f7f68f8df-fb5jk" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--fb5jk-eth0" Oct 9 01:05:26.090885 containerd[1589]: 2024-10-09 01:05:26.055 [INFO][5391] dataplane_linux.go 68: Setting the host side veth name to calic4231c0467a ContainerID="8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121" Namespace="calico-apiserver" Pod="calico-apiserver-7f7f68f8df-fb5jk" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--fb5jk-eth0" Oct 9 01:05:26.090885 containerd[1589]: 2024-10-09 01:05:26.059 [INFO][5391] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121" Namespace="calico-apiserver" Pod="calico-apiserver-7f7f68f8df-fb5jk" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--fb5jk-eth0" Oct 9 01:05:26.090885 containerd[1589]: 2024-10-09 01:05:26.060 [INFO][5391] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121" Namespace="calico-apiserver" Pod="calico-apiserver-7f7f68f8df-fb5jk" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--fb5jk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--fb5jk-eth0", GenerateName:"calico-apiserver-7f7f68f8df-", Namespace:"calico-apiserver", SelfLink:"", UID:"0558a615-c757-4f16-b986-426af76bd164", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 5, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7f68f8df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-5-47b5cb1617", ContainerID:"8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121", Pod:"calico-apiserver-7f7f68f8df-fb5jk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic4231c0467a", MAC:"46:ac:0b:21:72:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:05:26.090885 containerd[1589]: 2024-10-09 01:05:26.077 [INFO][5391] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121" Namespace="calico-apiserver" Pod="calico-apiserver-7f7f68f8df-fb5jk" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--fb5jk-eth0" Oct 9 01:05:26.135082 containerd[1589]: time="2024-10-09T01:05:26.134670414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:05:26.135082 containerd[1589]: time="2024-10-09T01:05:26.134808894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:05:26.135406 containerd[1589]: time="2024-10-09T01:05:26.135341056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:05:26.138723 containerd[1589]: time="2024-10-09T01:05:26.138636270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:05:26.144709 systemd-networkd[1255]: calica0077e1826: Link UP Oct 9 01:05:26.145583 systemd-networkd[1255]: calica0077e1826: Gained carrier Oct 9 01:05:26.173723 containerd[1589]: 2024-10-09 01:05:25.947 [INFO][5397] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--q8nfg-eth0 calico-apiserver-7f7f68f8df- calico-apiserver 016b2e01-7d93-49d6-ae2f-9f9a99b5bb4e 889 0 2024-10-09 01:05:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f7f68f8df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4116-0-0-5-47b5cb1617 calico-apiserver-7f7f68f8df-q8nfg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calica0077e1826 [] []}} ContainerID="e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b" Namespace="calico-apiserver" Pod="calico-apiserver-7f7f68f8df-q8nfg" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--q8nfg-" Oct 9 01:05:26.173723 containerd[1589]: 2024-10-09 01:05:25.947 [INFO][5397] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b" Namespace="calico-apiserver" Pod="calico-apiserver-7f7f68f8df-q8nfg" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--q8nfg-eth0" Oct 9 01:05:26.173723 containerd[1589]: 2024-10-09 01:05:25.993 [INFO][5418] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b" HandleID="k8s-pod-network.e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b" Workload="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--q8nfg-eth0" Oct 9 01:05:26.173723 containerd[1589]: 2024-10-09 01:05:26.019 [INFO][5418] ipam_plugin.go 270: Auto assigning IP ContainerID="e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b" HandleID="k8s-pod-network.e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b" Workload="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--q8nfg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c270), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4116-0-0-5-47b5cb1617", "pod":"calico-apiserver-7f7f68f8df-q8nfg", "timestamp":"2024-10-09 01:05:25.993601612 +0000 UTC"}, Hostname:"ci-4116-0-0-5-47b5cb1617", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:05:26.173723 containerd[1589]: 2024-10-09 01:05:26.019 [INFO][5418] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:05:26.173723 containerd[1589]: 2024-10-09 01:05:26.052 [INFO][5418] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:05:26.173723 containerd[1589]: 2024-10-09 01:05:26.053 [INFO][5418] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-5-47b5cb1617' Oct 9 01:05:26.173723 containerd[1589]: 2024-10-09 01:05:26.057 [INFO][5418] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:05:26.173723 containerd[1589]: 2024-10-09 01:05:26.080 [INFO][5418] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:05:26.173723 containerd[1589]: 2024-10-09 01:05:26.091 [INFO][5418] ipam.go 489: Trying affinity for 192.168.22.0/26 host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:05:26.173723 containerd[1589]: 2024-10-09 01:05:26.093 [INFO][5418] ipam.go 155: Attempting to load block cidr=192.168.22.0/26 host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:05:26.173723 containerd[1589]: 2024-10-09 01:05:26.096 [INFO][5418] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.22.0/26 host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:05:26.173723 containerd[1589]: 2024-10-09 01:05:26.097 [INFO][5418] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.22.0/26 handle="k8s-pod-network.e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:05:26.173723 containerd[1589]: 2024-10-09 01:05:26.099 [INFO][5418] ipam.go 1685: Creating new handle: k8s-pod-network.e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b Oct 9 01:05:26.173723 containerd[1589]: 2024-10-09 01:05:26.107 [INFO][5418] ipam.go 1203: Writing block in order to claim IPs block=192.168.22.0/26 handle="k8s-pod-network.e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:05:26.173723 containerd[1589]: 2024-10-09 01:05:26.134 [INFO][5418] ipam.go 1216: Successfully claimed IPs: [192.168.22.6/26] block=192.168.22.0/26 handle="k8s-pod-network.e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:05:26.173723 containerd[1589]: 2024-10-09 01:05:26.134 [INFO][5418] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.22.6/26] handle="k8s-pod-network.e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b" host="ci-4116-0-0-5-47b5cb1617" Oct 9 01:05:26.173723 containerd[1589]: 2024-10-09 01:05:26.134 [INFO][5418] ipam_plugin.go 379: Released host-wide IPAM lock. 
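In the Calico IPAM entries above, both calico-apiserver pods are assigned addresses out of the node's affine block 192.168.22.0/26 (192.168.22.5 for -fb5jk, 192.168.22.6 for -q8nfg), and the host-wide IPAM lock serialises the two CNI ADD requests: [5418] asks for the lock at 01:05:26.019 but only acquires it at 01:05:26.052, immediately after [5422] releases it. A small sketch with Python's standard ipaddress module, using only values taken from the log, confirms the block size and that both claimed addresses fall inside it:

    import ipaddress

    # Block and addresses as reported by ipam.go in the entries above.
    block = ipaddress.ip_network("192.168.22.0/26")
    claimed = [ipaddress.ip_address("192.168.22.5"), ipaddress.ip_address("192.168.22.6")]

    print(block.num_addresses)                 # 64 addresses in a /26 block
    print(all(ip in block for ip in claimed))  # True: both claimed IPs lie in the affine block
    print([str(ip) for ip in claimed])         # ['192.168.22.5', '192.168.22.6']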
Oct 9 01:05:26.173723 containerd[1589]: 2024-10-09 01:05:26.134 [INFO][5418] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.22.6/26] IPv6=[] ContainerID="e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b" HandleID="k8s-pod-network.e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b" Workload="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--q8nfg-eth0" Oct 9 01:05:26.174267 containerd[1589]: 2024-10-09 01:05:26.139 [INFO][5397] k8s.go 386: Populated endpoint ContainerID="e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b" Namespace="calico-apiserver" Pod="calico-apiserver-7f7f68f8df-q8nfg" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--q8nfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--q8nfg-eth0", GenerateName:"calico-apiserver-7f7f68f8df-", Namespace:"calico-apiserver", SelfLink:"", UID:"016b2e01-7d93-49d6-ae2f-9f9a99b5bb4e", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 5, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7f68f8df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-5-47b5cb1617", ContainerID:"", Pod:"calico-apiserver-7f7f68f8df-q8nfg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica0077e1826", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:05:26.174267 containerd[1589]: 2024-10-09 01:05:26.139 [INFO][5397] k8s.go 387: Calico CNI using IPs: [192.168.22.6/32] ContainerID="e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b" Namespace="calico-apiserver" Pod="calico-apiserver-7f7f68f8df-q8nfg" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--q8nfg-eth0" Oct 9 01:05:26.174267 containerd[1589]: 2024-10-09 01:05:26.140 [INFO][5397] dataplane_linux.go 68: Setting the host side veth name to calica0077e1826 ContainerID="e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b" Namespace="calico-apiserver" Pod="calico-apiserver-7f7f68f8df-q8nfg" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--q8nfg-eth0" Oct 9 01:05:26.174267 containerd[1589]: 2024-10-09 01:05:26.148 [INFO][5397] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b" Namespace="calico-apiserver" Pod="calico-apiserver-7f7f68f8df-q8nfg" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--q8nfg-eth0" Oct 9 01:05:26.174267 containerd[1589]: 2024-10-09 01:05:26.149 [INFO][5397] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b" Namespace="calico-apiserver" Pod="calico-apiserver-7f7f68f8df-q8nfg" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--q8nfg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--q8nfg-eth0", GenerateName:"calico-apiserver-7f7f68f8df-", Namespace:"calico-apiserver", SelfLink:"", UID:"016b2e01-7d93-49d6-ae2f-9f9a99b5bb4e", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 5, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f7f68f8df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-5-47b5cb1617", ContainerID:"e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b", Pod:"calico-apiserver-7f7f68f8df-q8nfg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.22.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica0077e1826", MAC:"26:49:f3:c8:e4:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:05:26.174267 containerd[1589]: 2024-10-09 01:05:26.169 [INFO][5397] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b" Namespace="calico-apiserver" Pod="calico-apiserver-7f7f68f8df-q8nfg" WorkloadEndpoint="ci--4116--0--0--5--47b5cb1617-k8s-calico--apiserver--7f7f68f8df--q8nfg-eth0" Oct 9 01:05:26.215785 containerd[1589]: time="2024-10-09T01:05:26.215742839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7f68f8df-fb5jk,Uid:0558a615-c757-4f16-b986-426af76bd164,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121\"" Oct 9 01:05:26.217818 containerd[1589]: time="2024-10-09T01:05:26.217751368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 9 01:05:26.232112 containerd[1589]: time="2024-10-09T01:05:26.230417022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:05:26.232112 containerd[1589]: time="2024-10-09T01:05:26.230477182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:05:26.232112 containerd[1589]: time="2024-10-09T01:05:26.230489382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:05:26.232112 containerd[1589]: time="2024-10-09T01:05:26.230576103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:05:26.288991 containerd[1589]: time="2024-10-09T01:05:26.288953751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f7f68f8df-q8nfg,Uid:016b2e01-7d93-49d6-ae2f-9f9a99b5bb4e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b\"" Oct 9 01:05:27.286208 systemd-networkd[1255]: calica0077e1826: Gained IPv6LL Oct 9 01:05:27.926293 systemd-networkd[1255]: calic4231c0467a: Gained IPv6LL Oct 9 01:05:29.011605 containerd[1589]: time="2024-10-09T01:05:29.010576830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=37849884" Oct 9 01:05:29.014562 containerd[1589]: time="2024-10-09T01:05:29.014516167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 2.796730559s" Oct 9 01:05:29.014562 containerd[1589]: time="2024-10-09T01:05:29.014563727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\"" Oct 9 01:05:29.016203 containerd[1589]: time="2024-10-09T01:05:29.016164533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 9 01:05:29.019418 containerd[1589]: time="2024-10-09T01:05:29.019373627Z" level=info msg="CreateContainer within sandbox \"8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 9 01:05:29.030514 containerd[1589]: time="2024-10-09T01:05:29.028877747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:29.033208 containerd[1589]: time="2024-10-09T01:05:29.031760879Z" level=info msg="ImageCreate event name:\"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:29.036030 containerd[1589]: time="2024-10-09T01:05:29.035996616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:29.039098 containerd[1589]: time="2024-10-09T01:05:29.039060189Z" level=info msg="CreateContainer within sandbox \"8d970887d548e6aa434057109a0e55ddf58d17a5688279bab67a1b074aeac121\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9fe9cc7f48d6530b14b15e8e43685266d1db6f4c071d044ff4dcfb71fd0e2635\"" Oct 9 01:05:29.042035 containerd[1589]: time="2024-10-09T01:05:29.041003197Z" level=info msg="StartContainer for \"9fe9cc7f48d6530b14b15e8e43685266d1db6f4c071d044ff4dcfb71fd0e2635\"" Oct 9 01:05:29.147759 containerd[1589]: time="2024-10-09T01:05:29.145673835Z" level=info msg="StartContainer for \"9fe9cc7f48d6530b14b15e8e43685266d1db6f4c071d044ff4dcfb71fd0e2635\" returns successfully" Oct 9 01:05:29.404634 containerd[1589]: time="2024-10-09T01:05:29.404269195Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:05:29.406268 containerd[1589]: time="2024-10-09T01:05:29.405999922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Oct 9 01:05:29.409096 containerd[1589]: time="2024-10-09T01:05:29.409032495Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 392.037118ms" Oct 9 01:05:29.409096 containerd[1589]: time="2024-10-09T01:05:29.409085175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\"" Oct 9 01:05:29.413772 containerd[1589]: time="2024-10-09T01:05:29.413460753Z" level=info msg="CreateContainer within sandbox \"e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 9 01:05:29.430847 containerd[1589]: time="2024-10-09T01:05:29.430747866Z" level=info msg="CreateContainer within sandbox \"e9ddffd778a809c2c6ec34918541de4c15b03044c842615181d8ed475bdca72b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"242160aebf4c695e56d549422547b9619299af50b66422a3b47a9e6316d9165f\"" Oct 9 01:05:29.433771 containerd[1589]: time="2024-10-09T01:05:29.433157756Z" level=info msg="StartContainer for \"242160aebf4c695e56d549422547b9619299af50b66422a3b47a9e6316d9165f\"" Oct 9 01:05:29.500083 containerd[1589]: time="2024-10-09T01:05:29.500037195Z" level=info msg="StartContainer for \"242160aebf4c695e56d549422547b9619299af50b66422a3b47a9e6316d9165f\" returns successfully" Oct 9 01:05:29.694313 kubelet[3058]: I1009 01:05:29.693742 3058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f7f68f8df-q8nfg" podStartSLOduration=2.573710621 podStartE2EDuration="5.692459239s" podCreationTimestamp="2024-10-09 01:05:24 +0000 UTC" firstStartedPulling="2024-10-09 01:05:26.290724239 +0000 UTC m=+75.161352074" lastFinishedPulling="2024-10-09 01:05:29.409472817 +0000 UTC m=+78.280100692" observedRunningTime="2024-10-09 01:05:29.691572715 +0000 UTC m=+78.562200510" watchObservedRunningTime="2024-10-09 01:05:29.692459239 +0000 UTC m=+78.563087074" Oct 9 01:05:30.163629 kubelet[3058]: I1009 01:05:30.162631 3058 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f7f68f8df-fb5jk" podStartSLOduration=3.365131757 podStartE2EDuration="6.162586199s" podCreationTimestamp="2024-10-09 01:05:24 +0000 UTC" firstStartedPulling="2024-10-09 01:05:26.217416166 +0000 UTC m=+75.088044001" lastFinishedPulling="2024-10-09 01:05:29.014870608 +0000 UTC m=+77.885498443" observedRunningTime="2024-10-09 01:05:29.705185732 +0000 UTC m=+78.575813567" watchObservedRunningTime="2024-10-09 01:05:30.162586199 +0000 UTC m=+79.033214034" Oct 9 01:05:41.544313 systemd[1]: run-containerd-runc-k8s.io-3f6dcb7b58cc1aa86d82f93cfb62d791f9c4b338e5f984fab70739226442ebab-runc.XR7jQU.mount: Deactivated successfully. Oct 9 01:05:53.329894 systemd[1]: run-containerd-runc-k8s.io-3f6dcb7b58cc1aa86d82f93cfb62d791f9c4b338e5f984fab70739226442ebab-runc.Dwzbuz.mount: Deactivated successfully. 
Oct 9 01:08:11.538357 systemd[1]: run-containerd-runc-k8s.io-3f6dcb7b58cc1aa86d82f93cfb62d791f9c4b338e5f984fab70739226442ebab-runc.cj5Cj5.mount: Deactivated successfully. Oct 9 01:09:03.057215 systemd[1]: Started sshd@8-78.46.183.65:22-147.75.109.163:59514.service - OpenSSH per-connection server daemon (147.75.109.163:59514). Oct 9 01:09:04.064959 sshd[6183]: Accepted publickey for core from 147.75.109.163 port 59514 ssh2: RSA SHA256:sKto1mMUpX/NfXJQLv5H1pSd9gRoKrp8Hbo6SFyKe0U Oct 9 01:09:04.067863 sshd[6183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:04.081943 systemd-logind[1565]: New session 8 of user core. Oct 9 01:09:04.085522 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 01:09:04.876340 sshd[6183]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:04.881681 systemd[1]: sshd@8-78.46.183.65:22-147.75.109.163:59514.service: Deactivated successfully. Oct 9 01:09:04.888493 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 01:09:04.890909 systemd-logind[1565]: Session 8 logged out. Waiting for processes to exit. Oct 9 01:09:04.892870 systemd-logind[1565]: Removed session 8. Oct 9 01:09:10.046347 systemd[1]: Started sshd@9-78.46.183.65:22-147.75.109.163:41758.service - OpenSSH per-connection server daemon (147.75.109.163:41758). Oct 9 01:09:11.043616 sshd[6198]: Accepted publickey for core from 147.75.109.163 port 41758 ssh2: RSA SHA256:sKto1mMUpX/NfXJQLv5H1pSd9gRoKrp8Hbo6SFyKe0U Oct 9 01:09:11.045766 sshd[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:11.051993 systemd-logind[1565]: New session 9 of user core. Oct 9 01:09:11.061249 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 01:09:11.533088 systemd[1]: run-containerd-runc-k8s.io-3f6dcb7b58cc1aa86d82f93cfb62d791f9c4b338e5f984fab70739226442ebab-runc.7tZJws.mount: Deactivated successfully. Oct 9 01:09:11.812281 sshd[6198]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:11.819018 systemd[1]: sshd@9-78.46.183.65:22-147.75.109.163:41758.service: Deactivated successfully. Oct 9 01:09:11.822667 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 01:09:11.823950 systemd-logind[1565]: Session 9 logged out. Waiting for processes to exit. Oct 9 01:09:11.824999 systemd-logind[1565]: Removed session 9. Oct 9 01:09:16.982263 systemd[1]: Started sshd@10-78.46.183.65:22-147.75.109.163:41768.service - OpenSSH per-connection server daemon (147.75.109.163:41768). Oct 9 01:09:17.982863 sshd[6241]: Accepted publickey for core from 147.75.109.163 port 41768 ssh2: RSA SHA256:sKto1mMUpX/NfXJQLv5H1pSd9gRoKrp8Hbo6SFyKe0U Oct 9 01:09:17.984863 sshd[6241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:17.991076 systemd-logind[1565]: New session 10 of user core. Oct 9 01:09:17.998318 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 01:09:18.763012 sshd[6241]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:18.768366 systemd[1]: sshd@10-78.46.183.65:22-147.75.109.163:41768.service: Deactivated successfully. Oct 9 01:09:18.774204 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 01:09:18.777279 systemd-logind[1565]: Session 10 logged out. Waiting for processes to exit. Oct 9 01:09:18.779911 systemd-logind[1565]: Removed session 10. Oct 9 01:09:18.930093 systemd[1]: Started sshd@11-78.46.183.65:22-147.75.109.163:53702.service - OpenSSH per-connection server daemon (147.75.109.163:53702). 
Oct 9 01:09:19.925763 sshd[6257]: Accepted publickey for core from 147.75.109.163 port 53702 ssh2: RSA SHA256:sKto1mMUpX/NfXJQLv5H1pSd9gRoKrp8Hbo6SFyKe0U Oct 9 01:09:19.928242 sshd[6257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:19.935501 systemd-logind[1565]: New session 11 of user core. Oct 9 01:09:19.939434 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 01:09:20.774954 sshd[6257]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:20.780001 systemd[1]: sshd@11-78.46.183.65:22-147.75.109.163:53702.service: Deactivated successfully. Oct 9 01:09:20.787150 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 01:09:20.788973 systemd-logind[1565]: Session 11 logged out. Waiting for processes to exit. Oct 9 01:09:20.789879 systemd-logind[1565]: Removed session 11. Oct 9 01:09:20.945376 systemd[1]: Started sshd@12-78.46.183.65:22-147.75.109.163:53712.service - OpenSSH per-connection server daemon (147.75.109.163:53712). Oct 9 01:09:21.944184 sshd[6275]: Accepted publickey for core from 147.75.109.163 port 53712 ssh2: RSA SHA256:sKto1mMUpX/NfXJQLv5H1pSd9gRoKrp8Hbo6SFyKe0U Oct 9 01:09:21.946698 sshd[6275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:21.954732 systemd-logind[1565]: New session 12 of user core. Oct 9 01:09:21.959537 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 01:09:22.711919 sshd[6275]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:22.724989 systemd[1]: sshd@12-78.46.183.65:22-147.75.109.163:53712.service: Deactivated successfully. Oct 9 01:09:22.728135 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 01:09:22.729650 systemd-logind[1565]: Session 12 logged out. Waiting for processes to exit. Oct 9 01:09:22.731699 systemd-logind[1565]: Removed session 12. Oct 9 01:09:27.882808 systemd[1]: Started sshd@13-78.46.183.65:22-147.75.109.163:47342.service - OpenSSH per-connection server daemon (147.75.109.163:47342). Oct 9 01:09:28.884397 sshd[6316]: Accepted publickey for core from 147.75.109.163 port 47342 ssh2: RSA SHA256:sKto1mMUpX/NfXJQLv5H1pSd9gRoKrp8Hbo6SFyKe0U Oct 9 01:09:28.886803 sshd[6316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:28.901520 systemd-logind[1565]: New session 13 of user core. Oct 9 01:09:28.907012 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 01:09:29.682967 sshd[6316]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:29.689236 systemd[1]: sshd@13-78.46.183.65:22-147.75.109.163:47342.service: Deactivated successfully. Oct 9 01:09:29.693279 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 01:09:29.694718 systemd-logind[1565]: Session 13 logged out. Waiting for processes to exit. Oct 9 01:09:29.696640 systemd-logind[1565]: Removed session 13. Oct 9 01:09:29.854754 systemd[1]: Started sshd@14-78.46.183.65:22-147.75.109.163:47358.service - OpenSSH per-connection server daemon (147.75.109.163:47358). Oct 9 01:09:30.852250 sshd[6330]: Accepted publickey for core from 147.75.109.163 port 47358 ssh2: RSA SHA256:sKto1mMUpX/NfXJQLv5H1pSd9gRoKrp8Hbo6SFyKe0U Oct 9 01:09:30.854124 sshd[6330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:30.860354 systemd-logind[1565]: New session 14 of user core. Oct 9 01:09:30.871427 systemd[1]: Started session-14.scope - Session 14 of User core. 
Oct 9 01:09:31.814117 sshd[6330]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:31.818507 systemd[1]: sshd@14-78.46.183.65:22-147.75.109.163:47358.service: Deactivated successfully. Oct 9 01:09:31.823669 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 01:09:31.826008 systemd-logind[1565]: Session 14 logged out. Waiting for processes to exit. Oct 9 01:09:31.828313 systemd-logind[1565]: Removed session 14. Oct 9 01:09:31.981236 systemd[1]: Started sshd@15-78.46.183.65:22-147.75.109.163:47364.service - OpenSSH per-connection server daemon (147.75.109.163:47364). Oct 9 01:09:32.993871 sshd[6343]: Accepted publickey for core from 147.75.109.163 port 47364 ssh2: RSA SHA256:sKto1mMUpX/NfXJQLv5H1pSd9gRoKrp8Hbo6SFyKe0U Oct 9 01:09:32.996028 sshd[6343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:33.001756 systemd-logind[1565]: New session 15 of user core. Oct 9 01:09:33.007358 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 9 01:09:35.768499 sshd[6343]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:35.777602 systemd[1]: sshd@15-78.46.183.65:22-147.75.109.163:47364.service: Deactivated successfully. Oct 9 01:09:35.780500 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 01:09:35.782031 systemd-logind[1565]: Session 15 logged out. Waiting for processes to exit. Oct 9 01:09:35.783767 systemd-logind[1565]: Removed session 15. Oct 9 01:09:35.935379 systemd[1]: Started sshd@16-78.46.183.65:22-147.75.109.163:47380.service - OpenSSH per-connection server daemon (147.75.109.163:47380). Oct 9 01:09:36.932081 sshd[6367]: Accepted publickey for core from 147.75.109.163 port 47380 ssh2: RSA SHA256:sKto1mMUpX/NfXJQLv5H1pSd9gRoKrp8Hbo6SFyKe0U Oct 9 01:09:36.934771 sshd[6367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:36.943471 systemd-logind[1565]: New session 16 of user core. Oct 9 01:09:36.951456 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 01:09:37.839306 sshd[6367]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:37.845895 systemd-logind[1565]: Session 16 logged out. Waiting for processes to exit. Oct 9 01:09:37.848028 systemd[1]: sshd@16-78.46.183.65:22-147.75.109.163:47380.service: Deactivated successfully. Oct 9 01:09:37.851813 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 01:09:37.853591 systemd-logind[1565]: Removed session 16. Oct 9 01:09:38.006311 systemd[1]: Started sshd@17-78.46.183.65:22-147.75.109.163:50894.service - OpenSSH per-connection server daemon (147.75.109.163:50894). Oct 9 01:09:38.993689 sshd[6379]: Accepted publickey for core from 147.75.109.163 port 50894 ssh2: RSA SHA256:sKto1mMUpX/NfXJQLv5H1pSd9gRoKrp8Hbo6SFyKe0U Oct 9 01:09:38.995744 sshd[6379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:39.002109 systemd-logind[1565]: New session 17 of user core. Oct 9 01:09:39.011288 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 01:09:39.777795 sshd[6379]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:39.785411 systemd[1]: sshd@17-78.46.183.65:22-147.75.109.163:50894.service: Deactivated successfully. Oct 9 01:09:39.792919 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 01:09:39.795369 systemd-logind[1565]: Session 17 logged out. Waiting for processes to exit. Oct 9 01:09:39.797207 systemd-logind[1565]: Removed session 17. 
Oct 9 01:09:44.947229 systemd[1]: Started sshd@18-78.46.183.65:22-147.75.109.163:50900.service - OpenSSH per-connection server daemon (147.75.109.163:50900). Oct 9 01:09:45.959426 sshd[6420]: Accepted publickey for core from 147.75.109.163 port 50900 ssh2: RSA SHA256:sKto1mMUpX/NfXJQLv5H1pSd9gRoKrp8Hbo6SFyKe0U Oct 9 01:09:45.961905 sshd[6420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:45.969322 systemd-logind[1565]: New session 18 of user core. Oct 9 01:09:45.975010 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 01:09:46.720096 sshd[6420]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:46.723895 systemd[1]: sshd@18-78.46.183.65:22-147.75.109.163:50900.service: Deactivated successfully. Oct 9 01:09:46.730380 systemd-logind[1565]: Session 18 logged out. Waiting for processes to exit. Oct 9 01:09:46.730702 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 01:09:46.732737 systemd-logind[1565]: Removed session 18. Oct 9 01:09:51.889357 systemd[1]: Started sshd@19-78.46.183.65:22-147.75.109.163:44256.service - OpenSSH per-connection server daemon (147.75.109.163:44256). Oct 9 01:09:52.902587 sshd[6468]: Accepted publickey for core from 147.75.109.163 port 44256 ssh2: RSA SHA256:sKto1mMUpX/NfXJQLv5H1pSd9gRoKrp8Hbo6SFyKe0U Oct 9 01:09:52.903680 sshd[6468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:52.910296 systemd-logind[1565]: New session 19 of user core. Oct 9 01:09:52.920336 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 01:09:53.305524 systemd[1]: run-containerd-runc-k8s.io-3f6dcb7b58cc1aa86d82f93cfb62d791f9c4b338e5f984fab70739226442ebab-runc.oGrwTs.mount: Deactivated successfully. Oct 9 01:09:53.663083 sshd[6468]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:53.670415 systemd[1]: sshd@19-78.46.183.65:22-147.75.109.163:44256.service: Deactivated successfully. Oct 9 01:09:53.672260 systemd-logind[1565]: Session 19 logged out. Waiting for processes to exit. Oct 9 01:09:53.675783 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 01:09:53.677660 systemd-logind[1565]: Removed session 19.
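The tail of the log above is the sshd / systemd-logind session lifecycle repeating for sessions 8 through 19 of user core: Started sshd@N-...service, Accepted publickey, New session N, then session closed, the service and scope deactivating, and Removed session N. A minimal sketch that pairs those open/close events and prints per-session durations (assuming the log is available as plain text, one entry per line, in the format shown here; journal.txt is a hypothetical path):

    import re
    from datetime import datetime

    OPEN  = re.compile(r"(\w+ +\d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) of user (\w+)\.")
    CLOSE = re.compile(r"(\w+ +\d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.")

    def ts(stamp: str) -> datetime:
        # e.g. "Oct 9 01:09:04.081943"; this short format carries no year.
        return datetime.strptime(stamp, "%b %d %H:%M:%S.%f").replace(year=2024)

    def session_durations(lines):
        opened = {}
        for line in lines:
            if m := OPEN.search(line):
                opened[m.group(2)] = (ts(m.group(1)), m.group(3))
            elif (m := CLOSE.search(line)) and m.group(2) in opened:
                start, user = opened.pop(m.group(2))
                yield m.group(2), user, (ts(m.group(1)) - start).total_seconds()

    # Against the entries above this reports, e.g., session 8 lasting roughly 0.8s:
    # "New session 8" at 01:09:04.081943, "Removed session 8" at 01:09:04.892870.
    with open("journal.txt") as fh:   # hypothetical export of this journal, one entry per line
        for sid, user, secs in session_durations(fh):
            print(f"session {sid} ({user}): {secs:.3f}s")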