Oct 8 19:45:28.928832 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 8 19:45:28.928856 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Tue Oct 8 18:22:02 -00 2024
Oct 8 19:45:28.928866 kernel: KASLR enabled
Oct 8 19:45:28.928871 kernel: efi: EFI v2.7 by EDK II
Oct 8 19:45:28.928877 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x133d4d698 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x13232ed18
Oct 8 19:45:28.928883 kernel: random: crng init done
Oct 8 19:45:28.928890 kernel: ACPI: Early table checksum verification disabled
Oct 8 19:45:28.928896 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS )
Oct 8 19:45:28.928902 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Oct 8 19:45:28.928908 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:28.928916 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:28.928922 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:28.928928 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:28.928934 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:28.928941 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:28.928949 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:28.928956 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:28.928962 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 19:45:28.928969 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 8 19:45:28.928975 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Oct 8 19:45:28.928981 kernel: NUMA: Failed to initialise from firmware
Oct 8 19:45:28.928988 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Oct 8 19:45:28.928994 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff]
Oct 8 19:45:28.929000 kernel: Zone ranges:
Oct 8 19:45:28.929007 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Oct 8 19:45:28.929013 kernel: DMA32 empty
Oct 8 19:45:28.929021 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Oct 8 19:45:28.929027 kernel: Movable zone start for each node
Oct 8 19:45:28.929033 kernel: Early memory node ranges
Oct 8 19:45:28.929039 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff]
Oct 8 19:45:28.929046 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff]
Oct 8 19:45:28.929052 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff]
Oct 8 19:45:28.929059 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff]
Oct 8 19:45:28.929065 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff]
Oct 8 19:45:28.929071 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Oct 8 19:45:28.929078 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Oct 8 19:45:28.929084 kernel: psci: probing for conduit method from ACPI.
Oct 8 19:45:28.929092 kernel: psci: PSCIv1.1 detected in firmware.
Oct 8 19:45:28.929098 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 8 19:45:28.929105 kernel: psci: Trusted OS migration not required
Oct 8 19:45:28.929114 kernel: psci: SMC Calling Convention v1.1
Oct 8 19:45:28.929121 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 8 19:45:28.929128 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Oct 8 19:45:28.929136 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Oct 8 19:45:28.929143 kernel: pcpu-alloc: [0] 0 [0] 1
Oct 8 19:45:28.929149 kernel: Detected PIPT I-cache on CPU0
Oct 8 19:45:28.929156 kernel: CPU features: detected: GIC system register CPU interface
Oct 8 19:45:28.929163 kernel: CPU features: detected: Hardware dirty bit management
Oct 8 19:45:28.929170 kernel: CPU features: detected: Spectre-v4
Oct 8 19:45:28.929177 kernel: CPU features: detected: Spectre-BHB
Oct 8 19:45:28.929183 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 8 19:45:28.929190 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 8 19:45:28.929197 kernel: CPU features: detected: ARM erratum 1418040
Oct 8 19:45:28.929204 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 8 19:45:28.929212 kernel: alternatives: applying boot alternatives
Oct 8 19:45:28.929220 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2
Oct 8 19:45:28.929227 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 19:45:28.929233 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 8 19:45:28.929240 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 19:45:28.929247 kernel: Fallback order for Node 0: 0
Oct 8 19:45:28.929253 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Oct 8 19:45:28.929260 kernel: Policy zone: Normal
Oct 8 19:45:28.929267 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 19:45:28.929273 kernel: software IO TLB: area num 2.
Oct 8 19:45:28.929280 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Oct 8 19:45:28.929317 kernel: Memory: 3881848K/4096000K available (10240K kernel code, 2184K rwdata, 8080K rodata, 39104K init, 897K bss, 214152K reserved, 0K cma-reserved)
Oct 8 19:45:28.929324 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 8 19:45:28.929331 kernel: trace event string verifier disabled
Oct 8 19:45:28.929337 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 19:45:28.929345 kernel: rcu: RCU event tracing is enabled.
Oct 8 19:45:28.929352 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 8 19:45:28.929359 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 19:45:28.929366 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 19:45:28.929373 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 19:45:28.929379 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 8 19:45:28.929386 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 8 19:45:28.929394 kernel: GICv3: 256 SPIs implemented
Oct 8 19:45:28.929401 kernel: GICv3: 0 Extended SPIs implemented
Oct 8 19:45:28.929408 kernel: Root IRQ handler: gic_handle_irq
Oct 8 19:45:28.929414 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 8 19:45:28.929421 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 8 19:45:28.929428 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 8 19:45:28.929434 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Oct 8 19:45:28.929441 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Oct 8 19:45:28.929448 kernel: GICv3: using LPI property table @0x00000001000e0000
Oct 8 19:45:28.929455 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Oct 8 19:45:28.929462 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 19:45:28.929470 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:45:28.929477 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 8 19:45:28.929483 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 8 19:45:28.929490 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 8 19:45:28.929497 kernel: Console: colour dummy device 80x25
Oct 8 19:45:28.929504 kernel: ACPI: Core revision 20230628
Oct 8 19:45:28.929511 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 8 19:45:28.929518 kernel: pid_max: default: 32768 minimum: 301
Oct 8 19:45:28.929525 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Oct 8 19:45:28.929532 kernel: SELinux: Initializing.
Oct 8 19:45:28.929540 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:45:28.929547 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:45:28.929554 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:45:28.929561 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:45:28.929568 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 19:45:28.929575 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 19:45:28.929582 kernel: Platform MSI: ITS@0x8080000 domain created
Oct 8 19:45:28.929588 kernel: PCI/MSI: ITS@0x8080000 domain created
Oct 8 19:45:28.929595 kernel: Remapping and enabling EFI services.
Oct 8 19:45:28.929604 kernel: smp: Bringing up secondary CPUs ...
Oct 8 19:45:28.929610 kernel: Detected PIPT I-cache on CPU1
Oct 8 19:45:28.929617 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 8 19:45:28.929624 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Oct 8 19:45:28.929631 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 19:45:28.929638 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 8 19:45:28.929645 kernel: smp: Brought up 1 node, 2 CPUs
Oct 8 19:45:28.929652 kernel: SMP: Total of 2 processors activated.
Oct 8 19:45:28.929659 kernel: CPU features: detected: 32-bit EL0 Support
Oct 8 19:45:28.929665 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 8 19:45:28.929674 kernel: CPU features: detected: Common not Private translations
Oct 8 19:45:28.929681 kernel: CPU features: detected: CRC32 instructions
Oct 8 19:45:28.929693 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 8 19:45:28.929702 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 8 19:45:28.929709 kernel: CPU features: detected: LSE atomic instructions
Oct 8 19:45:28.929716 kernel: CPU features: detected: Privileged Access Never
Oct 8 19:45:28.929724 kernel: CPU features: detected: RAS Extension Support
Oct 8 19:45:28.929731 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 8 19:45:28.929739 kernel: CPU: All CPU(s) started at EL1
Oct 8 19:45:28.929748 kernel: alternatives: applying system-wide alternatives
Oct 8 19:45:28.929755 kernel: devtmpfs: initialized
Oct 8 19:45:28.929763 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 19:45:28.929770 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 8 19:45:28.929778 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 19:45:28.929785 kernel: SMBIOS 3.0.0 present.
Oct 8 19:45:28.929792 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Oct 8 19:45:28.929801 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 19:45:28.929808 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 8 19:45:28.929816 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 8 19:45:28.929823 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 8 19:45:28.929830 kernel: audit: initializing netlink subsys (disabled)
Oct 8 19:45:28.929838 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Oct 8 19:45:28.929845 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 19:45:28.929852 kernel: cpuidle: using governor menu
Oct 8 19:45:28.929859 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 8 19:45:28.929868 kernel: ASID allocator initialised with 32768 entries
Oct 8 19:45:28.929876 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 19:45:28.929883 kernel: Serial: AMBA PL011 UART driver
Oct 8 19:45:28.929890 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 8 19:45:28.929898 kernel: Modules: 0 pages in range for non-PLT usage
Oct 8 19:45:28.929905 kernel: Modules: 509104 pages in range for PLT usage
Oct 8 19:45:28.929912 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 19:45:28.929920 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 19:45:28.929927 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 8 19:45:28.929935 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 8 19:45:28.929943 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 19:45:28.929950 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 19:45:28.929958 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 8 19:45:28.929965 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 8 19:45:28.929972 kernel: ACPI: Added _OSI(Module Device)
Oct 8 19:45:28.929979 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 19:45:28.929986 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 19:45:28.929993 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 19:45:28.930002 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 19:45:28.930009 kernel: ACPI: Interpreter enabled
Oct 8 19:45:28.930016 kernel: ACPI: Using GIC for interrupt routing
Oct 8 19:45:28.930023 kernel: ACPI: MCFG table detected, 1 entries
Oct 8 19:45:28.930031 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 8 19:45:28.930038 kernel: printk: console [ttyAMA0] enabled
Oct 8 19:45:28.930045 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 8 19:45:28.930189 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 19:45:28.930267 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 8 19:45:28.932213 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 8 19:45:28.932327 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 8 19:45:28.932398 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 8 19:45:28.932409 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 8 19:45:28.932417 kernel: PCI host bridge to bus 0000:00
Oct 8 19:45:28.932492 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 8 19:45:28.932555 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 8 19:45:28.932622 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 8 19:45:28.932682 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 8 19:45:28.932772 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Oct 8 19:45:28.932852 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Oct 8 19:45:28.932921 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Oct 8 19:45:28.932989 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Oct 8 19:45:28.933067 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Oct 8 19:45:28.933136 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Oct 8 19:45:28.933212 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Oct 8 19:45:28.933280 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Oct 8 19:45:28.933367 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Oct 8 19:45:28.933436 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Oct 8 19:45:28.933513 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Oct 8 19:45:28.933595 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Oct 8 19:45:28.933673 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Oct 8 19:45:28.933741 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Oct 8 19:45:28.933816 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Oct 8 19:45:28.933884 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Oct 8 19:45:28.936440 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Oct 8 19:45:28.936534 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Oct 8 19:45:28.936619 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Oct 8 19:45:28.936687 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Oct 8 19:45:28.936762 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Oct 8 19:45:28.936829 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Oct 8 19:45:28.936913 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Oct 8 19:45:28.936979 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207]
Oct 8 19:45:28.937058 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Oct 8 19:45:28.937128 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Oct 8 19:45:28.937196 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 19:45:28.937265 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Oct 8 19:45:28.938430 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Oct 8 19:45:28.938520 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Oct 8 19:45:28.938600 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Oct 8 19:45:28.938667 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Oct 8 19:45:28.938733 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Oct 8 19:45:28.938811 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Oct 8 19:45:28.938879 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Oct 8 19:45:28.938965 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Oct 8 19:45:28.939036 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Oct 8 19:45:28.939118 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Oct 8 19:45:28.939185 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Oct 8 19:45:28.939252 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Oct 8 19:45:28.939350 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Oct 8 19:45:28.939426 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Oct 8 19:45:28.939493 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Oct 8 19:45:28.939560 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Oct 8 19:45:28.939632 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Oct 8 19:45:28.941001 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Oct 8 19:45:28.941084 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Oct 8 19:45:28.941157 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Oct 8 19:45:28.941232 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Oct 8 19:45:28.943342 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Oct 8 19:45:28.943445 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Oct 8 19:45:28.943515 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Oct 8 19:45:28.943583 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Oct 8 19:45:28.943653 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Oct 8 19:45:28.943719 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Oct 8 19:45:28.943844 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Oct 8 19:45:28.943929 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Oct 8 19:45:28.943997 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Oct 8 19:45:28.944063 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Oct 8 19:45:28.944135 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Oct 8 19:45:28.944212 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Oct 8 19:45:28.944279 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Oct 8 19:45:28.944467 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Oct 8 19:45:28.944538 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Oct 8 19:45:28.944602 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Oct 8 19:45:28.944672 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Oct 8 19:45:28.944737 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Oct 8 19:45:28.944802 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Oct 8 19:45:28.944871 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Oct 8 19:45:28.944940 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Oct 8 19:45:28.945006 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Oct 8 19:45:28.945076 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Oct 8 19:45:28.945141 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Oct 8 19:45:28.945208 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Oct 8 19:45:28.945273 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Oct 8 19:45:28.946415 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Oct 8 19:45:28.946488 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Oct 8 19:45:28.946565 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Oct 8 19:45:28.946630 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Oct 8 19:45:28.946697 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Oct 8 19:45:28.946762 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Oct 8 19:45:28.946829 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Oct 8 19:45:28.946895 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Oct 8 19:45:28.946963 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Oct 8 19:45:28.947032 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Oct 8 19:45:28.947099 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Oct 8 19:45:28.947164 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Oct 8 19:45:28.947232 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Oct 8 19:45:28.947315 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Oct 8 19:45:28.947394 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Oct 8 19:45:28.947460 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Oct 8 19:45:28.947531 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Oct 8 19:45:28.947596 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Oct 8 19:45:28.947662 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Oct 8 19:45:28.947728 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Oct 8 19:45:28.947818 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Oct 8 19:45:28.947885 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Oct 8 19:45:28.947951 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Oct 8 19:45:28.948016 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Oct 8 19:45:28.948086 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Oct 8 19:45:28.948156 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Oct 8 19:45:28.948227 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Oct 8 19:45:28.950387 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Oct 8 19:45:28.950489 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Oct 8 19:45:28.950558 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Oct 8 19:45:28.950626 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Oct 8 19:45:28.950692 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Oct 8 19:45:28.950765 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Oct 8 19:45:28.950831 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Oct 8 19:45:28.950902 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Oct 8 19:45:28.950976 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Oct 8 19:45:28.951044 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 19:45:28.951110 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Oct 8 19:45:28.951176 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Oct 8 19:45:28.951260 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Oct 8 19:45:28.952423 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Oct 8 19:45:28.952505 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Oct 8 19:45:28.952579 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Oct 8 19:45:28.952648 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Oct 8 19:45:28.952720 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Oct 8 19:45:28.952784 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Oct 8 19:45:28.952848 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Oct 8 19:45:28.952920 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Oct 8 19:45:28.952987 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Oct 8 19:45:28.953061 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Oct 8 19:45:28.953129 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Oct 8 19:45:28.953197 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Oct 8 19:45:28.953265 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Oct 8 19:45:28.954708 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Oct 8 19:45:28.954801 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Oct 8 19:45:28.954922 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Oct 8 19:45:28.955006 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Oct 8 19:45:28.955074 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Oct 8 19:45:28.955152 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Oct 8 19:45:28.955223 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Oct 8 19:45:28.955342 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Oct 8 19:45:28.955413 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Oct 8 19:45:28.955479 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Oct 8 19:45:28.955554 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Oct 8 19:45:28.955626 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Oct 8 19:45:28.955696 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Oct 8 19:45:28.955819 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Oct 8 19:45:28.955891 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Oct 8 19:45:28.955966 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Oct 8 19:45:28.956041 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Oct 8 19:45:28.956112 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Oct 8 19:45:28.956180 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Oct 8 19:45:28.956248 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Oct 8 19:45:28.956343 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Oct 8 19:45:28.956413 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Oct 8 19:45:28.956477 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Oct 8 19:45:28.956549 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Oct 8 19:45:28.956615 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Oct 8 19:45:28.956679 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Oct 8 19:45:28.956745 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Oct 8 19:45:28.956813 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Oct 8 19:45:28.956878 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Oct 8 19:45:28.956943 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Oct 8 19:45:28.957009 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Oct 8 19:45:28.957080 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 8 19:45:28.957139 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 8 19:45:28.957198 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 8 19:45:28.957364 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Oct 8 19:45:28.957443 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Oct 8 19:45:28.957509 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Oct 8 19:45:28.957759 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Oct 8 19:45:28.958521 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Oct 8 19:45:28.958603 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Oct 8 19:45:28.958675 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Oct 8 19:45:28.958736 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Oct 8 19:45:28.958795 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Oct 8 19:45:28.958864 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Oct 8 19:45:28.958930 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Oct 8 19:45:28.958991 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Oct 8 19:45:28.959058 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Oct 8 19:45:28.959131 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Oct 8 19:45:28.959193 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Oct 8 19:45:28.959374 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Oct 8 19:45:28.959467 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Oct 8 19:45:28.959535 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Oct 8 19:45:28.959614 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Oct 8 19:45:28.959675 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Oct 8 19:45:28.959751 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Oct 8 19:45:28.959826 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Oct 8 19:45:28.959886 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Oct 8 19:45:28.959959 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Oct 8 19:45:28.960027 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Oct 8 19:45:28.960088 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Oct 8 19:45:28.960147 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Oct 8 19:45:28.960157 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 8 19:45:28.960168 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 8 19:45:28.960176 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 8 19:45:28.960184 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 8 19:45:28.960192 kernel: iommu: Default domain type: Translated
Oct 8 19:45:28.960200 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 8 19:45:28.960208 kernel: efivars: Registered efivars operations
Oct 8 19:45:28.960215 kernel: vgaarb: loaded
Oct 8 19:45:28.960223 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 8 19:45:28.960231 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 19:45:28.960240 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 19:45:28.960248 kernel: pnp: PnP ACPI init
Oct 8 19:45:28.960342 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 8 19:45:28.960355 kernel: pnp: PnP ACPI: found 1 devices
Oct 8 19:45:28.960362 kernel: NET: Registered PF_INET protocol family
Oct 8 19:45:28.960371 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 8 19:45:28.960379 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 8 19:45:28.960386 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 19:45:28.960398 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 19:45:28.960405 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 8 19:45:28.960413 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 8 19:45:28.960421 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:45:28.960429 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:45:28.960436 kernel: 
NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 8 19:45:28.960513 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Oct 8 19:45:28.960525 kernel: PCI: CLS 0 bytes, default 64 Oct 8 19:45:28.960533 kernel: kvm [1]: HYP mode not available Oct 8 19:45:28.960543 kernel: Initialise system trusted keyrings Oct 8 19:45:28.960551 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 8 19:45:28.960559 kernel: Key type asymmetric registered Oct 8 19:45:28.960566 kernel: Asymmetric key parser 'x509' registered Oct 8 19:45:28.960574 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 8 19:45:28.960582 kernel: io scheduler mq-deadline registered Oct 8 19:45:28.960589 kernel: io scheduler kyber registered Oct 8 19:45:28.960597 kernel: io scheduler bfq registered Oct 8 19:45:28.960605 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Oct 8 19:45:28.960675 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Oct 8 19:45:28.960743 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Oct 8 19:45:28.960822 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:45:28.960891 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Oct 8 19:45:28.960958 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Oct 8 19:45:28.961023 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:45:28.961097 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Oct 8 19:45:28.961162 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Oct 8 19:45:28.961227 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:45:28.961365 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Oct 8 19:45:28.961439 kernel: pcieport 
0000:00:02.3: AER: enabled with IRQ 53 Oct 8 19:45:28.961504 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:45:28.961575 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Oct 8 19:45:28.961640 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Oct 8 19:45:28.961703 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:45:28.961786 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Oct 8 19:45:28.961863 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Oct 8 19:45:28.961952 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:45:28.962039 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Oct 8 19:45:28.962105 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Oct 8 19:45:28.962182 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:45:28.962261 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Oct 8 19:45:28.962347 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Oct 8 19:45:28.962429 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:45:28.962445 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Oct 8 19:45:28.962538 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Oct 8 19:45:28.962616 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Oct 8 19:45:28.962696 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:45:28.962708 kernel: input: Power Button as 
/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 8 19:45:28.962718 kernel: ACPI: button: Power Button [PWRB] Oct 8 19:45:28.962727 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 8 19:45:28.962812 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002) Oct 8 19:45:28.962905 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Oct 8 19:45:28.962985 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Oct 8 19:45:28.963000 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 8 19:45:28.963009 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Oct 8 19:45:28.963081 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Oct 8 19:45:28.963096 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Oct 8 19:45:28.963105 kernel: thunder_xcv, ver 1.0 Oct 8 19:45:28.963113 kernel: thunder_bgx, ver 1.0 Oct 8 19:45:28.963123 kernel: nicpf, ver 1.0 Oct 8 19:45:28.963131 kernel: nicvf, ver 1.0 Oct 8 19:45:28.963221 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 8 19:45:28.963315 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-08T19:45:28 UTC (1728416728) Oct 8 19:45:28.963328 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 8 19:45:28.963337 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Oct 8 19:45:28.963346 kernel: watchdog: Delayed init of the lockup detector failed: -19 Oct 8 19:45:28.963355 kernel: watchdog: Hard watchdog permanently disabled Oct 8 19:45:28.963368 kernel: NET: Registered PF_INET6 protocol family Oct 8 19:45:28.963377 kernel: Segment Routing with IPv6 Oct 8 19:45:28.963386 kernel: In-situ OAM (IOAM) with IPv6 Oct 8 19:45:28.963394 kernel: NET: Registered PF_PACKET protocol family Oct 8 19:45:28.963402 kernel: Key type dns_resolver registered Oct 8 19:45:28.963410 kernel: registered taskstats version 1 Oct 8 19:45:28.963418 kernel: Loading compiled-in X.509 certificates Oct 8 19:45:28.963426 kernel: Loaded 
X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: e5b54c43c129014ce5ace0e8cd7b641a0fcb136e' Oct 8 19:45:28.963433 kernel: Key type .fscrypt registered Oct 8 19:45:28.963442 kernel: Key type fscrypt-provisioning registered Oct 8 19:45:28.963450 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 8 19:45:28.963458 kernel: ima: Allocated hash algorithm: sha1 Oct 8 19:45:28.963467 kernel: ima: No architecture policies found Oct 8 19:45:28.963475 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 8 19:45:28.963483 kernel: clk: Disabling unused clocks Oct 8 19:45:28.963490 kernel: Freeing unused kernel memory: 39104K Oct 8 19:45:28.963499 kernel: Run /init as init process Oct 8 19:45:28.963506 kernel: with arguments: Oct 8 19:45:28.963516 kernel: /init Oct 8 19:45:28.963527 kernel: with environment: Oct 8 19:45:28.963535 kernel: HOME=/ Oct 8 19:45:28.963543 kernel: TERM=linux Oct 8 19:45:28.964360 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 8 19:45:28.964408 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 19:45:28.964424 systemd[1]: Detected virtualization kvm. Oct 8 19:45:28.964439 systemd[1]: Detected architecture arm64. Oct 8 19:45:28.964449 systemd[1]: Running in initrd. Oct 8 19:45:28.964459 systemd[1]: No hostname configured, using default hostname. Oct 8 19:45:28.964468 systemd[1]: Hostname set to . Oct 8 19:45:28.964478 systemd[1]: Initializing machine ID from VM UUID. Oct 8 19:45:28.964487 systemd[1]: Queued start job for default target initrd.target. Oct 8 19:45:28.964497 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Oct 8 19:45:28.964506 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:45:28.964519 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 19:45:28.964529 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:45:28.964539 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 19:45:28.964549 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 19:45:28.964560 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 19:45:28.964570 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 19:45:28.964580 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:45:28.964591 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:45:28.964601 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:45:28.964611 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:45:28.964620 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:45:28.964629 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:45:28.964639 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:45:28.964648 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:45:28.964656 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 19:45:28.964665 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 19:45:28.964675 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:45:28.964683 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:45:28.964694 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:45:28.964703 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:45:28.964711 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 19:45:28.964723 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:45:28.964732 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 19:45:28.964742 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 19:45:28.964753 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:45:28.964762 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:45:28.964772 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:45:28.964782 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 19:45:28.964791 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:45:28.964801 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 19:45:28.964813 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:45:28.964822 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:45:28.964832 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:45:28.964875 systemd-journald[237]: Collecting audit messages is disabled.
Oct 8 19:45:28.964900 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:45:28.964910 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:45:28.964920 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:45:28.964930 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 19:45:28.964939 kernel: Bridge firewalling registered
Oct 8 19:45:28.964948 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:45:28.964959 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:45:28.964970 systemd-journald[237]: Journal started
Oct 8 19:45:28.965000 systemd-journald[237]: Runtime Journal (/run/log/journal/1ee28eda2127430f8ea6df9f6502ee2d) is 8.0M, max 76.5M, 68.5M free.
Oct 8 19:45:28.933969 systemd-modules-load[238]: Inserted module 'overlay'
Oct 8 19:45:28.966778 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:45:28.957858 systemd-modules-load[238]: Inserted module 'br_netfilter'
Oct 8 19:45:28.978475 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 8 19:45:28.989374 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:45:28.991345 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:45:29.003792 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 19:45:29.005397 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 19:45:29.009452 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:45:29.016243 dracut-cmdline[272]: dracut-dracut-053
Oct 8 19:45:29.020505 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2
Oct 8 19:45:29.048521 systemd-resolved[277]: Positive Trust Anchors:
Oct 8 19:45:29.048536 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:45:29.048567 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 8 19:45:29.058498 systemd-resolved[277]: Defaulting to hostname 'linux'.
Oct 8 19:45:29.060255 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:45:29.061423 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:45:29.105350 kernel: SCSI subsystem initialized
Oct 8 19:45:29.110309 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 19:45:29.118480 kernel: iscsi: registered transport (tcp)
Oct 8 19:45:29.133338 kernel: iscsi: registered transport (qla4xxx)
Oct 8 19:45:29.133411 kernel: QLogic iSCSI HBA Driver
Oct 8 19:45:29.180098 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:45:29.184613 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 19:45:29.205380 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 19:45:29.205478 kernel: device-mapper: uevent: version 1.0.3
Oct 8 19:45:29.207344 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 19:45:29.266095 kernel: raid6: neonx8 gen() 15662 MB/s
Oct 8 19:45:29.282353 kernel: raid6: neonx4 gen() 15546 MB/s
Oct 8 19:45:29.299354 kernel: raid6: neonx2 gen() 13183 MB/s
Oct 8 19:45:29.316348 kernel: raid6: neonx1 gen() 10429 MB/s
Oct 8 19:45:29.333349 kernel: raid6: int64x8 gen() 6911 MB/s
Oct 8 19:45:29.350411 kernel: raid6: int64x4 gen() 7300 MB/s
Oct 8 19:45:29.367377 kernel: raid6: int64x2 gen() 6067 MB/s
Oct 8 19:45:29.384356 kernel: raid6: int64x1 gen() 5040 MB/s
Oct 8 19:45:29.384440 kernel: raid6: using algorithm neonx8 gen() 15662 MB/s
Oct 8 19:45:29.401341 kernel: raid6: .... xor() 11885 MB/s, rmw enabled
Oct 8 19:45:29.401426 kernel: raid6: using neon recovery algorithm
Oct 8 19:45:29.406380 kernel: xor: measuring software checksum speed
Oct 8 19:45:29.406476 kernel: 8regs : 19812 MB/sec
Oct 8 19:45:29.406504 kernel: 32regs : 19655 MB/sec
Oct 8 19:45:29.406529 kernel: arm64_neon : 26945 MB/sec
Oct 8 19:45:29.407326 kernel: xor: using function: arm64_neon (26945 MB/sec)
Oct 8 19:45:29.459342 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 19:45:29.476190 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:45:29.484519 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:45:29.519139 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Oct 8 19:45:29.522599 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:45:29.532522 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 19:45:29.549354 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Oct 8 19:45:29.589636 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:45:29.596454 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:45:29.648310 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:45:29.660446 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 8 19:45:29.675659 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:45:29.679487 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:45:29.681340 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:45:29.682783 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:45:29.690541 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 8 19:45:29.708683 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:45:29.736653 kernel: scsi host0: Virtio SCSI HBA
Oct 8 19:45:29.740805 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 8 19:45:29.740906 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Oct 8 19:45:29.811790 kernel: ACPI: bus type USB registered
Oct 8 19:45:29.811847 kernel: usbcore: registered new interface driver usbfs
Oct 8 19:45:29.811858 kernel: usbcore: registered new interface driver hub
Oct 8 19:45:29.811867 kernel: usbcore: registered new device driver usb
Oct 8 19:45:29.827661 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Oct 8 19:45:29.827967 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Oct 8 19:45:29.828086 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Oct 8 19:45:29.828212 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Oct 8 19:45:29.829515 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Oct 8 19:45:29.829710 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Oct 8 19:45:29.830548 kernel: hub 1-0:1.0: USB hub found
Oct 8 19:45:29.830734 kernel: hub 1-0:1.0: 4 ports detected
Oct 8 19:45:29.831802 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Oct 8 19:45:29.847606 kernel: hub 2-0:1.0: USB hub found
Oct 8 19:45:29.854401 kernel: hub 2-0:1.0: 4 ports detected
Oct 8 19:45:29.839210 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:45:29.839383 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:45:29.842918 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:45:29.843438 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:45:29.843593 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:45:29.844255 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:45:29.852670 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:45:29.872497 kernel: sr 0:0:0:0: Power-on or device reset occurred
Oct 8 19:45:29.874304 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Oct 8 19:45:29.874486 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 8 19:45:29.874498 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 8 19:45:29.875325 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:45:29.884883 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:45:29.895135 kernel: sd 0:0:0:1: Power-on or device reset occurred
Oct 8 19:45:29.895396 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Oct 8 19:45:29.895491 kernel: sd 0:0:0:1: [sda] Write Protect is off
Oct 8 19:45:29.895576 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Oct 8 19:45:29.896807 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Oct 8 19:45:29.901253 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 8 19:45:29.901320 kernel: GPT:17805311 != 80003071
Oct 8 19:45:29.901340 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 8 19:45:29.901350 kernel: GPT:17805311 != 80003071
Oct 8 19:45:29.901361 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 8 19:45:29.901370 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 8 19:45:29.902651 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Oct 8 19:45:29.921374 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:45:29.947515 kernel: BTRFS: device fsid a2a78d47-736b-4018-a518-3cfb16920575 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (511)
Oct 8 19:45:29.953614 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Oct 8 19:45:29.955850 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (502)
Oct 8 19:45:29.965531 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Oct 8 19:45:29.969964 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Oct 8 19:45:29.970920 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Oct 8 19:45:29.977073 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Oct 8 19:45:29.984469 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 8 19:45:29.990604 disk-uuid[571]: Primary Header is updated.
Oct 8 19:45:29.990604 disk-uuid[571]: Secondary Entries is updated.
Oct 8 19:45:29.990604 disk-uuid[571]: Secondary Header is updated.
Oct 8 19:45:30.004337 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 8 19:45:30.009313 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 8 19:45:30.068313 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Oct 8 19:45:30.209954 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Oct 8 19:45:30.210032 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Oct 8 19:45:30.211297 kernel: usbcore: registered new interface driver usbhid
Oct 8 19:45:30.211333 kernel: usbhid: USB HID core driver
Oct 8 19:45:30.311395 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Oct 8 19:45:30.441334 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Oct 8 19:45:30.495377 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Oct 8 19:45:31.020333 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 8 19:45:31.023557 disk-uuid[572]: The operation has completed successfully.
Oct 8 19:45:31.066983 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 8 19:45:31.067082 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 8 19:45:31.080470 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 8 19:45:31.084462 sh[588]: Success
Oct 8 19:45:31.098333 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Oct 8 19:45:31.151185 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 8 19:45:31.171190 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 8 19:45:31.174469 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 8 19:45:31.188536 kernel: BTRFS info (device dm-0): first mount of filesystem a2a78d47-736b-4018-a518-3cfb16920575
Oct 8 19:45:31.188614 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:45:31.189706 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 8 19:45:31.189757 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 8 19:45:31.189782 kernel: BTRFS info (device dm-0): using free space tree
Oct 8 19:45:31.195316 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Oct 8 19:45:31.197439 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 8 19:45:31.198041 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 8 19:45:31.208599 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 8 19:45:31.211480 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 8 19:45:31.230571 kernel: BTRFS info (device sda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:45:31.230629 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:45:31.230641 kernel: BTRFS info (device sda6): using free space tree
Oct 8 19:45:31.237365 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 8 19:45:31.237429 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 8 19:45:31.250319 kernel: BTRFS info (device sda6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:45:31.250263 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 8 19:45:31.256903 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 8 19:45:31.262539 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 8 19:45:31.340910 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:45:31.349596 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:45:31.381205 systemd-networkd[771]: lo: Link UP
Oct 8 19:45:31.381784 systemd-networkd[771]: lo: Gained carrier
Oct 8 19:45:31.383910 systemd-networkd[771]: Enumeration completed
Oct 8 19:45:31.384422 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:45:31.385724 systemd[1]: Reached target network.target - Network.
Oct 8 19:45:31.387484 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:45:31.387487 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:45:31.389636 systemd-networkd[771]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:45:31.389639 systemd-networkd[771]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:45:31.391709 ignition[689]: Ignition 2.18.0
Oct 8 19:45:31.390338 systemd-networkd[771]: eth0: Link UP
Oct 8 19:45:31.391716 ignition[689]: Stage: fetch-offline
Oct 8 19:45:31.390341 systemd-networkd[771]: eth0: Gained carrier
Oct 8 19:45:31.391815 ignition[689]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:45:31.390348 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:45:31.391825 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 19:45:31.393658 systemd-networkd[771]: eth1: Link UP
Oct 8 19:45:31.391924 ignition[689]: parsed url from cmdline: ""
Oct 8 19:45:31.393662 systemd-networkd[771]: eth1: Gained carrier
Oct 8 19:45:31.391927 ignition[689]: no config URL provided
Oct 8 19:45:31.393671 systemd-networkd[771]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:45:31.391932 ignition[689]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 19:45:31.393922 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:45:31.391940 ignition[689]: no config at "/usr/lib/ignition/user.ign"
Oct 8 19:45:31.398494 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 8 19:45:31.391945 ignition[689]: failed to fetch config: resource requires networking
Oct 8 19:45:31.392134 ignition[689]: Ignition finished successfully
Oct 8 19:45:31.422254 ignition[780]: Ignition 2.18.0
Oct 8 19:45:31.422266 ignition[780]: Stage: fetch
Oct 8 19:45:31.422473 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:45:31.422484 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 19:45:31.422568 ignition[780]: parsed url from cmdline: ""
Oct 8 19:45:31.422571 ignition[780]: no config URL provided
Oct 8 19:45:31.422576 ignition[780]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 19:45:31.422583 ignition[780]: no config at "/usr/lib/ignition/user.ign"
Oct 8 19:45:31.422601 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Oct 8 19:45:31.423184 ignition[780]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Oct 8 19:45:31.432373 systemd-networkd[771]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:45:31.509428 systemd-networkd[771]: eth0: DHCPv4 address 168.119.51.132/32, gateway 172.31.1.1 acquired from 172.31.1.1
Oct 8 19:45:31.623368 ignition[780]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Oct 8 19:45:31.632805 ignition[780]: GET result: OK
Oct 8 19:45:31.632972 ignition[780]: parsing config with SHA512: 4d515820cc8ee7565095ba62ed98546a9fe1450fc3c28282b9c5d51b9553d339e9db1b1e528820244797900ce3b7fc3330c55cfec6b32ea54704d0d7433cf110
Oct 8 19:45:31.639717 unknown[780]: fetched base config from "system"
Oct 8 19:45:31.639747 unknown[780]: fetched base config from "system"
Oct 8 19:45:31.640468 ignition[780]: fetch: fetch complete
Oct 8 19:45:31.639756 unknown[780]: fetched user config from "hetzner"
Oct 8 19:45:31.640476 ignition[780]: fetch: fetch passed
Oct 8 19:45:31.643720 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 8 19:45:31.640535 ignition[780]: Ignition finished successfully
Oct 8 19:45:31.654499 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 8 19:45:31.668583 ignition[788]: Ignition 2.18.0
Oct 8 19:45:31.668601 ignition[788]: Stage: kargs
Oct 8 19:45:31.668898 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:45:31.668916 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 19:45:31.670776 ignition[788]: kargs: kargs passed
Oct 8 19:45:31.670855 ignition[788]: Ignition finished successfully
Oct 8 19:45:31.673365 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 19:45:31.688590 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 19:45:31.705257 ignition[795]: Ignition 2.18.0
Oct 8 19:45:31.705266 ignition[795]: Stage: disks
Oct 8 19:45:31.705459 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:45:31.705469 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 19:45:31.708145 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 8 19:45:31.706386 ignition[795]: disks: disks passed
Oct 8 19:45:31.709894 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 19:45:31.706432 ignition[795]: Ignition finished successfully
Oct 8 19:45:31.712376 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 19:45:31.713272 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:45:31.714426 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:45:31.715370 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:45:31.720472 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 8 19:45:31.739368 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Oct 8 19:45:31.742952 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 8 19:45:31.750517 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 8 19:45:31.800332 kernel: EXT4-fs (sda9): mounted filesystem fbf53fb2-c32f-44fa-a235-3100e56d8882 r/w with ordered data mode. Quota mode: none.
Oct 8 19:45:31.800868 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 8 19:45:31.802215 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:45:31.815478 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:45:31.820008 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 8 19:45:31.824760 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 8 19:45:31.828448 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 8 19:45:31.828486 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:45:31.830978 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 8 19:45:31.840791 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (812)
Oct 8 19:45:31.840965 kernel: BTRFS info (device sda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:45:31.841026 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:45:31.841038 kernel: BTRFS info (device sda6): using free space tree
Oct 8 19:45:31.843321 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 8 19:45:31.843352 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 8 19:45:31.844506 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 8 19:45:31.847234 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:45:31.911320 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Oct 8 19:45:31.917652 coreos-metadata[814]: Oct 08 19:45:31.917 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Oct 8 19:45:31.919603 coreos-metadata[814]: Oct 08 19:45:31.919 INFO Fetch successful
Oct 8 19:45:31.919603 coreos-metadata[814]: Oct 08 19:45:31.919 INFO wrote hostname ci-3975-2-2-d-c7549a9f5e to /sysroot/etc/hostname
Oct 8 19:45:31.922582 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Oct 8 19:45:31.922624 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 8 19:45:31.928659 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Oct 8 19:45:31.934125 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 8 19:45:32.025192 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 8 19:45:32.031432 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 8 19:45:32.035125 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 8 19:45:32.041328 kernel: BTRFS info (device sda6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:45:32.071093 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 8 19:45:32.075454 ignition[930]: INFO : Ignition 2.18.0
Oct 8 19:45:32.075454 ignition[930]: INFO : Stage: mount
Oct 8 19:45:32.076756 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:45:32.076756 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 19:45:32.076756 ignition[930]: INFO : mount: mount passed
Oct 8 19:45:32.076756 ignition[930]: INFO : Ignition finished successfully
Oct 8 19:45:32.079310 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 8 19:45:32.083422 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 8 19:45:32.190235 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 8 19:45:32.202652 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:45:32.215815 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (943)
Oct 8 19:45:32.215880 kernel: BTRFS info (device sda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:45:32.218558 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:45:32.218611 kernel: BTRFS info (device sda6): using free space tree
Oct 8 19:45:32.223430 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 8 19:45:32.223479 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 8 19:45:32.225807 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:45:32.247969 ignition[960]: INFO : Ignition 2.18.0
Oct 8 19:45:32.247969 ignition[960]: INFO : Stage: files
Oct 8 19:45:32.250182 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:45:32.250182 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 19:45:32.250182 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Oct 8 19:45:32.254379 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 8 19:45:32.254379 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 8 19:45:32.256153 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 8 19:45:32.257064 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 8 19:45:32.258296 unknown[960]: wrote ssh authorized keys file for user: core
Oct 8 19:45:32.259421 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 8 19:45:32.260270 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 8 19:45:32.261076 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 8 19:45:32.261076 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 19:45:32.261076 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Oct 8 19:45:32.376589 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 8 19:45:32.473488 systemd-networkd[771]: eth0: Gained IPv6LL
Oct 8 19:45:32.582526 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 19:45:32.585437 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Oct 8 19:45:32.585437 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Oct 8 19:45:32.585437 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:45:32.585437 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:45:32.585437 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:45:32.585437 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:45:32.585437 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:45:32.585437 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:45:32.585437 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:45:32.585437 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:45:32.585437 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 8 19:45:32.585437 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 8 19:45:32.585437 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 8 19:45:32.585437 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Oct 8 19:45:32.921681 systemd-networkd[771]: eth1: Gained IPv6LL
Oct 8 19:45:33.212254 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Oct 8 19:45:33.545111 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 8 19:45:33.545111 ignition[960]: INFO : files: op(c): [started] processing unit "containerd.service"
Oct 8 19:45:33.548117 ignition[960]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 8 19:45:33.548117 ignition[960]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 8 19:45:33.548117 ignition[960]: INFO : files: op(c): [finished] processing unit "containerd.service"
Oct 8 19:45:33.548117 ignition[960]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Oct 8 19:45:33.548117 ignition[960]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:45:33.548117 ignition[960]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:45:33.548117 ignition[960]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Oct 8 19:45:33.548117 ignition[960]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Oct 8 19:45:33.548117 ignition[960]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Oct 8 19:45:33.548117 ignition[960]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Oct 8 19:45:33.548117 ignition[960]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Oct 8 19:45:33.548117 ignition[960]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Oct 8 19:45:33.548117 ignition[960]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Oct 8 19:45:33.548117 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:45:33.548117 ignition[960]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:45:33.563995 ignition[960]: INFO : files: files passed
Oct 8 19:45:33.563995 ignition[960]: INFO : Ignition finished successfully
Oct 8 19:45:33.551264 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 8 19:45:33.560573 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 8 19:45:33.565742 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 8 19:45:33.567994 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 8 19:45:33.569335 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 8 19:45:33.582133 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:45:33.582133 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:45:33.584256 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:45:33.586141 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:45:33.587143 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 8 19:45:33.590691 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 8 19:45:33.627160 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 8 19:45:33.627402 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 8 19:45:33.630311 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 8 19:45:33.632514 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 8 19:45:33.634119 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 8 19:45:33.639448 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 8 19:45:33.654978 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:45:33.661560 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 8 19:45:33.673806 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:45:33.675478 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:45:33.676238 systemd[1]: Stopped target timers.target - Timer Units.
Oct 8 19:45:33.677484 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 8 19:45:33.677671 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:45:33.679068 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 8 19:45:33.680407 systemd[1]: Stopped target basic.target - Basic System.
Oct 8 19:45:33.681224 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 8 19:45:33.682087 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:45:33.683226 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 8 19:45:33.684377 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 8 19:45:33.685262 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:45:33.686236 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 8 19:45:33.687190 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 8 19:45:33.688061 systemd[1]: Stopped target swap.target - Swaps.
Oct 8 19:45:33.688769 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 8 19:45:33.688962 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:45:33.690099 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:45:33.691148 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:45:33.692179 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 8 19:45:33.692632 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:45:33.693355 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 8 19:45:33.693516 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:45:33.694790 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 8 19:45:33.694946 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:45:33.695945 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 8 19:45:33.696107 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 8 19:45:33.696798 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 8 19:45:33.696942 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 8 19:45:33.705943 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 8 19:45:33.709127 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 8 19:45:33.709633 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 8 19:45:33.709803 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:45:33.710805 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 8 19:45:33.710944 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:45:33.720490 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 8 19:45:33.720590 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 8 19:45:33.727682 ignition[1014]: INFO : Ignition 2.18.0
Oct 8 19:45:33.728601 ignition[1014]: INFO : Stage: umount
Oct 8 19:45:33.728601 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:45:33.728601 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 19:45:33.730093 ignition[1014]: INFO : umount: umount passed
Oct 8 19:45:33.730093 ignition[1014]: INFO : Ignition finished successfully
Oct 8 19:45:33.734641 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 8 19:45:33.735927 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 8 19:45:33.736074 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 8 19:45:33.738039 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 19:45:33.738152 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 19:45:33.740097 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 8 19:45:33.740189 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 8 19:45:33.741033 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 8 19:45:33.741075 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 8 19:45:33.741680 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 8 19:45:33.741715 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 8 19:45:33.742519 systemd[1]: Stopped target network.target - Network.
Oct 8 19:45:33.743256 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 8 19:45:33.743320 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:45:33.744339 systemd[1]: Stopped target paths.target - Path Units.
Oct 8 19:45:33.745115 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 8 19:45:33.748481 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:45:33.751686 systemd[1]: Stopped target slices.target - Slice Units.
Oct 8 19:45:33.755440 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 8 19:45:33.759136 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 8 19:45:33.759246 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:45:33.760555 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 8 19:45:33.760592 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:45:33.761556 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 8 19:45:33.761612 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 8 19:45:33.763276 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 8 19:45:33.763327 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 8 19:45:33.764922 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 19:45:33.764964 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 19:45:33.766613 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 8 19:45:33.767902 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 8 19:45:33.772360 systemd-networkd[771]: eth0: DHCPv6 lease lost
Oct 8 19:45:33.777354 systemd-networkd[771]: eth1: DHCPv6 lease lost
Oct 8 19:45:33.779781 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 8 19:45:33.780117 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 19:45:33.782621 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 8 19:45:33.782766 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 8 19:45:33.787028 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 8 19:45:33.787087 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:45:33.795439 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 8 19:45:33.796492 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 8 19:45:33.796562 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:45:33.798568 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 19:45:33.798616 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:45:33.799155 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 8 19:45:33.799192 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:45:33.799828 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 8 19:45:33.799868 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 19:45:33.801205 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:45:33.823816 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 19:45:33.824015 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:45:33.825694 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 19:45:33.825744 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:45:33.827074 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 19:45:33.827132 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:45:33.828196 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 19:45:33.828250 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:45:33.829875 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 19:45:33.829921 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:45:33.831078 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:45:33.831121 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:45:33.838531 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 19:45:33.839880 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 19:45:33.840516 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:45:33.842149 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 8 19:45:33.842893 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:45:33.844395 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 8 19:45:33.844448 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:45:33.845681 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:45:33.845731 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:45:33.846783 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 19:45:33.848323 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 8 19:45:33.849023 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 19:45:33.849103 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 19:45:33.850686 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 19:45:33.858530 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 19:45:33.868752 systemd[1]: Switching root.
Oct 8 19:45:33.899915 systemd-journald[237]: Journal stopped
Oct 8 19:45:34.898439 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Oct 8 19:45:34.898524 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 19:45:34.898541 kernel: SELinux: policy capability open_perms=1
Oct 8 19:45:34.898551 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 19:45:34.898560 kernel: SELinux: policy capability always_check_network=0
Oct 8 19:45:34.898569 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 19:45:34.898579 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 19:45:34.898593 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 19:45:34.898602 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 19:45:34.898619 kernel: audit: type=1403 audit(1728416734.116:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 19:45:34.898631 systemd[1]: Successfully loaded SELinux policy in 33.196ms.
Oct 8 19:45:34.898662 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.893ms.
Oct 8 19:45:34.898674 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:45:34.898685 systemd[1]: Detected virtualization kvm.
Oct 8 19:45:34.898695 systemd[1]: Detected architecture arm64.
Oct 8 19:45:34.898705 systemd[1]: Detected first boot.
Oct 8 19:45:34.898716 systemd[1]: Hostname set to .
Oct 8 19:45:34.898726 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:45:34.898739 zram_generator::config[1073]: No configuration found.
Oct 8 19:45:34.898750 systemd[1]: Populated /etc with preset unit settings.
Oct 8 19:45:34.898760 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 19:45:34.898770 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Oct 8 19:45:34.898781 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 19:45:34.898792 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 19:45:34.898802 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 19:45:34.898812 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 19:45:34.898824 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 19:45:34.898835 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 19:45:34.898845 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 19:45:34.898858 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 19:45:34.898869 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:45:34.898879 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:45:34.898889 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 19:45:34.898899 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 19:45:34.898910 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 19:45:34.898922 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:45:34.898932 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Oct 8 19:45:34.898942 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:45:34.898952 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 19:45:34.898962 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:45:34.898973 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:45:34.898985 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:45:34.898995 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:45:34.899005 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 19:45:34.899016 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 19:45:34.899026 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 19:45:34.899037 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 19:45:34.899047 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:45:34.899058 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:45:34.899068 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:45:34.899078 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 19:45:34.899090 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 19:45:34.899101 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 19:45:34.899111 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 19:45:34.899121 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 19:45:34.899133 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 19:45:34.899143 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 19:45:34.899153 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 19:45:34.899167 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:45:34.899179 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:45:34.899189 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 19:45:34.899204 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:45:34.899214 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:45:34.899229 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:45:34.899239 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 19:45:34.899252 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:45:34.899264 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 19:45:34.899274 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Oct 8 19:45:34.899354 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Oct 8 19:45:34.899367 kernel: fuse: init (API version 7.39)
Oct 8 19:45:34.899378 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:45:34.899388 kernel: loop: module loaded
Oct 8 19:45:34.899399 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:45:34.899412 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 19:45:34.899422 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 19:45:34.899433 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:45:34.899443 kernel: ACPI: bus type drm_connector registered
Oct 8 19:45:34.899453 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 19:45:34.899464 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 19:45:34.899517 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 19:45:34.899533 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 19:45:34.899544 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 19:45:34.899580 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 19:45:34.899595 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 19:45:34.899606 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:45:34.899616 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 19:45:34.899627 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 19:45:34.899640 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:45:34.899650 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:45:34.899661 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:45:34.899672 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:45:34.899684 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:45:34.899694 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:45:34.899751 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 19:45:34.899765 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 19:45:34.899776 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:45:34.899789 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:45:34.899799 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:45:34.899844 systemd-journald[1165]: Collecting audit messages is disabled.
Oct 8 19:45:34.899867 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 19:45:34.899880 systemd-journald[1165]: Journal started
Oct 8 19:45:34.899906 systemd-journald[1165]: Runtime Journal (/run/log/journal/1ee28eda2127430f8ea6df9f6502ee2d) is 8.0M, max 76.5M, 68.5M free.
Oct 8 19:45:34.902317 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:45:34.901845 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 19:45:34.915172 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 19:45:34.921445 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 19:45:34.926434 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 19:45:34.928491 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 19:45:34.940482 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 19:45:34.944479 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 19:45:34.945883 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:45:34.960506 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 19:45:34.961130 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:45:34.967483 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:45:34.975530 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:45:34.989732 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 19:45:34.990914 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 19:45:34.992731 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:45:35.003355 systemd-journald[1165]: Time spent on flushing to /var/log/journal/1ee28eda2127430f8ea6df9f6502ee2d is 32.670ms for 1114 entries.
Oct 8 19:45:35.003355 systemd-journald[1165]: System Journal (/var/log/journal/1ee28eda2127430f8ea6df9f6502ee2d) is 8.0M, max 584.8M, 576.8M free.
Oct 8 19:45:35.050768 systemd-journald[1165]: Received client request to flush runtime journal.
Oct 8 19:45:35.008449 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 19:45:35.012084 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 19:45:35.020101 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 19:45:35.041009 udevadm[1215]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 8 19:45:35.042051 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:45:35.056074 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Oct 8 19:45:35.056088 systemd-tmpfiles[1210]: ACLs are not supported, ignoring.
Oct 8 19:45:35.056575 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 19:45:35.062924 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:45:35.070613 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 19:45:35.102617 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 19:45:35.109556 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:45:35.124680 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Oct 8 19:45:35.125031 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Oct 8 19:45:35.132690 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:45:35.509662 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 19:45:35.519503 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:45:35.541658 systemd-udevd[1243]: Using default interface naming scheme 'v255'.
Oct 8 19:45:35.558229 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:45:35.575573 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:45:35.601078 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 19:45:35.623903 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Oct 8 19:45:35.646308 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1247)
Oct 8 19:45:35.670399 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 19:45:35.781792 systemd-networkd[1253]: lo: Link UP
Oct 8 19:45:35.782152 systemd-networkd[1253]: lo: Gained carrier
Oct 8 19:45:35.783943 systemd-networkd[1253]: Enumeration completed
Oct 8 19:45:35.786445 kernel: mousedev: PS/2 mouse device common for all mice
Oct 8 19:45:35.785493 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:45:35.787830 systemd-networkd[1253]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:45:35.787837 systemd-networkd[1253]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:45:35.788791 systemd-networkd[1253]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:45:35.788797 systemd-networkd[1253]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:45:35.789484 systemd-networkd[1253]: eth0: Link UP
Oct 8 19:45:35.789559 systemd-networkd[1253]: eth0: Gained carrier
Oct 8 19:45:35.789575 systemd-networkd[1253]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:45:35.792552 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 19:45:35.795445 systemd-networkd[1253]: eth1: Link UP
Oct 8 19:45:35.795529 systemd-networkd[1253]: eth1: Gained carrier
Oct 8 19:45:35.795588 systemd-networkd[1253]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:45:35.831316 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Oct 8 19:45:35.831407 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 8 19:45:35.831425 kernel: [drm] features: -context_init
Oct 8 19:45:35.839345 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1247)
Oct 8 19:45:35.839458 kernel: [drm] number of scanouts: 1
Oct 8 19:45:35.839484 kernel: [drm] number of cap sets: 0
Oct 8 19:45:35.849312 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Oct 8 19:45:35.852466 systemd-networkd[1253]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 19:45:35.877319 kernel: Console: switching to colour frame buffer device 160x50
Oct 8 19:45:35.887652 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 8 19:45:35.905161 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Oct 8 19:45:35.917682 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:45:35.975436 systemd-networkd[1253]: eth0: DHCPv4 address 168.119.51.132/32, gateway 172.31.1.1 acquired from 172.31.1.1
Oct 8 19:45:35.978166 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:45:36.038567 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 19:45:36.048537 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 19:45:36.070346 lvm[1301]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:45:36.096956 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 19:45:36.100211 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:45:36.110443 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 19:45:36.118245 lvm[1305]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:45:36.154662 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 8 19:45:36.156635 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 19:45:36.158193 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 19:45:36.158622 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:45:36.159966 systemd[1]: Reached target machines.target - Containers.
Oct 8 19:45:36.163047 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 19:45:36.169449 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 19:45:36.172514 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 19:45:36.174387 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:45:36.177477 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 19:45:36.183625 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 19:45:36.195485 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 19:45:36.196989 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 19:45:36.215616 kernel: loop0: detected capacity change from 0 to 8
Oct 8 19:45:36.215791 kernel: block loop0: the capability attribute has been deprecated.
Oct 8 19:45:36.215826 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 19:45:36.233934 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 19:45:36.237724 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 19:45:36.238505 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 19:45:36.249308 kernel: loop1: detected capacity change from 0 to 194512
Oct 8 19:45:36.287449 kernel: loop2: detected capacity change from 0 to 113672
Oct 8 19:45:36.319344 kernel: loop3: detected capacity change from 0 to 59688
Oct 8 19:45:36.355386 kernel: loop4: detected capacity change from 0 to 8
Oct 8 19:45:36.357463 kernel: loop5: detected capacity change from 0 to 194512
Oct 8 19:45:36.370336 kernel: loop6: detected capacity change from 0 to 113672
Oct 8 19:45:36.379309 kernel: loop7: detected capacity change from 0 to 59688
Oct 8 19:45:36.387581 (sd-merge)[1326]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Oct 8 19:45:36.388100 (sd-merge)[1326]: Merged extensions into '/usr'.
Oct 8 19:45:36.394267 systemd[1]: Reloading requested from client PID 1313 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 19:45:36.394313 systemd[1]: Reloading...
Oct 8 19:45:36.445353 zram_generator::config[1353]: No configuration found.
Oct 8 19:45:36.576642 ldconfig[1309]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 19:45:36.584819 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:45:36.640181 systemd[1]: Reloading finished in 245 ms.
Oct 8 19:45:36.657043 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 19:45:36.658992 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 19:45:36.669457 systemd[1]: Starting ensure-sysext.service...
Oct 8 19:45:36.673525 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 8 19:45:36.688528 systemd[1]: Reloading requested from client PID 1397 ('systemctl') (unit ensure-sysext.service)...
Oct 8 19:45:36.688569 systemd[1]: Reloading...
Oct 8 19:45:36.709332 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 19:45:36.709643 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 19:45:36.710415 systemd-tmpfiles[1398]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 19:45:36.710678 systemd-tmpfiles[1398]: ACLs are not supported, ignoring.
Oct 8 19:45:36.710728 systemd-tmpfiles[1398]: ACLs are not supported, ignoring.
Oct 8 19:45:36.714542 systemd-tmpfiles[1398]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:45:36.714552 systemd-tmpfiles[1398]: Skipping /boot
Oct 8 19:45:36.723723 systemd-tmpfiles[1398]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:45:36.723866 systemd-tmpfiles[1398]: Skipping /boot
Oct 8 19:45:36.771870 zram_generator::config[1428]: No configuration found.
Oct 8 19:45:36.872733 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:45:36.928623 systemd[1]: Reloading finished in 239 ms.
Oct 8 19:45:36.956733 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 19:45:36.974535 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:45:36.986512 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 19:45:36.992151 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 19:45:37.007457 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:45:37.009381 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 19:45:37.019677 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:45:37.025055 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:45:37.033255 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:45:37.048552 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:45:37.049336 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:45:37.054264 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:45:37.054434 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:45:37.071880 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 19:45:37.073213 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:45:37.077660 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:45:37.086183 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:45:37.086388 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:45:37.087480 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:45:37.087632 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:45:37.093786 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 19:45:37.108382 systemd[1]: Finished ensure-sysext.service.
Oct 8 19:45:37.110812 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 19:45:37.116076 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:45:37.122819 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:45:37.127482 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:45:37.131571 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:45:37.145417 systemd-networkd[1253]: eth0: Gained IPv6LL
Oct 8 19:45:37.145590 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:45:37.149591 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:45:37.159384 augenrules[1514]: No rules
Oct 8 19:45:37.154994 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 8 19:45:37.175518 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 19:45:37.176258 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 19:45:37.177829 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 8 19:45:37.180663 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:45:37.180812 systemd-resolved[1475]: Positive Trust Anchors:
Oct 8 19:45:37.181469 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:45:37.181622 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:45:37.182930 systemd-resolved[1475]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:45:37.183012 systemd-resolved[1475]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 8 19:45:37.183991 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:45:37.184197 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:45:37.187877 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:45:37.188067 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:45:37.189211 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:45:37.191486 systemd-resolved[1475]: Using system hostname 'ci-3975-2-2-d-c7549a9f5e'.
Oct 8 19:45:37.191522 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:45:37.200722 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:45:37.201866 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 19:45:37.206362 systemd[1]: Reached target network.target - Network.
Oct 8 19:45:37.207214 systemd[1]: Reached target network-online.target - Network is Online.
Oct 8 19:45:37.207908 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:45:37.208553 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:45:37.208632 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:45:37.209401 systemd-networkd[1253]: eth1: Gained IPv6LL
Oct 8 19:45:37.254002 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 8 19:45:37.255540 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:45:37.256887 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 19:45:37.258441 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 19:45:37.259851 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 19:45:37.261239 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 19:45:37.261319 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:45:37.262181 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 19:45:37.263676 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 19:45:37.264936 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 19:45:37.265727 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:45:37.267774 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 8 19:45:37.270198 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 8 19:45:37.272869 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 8 19:45:37.275257 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 8 19:45:37.280479 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:45:37.280994 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:45:37.281718 systemd[1]: System is tainted: cgroupsv1
Oct 8 19:45:37.281764 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:45:37.281788 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:45:37.288649 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 8 19:45:37.292518 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 8 19:45:37.299590 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 8 19:45:37.303600 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 8 19:45:37.316381 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 8 19:45:37.317003 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 8 19:45:37.336255 jq[1543]: false
Oct 8 19:45:37.334651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:45:37.341568 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 8 19:45:37.352979 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 8 19:45:37.357132 coreos-metadata[1541]: Oct 08 19:45:37.356 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Oct 8 19:45:37.361637 coreos-metadata[1541]: Oct 08 19:45:37.357 INFO Fetch successful
Oct 8 19:45:37.361637 coreos-metadata[1541]: Oct 08 19:45:37.358 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Oct 8 19:45:37.361637 coreos-metadata[1541]: Oct 08 19:45:37.359 INFO Fetch successful
Oct 8 19:45:37.363822 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 8 19:45:37.370223 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 8 19:45:37.370783 dbus-daemon[1542]: [system] SELinux support is enabled
Oct 8 19:45:37.375371 extend-filesystems[1546]: Found loop4
Oct 8 19:45:37.375371 extend-filesystems[1546]: Found loop5
Oct 8 19:45:37.375371 extend-filesystems[1546]: Found loop6
Oct 8 19:45:37.375371 extend-filesystems[1546]: Found loop7
Oct 8 19:45:37.375371 extend-filesystems[1546]: Found sda
Oct 8 19:45:37.375371 extend-filesystems[1546]: Found sda1
Oct 8 19:45:37.375371 extend-filesystems[1546]: Found sda2
Oct 8 19:45:37.375371 extend-filesystems[1546]: Found sda3
Oct 8 19:45:37.375371 extend-filesystems[1546]: Found usr
Oct 8 19:45:37.375371 extend-filesystems[1546]: Found sda4
Oct 8 19:45:37.375371 extend-filesystems[1546]: Found sda6
Oct 8 19:45:37.375371 extend-filesystems[1546]: Found sda7
Oct 8 19:45:37.375371 extend-filesystems[1546]: Found sda9
Oct 8 19:45:37.375371 extend-filesystems[1546]: Checking size of /dev/sda9
Oct 8 19:45:37.390493 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 8 19:45:37.398063 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 8 19:45:37.398825 systemd-timesyncd[1518]: Contacted time server 158.101.188.125:123 (0.flatcar.pool.ntp.org).
Oct 8 19:45:37.398885 systemd-timesyncd[1518]: Initial clock synchronization to Tue 2024-10-08 19:45:37.040874 UTC.
Oct 8 19:45:37.403862 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 8 19:45:37.413592 systemd[1]: Starting update-engine.service - Update Engine...
Oct 8 19:45:37.424431 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 8 19:45:37.425649 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 8 19:45:37.430961 extend-filesystems[1546]: Resized partition /dev/sda9
Oct 8 19:45:37.467359 jq[1575]: true
Oct 8 19:45:37.458077 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 8 19:45:37.467780 extend-filesystems[1580]: resize2fs 1.47.0 (5-Feb-2023)
Oct 8 19:45:37.485791 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Oct 8 19:45:37.459499 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 8 19:45:37.460663 systemd[1]: motdgen.service: Deactivated successfully.
Oct 8 19:45:37.460941 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 8 19:45:37.463781 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 8 19:45:37.475187 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 8 19:45:37.475467 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 8 19:45:37.511309 jq[1592]: true
Oct 8 19:45:37.527279 (ntainerd)[1594]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 8 19:45:37.553202 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 8 19:45:37.553233 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 8 19:45:37.557102 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 8 19:45:37.557129 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 8 19:45:37.567329 tar[1590]: linux-arm64/helm
Oct 8 19:45:37.617271 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 8 19:45:37.618039 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 8 19:45:37.633357 update_engine[1570]: I1008 19:45:37.631598 1570 main.cc:92] Flatcar Update Engine starting Oct 8 19:45:37.645314 systemd-logind[1566]: New seat seat0. Oct 8 19:45:37.647383 systemd-logind[1566]: Watching system buttons on /dev/input/event0 (Power Button) Oct 8 19:45:37.647412 systemd-logind[1566]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Oct 8 19:45:37.647663 systemd[1]: Started systemd-logind.service - User Login Management. Oct 8 19:45:37.663775 systemd[1]: Started update-engine.service - Update Engine. Oct 8 19:45:37.664787 update_engine[1570]: I1008 19:45:37.664114 1570 update_check_scheduler.cc:74] Next update check in 11m58s Oct 8 19:45:37.665749 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 8 19:45:37.685986 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1255) Oct 8 19:45:37.685071 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 8 19:45:37.708911 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Oct 8 19:45:37.730331 extend-filesystems[1580]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 8 19:45:37.730331 extend-filesystems[1580]: old_desc_blocks = 1, new_desc_blocks = 5 Oct 8 19:45:37.730331 extend-filesystems[1580]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Oct 8 19:45:37.740400 extend-filesystems[1546]: Resized filesystem in /dev/sda9 Oct 8 19:45:37.740400 extend-filesystems[1546]: Found sr0 Oct 8 19:45:37.731028 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 8 19:45:37.747479 bash[1634]: Updated "/home/core/.ssh/authorized_keys" Oct 8 19:45:37.731363 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 8 19:45:37.748467 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Oct 8 19:45:37.756522 systemd[1]: Starting sshkeys.service... Oct 8 19:45:37.783783 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 8 19:45:37.794038 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 8 19:45:37.856848 coreos-metadata[1645]: Oct 08 19:45:37.855 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Oct 8 19:45:37.858668 coreos-metadata[1645]: Oct 08 19:45:37.858 INFO Fetch successful Oct 8 19:45:37.859955 unknown[1645]: wrote ssh authorized keys file for user: core Oct 8 19:45:37.897583 update-ssh-keys[1650]: Updated "/home/core/.ssh/authorized_keys" Oct 8 19:45:37.905819 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 8 19:45:37.913249 systemd[1]: Finished sshkeys.service. Oct 8 19:45:38.210873 locksmithd[1633]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 19:45:38.317424 containerd[1594]: time="2024-10-08T19:45:38.317316227Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Oct 8 19:45:38.390531 containerd[1594]: time="2024-10-08T19:45:38.390079031Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 8 19:45:38.390531 containerd[1594]: time="2024-10-08T19:45:38.390124464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:45:38.394370 containerd[1594]: time="2024-10-08T19:45:38.394321826Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:45:38.395320 containerd[1594]: time="2024-10-08T19:45:38.394486171Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:45:38.395320 containerd[1594]: time="2024-10-08T19:45:38.394746961Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:45:38.395320 containerd[1594]: time="2024-10-08T19:45:38.394764882Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 8 19:45:38.395320 containerd[1594]: time="2024-10-08T19:45:38.394832821Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 19:45:38.395320 containerd[1594]: time="2024-10-08T19:45:38.394873210Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:45:38.395320 containerd[1594]: time="2024-10-08T19:45:38.394885285Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 19:45:38.395320 containerd[1594]: time="2024-10-08T19:45:38.394935647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:45:38.395320 containerd[1594]: time="2024-10-08T19:45:38.395154749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Oct 8 19:45:38.395320 containerd[1594]: time="2024-10-08T19:45:38.395175230Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 8 19:45:38.395320 containerd[1594]: time="2024-10-08T19:45:38.395184859Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:45:38.396576 containerd[1594]: time="2024-10-08T19:45:38.396437988Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:45:38.396576 containerd[1594]: time="2024-10-08T19:45:38.396462138Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 8 19:45:38.396576 containerd[1594]: time="2024-10-08T19:45:38.396531376Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 8 19:45:38.396576 containerd[1594]: time="2024-10-08T19:45:38.396545170Z" level=info msg="metadata content store policy set" policy=shared Oct 8 19:45:38.402666 containerd[1594]: time="2024-10-08T19:45:38.402576583Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 8 19:45:38.402666 containerd[1594]: time="2024-10-08T19:45:38.402618997Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 8 19:45:38.402666 containerd[1594]: time="2024-10-08T19:45:38.402632371Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 8 19:45:38.403995 containerd[1594]: time="2024-10-08T19:45:38.402822508Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Oct 8 19:45:38.403995 containerd[1594]: time="2024-10-08T19:45:38.402847116Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 8 19:45:38.403995 containerd[1594]: time="2024-10-08T19:45:38.402858924Z" level=info msg="NRI interface is disabled by configuration." Oct 8 19:45:38.403995 containerd[1594]: time="2024-10-08T19:45:38.402933129Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 8 19:45:38.403995 containerd[1594]: time="2024-10-08T19:45:38.403063734Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 8 19:45:38.403995 containerd[1594]: time="2024-10-08T19:45:38.403079745Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 8 19:45:38.403995 containerd[1594]: time="2024-10-08T19:45:38.403095984Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 8 19:45:38.403995 containerd[1594]: time="2024-10-08T19:45:38.403110543Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 8 19:45:38.403995 containerd[1594]: time="2024-10-08T19:45:38.403124108Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 19:45:38.403995 containerd[1594]: time="2024-10-08T19:45:38.403143977Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 8 19:45:38.403995 containerd[1594]: time="2024-10-08T19:45:38.403157619Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 8 19:45:38.403995 containerd[1594]: time="2024-10-08T19:45:38.403169808Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Oct 8 19:45:38.403995 containerd[1594]: time="2024-10-08T19:45:38.403183029Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 8 19:45:38.403995 containerd[1594]: time="2024-10-08T19:45:38.403196174Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 8 19:45:38.404318 containerd[1594]: time="2024-10-08T19:45:38.403209242Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 8 19:45:38.404318 containerd[1594]: time="2024-10-08T19:45:38.403221011Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 8 19:45:38.404318 containerd[1594]: time="2024-10-08T19:45:38.403345044Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 8 19:45:38.409364 containerd[1594]: time="2024-10-08T19:45:38.407413405Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 8 19:45:38.409364 containerd[1594]: time="2024-10-08T19:45:38.407460061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 19:45:38.409364 containerd[1594]: time="2024-10-08T19:45:38.407473587Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 19:45:38.409364 containerd[1594]: time="2024-10-08T19:45:38.407498845Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 8 19:45:38.409364 containerd[1594]: time="2024-10-08T19:45:38.407603772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Oct 8 19:45:38.409364 containerd[1594]: time="2024-10-08T19:45:38.407616344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 19:45:38.409364 containerd[1594]: time="2024-10-08T19:45:38.407628877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 8 19:45:38.409364 containerd[1594]: time="2024-10-08T19:45:38.407640646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 8 19:45:38.409364 containerd[1594]: time="2024-10-08T19:45:38.407652033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 8 19:45:38.409364 containerd[1594]: time="2024-10-08T19:45:38.407663381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 8 19:45:38.409364 containerd[1594]: time="2024-10-08T19:45:38.407674577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 8 19:45:38.409364 containerd[1594]: time="2024-10-08T19:45:38.407685506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 8 19:45:38.409364 containerd[1594]: time="2024-10-08T19:45:38.407698727Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 8 19:45:38.409364 containerd[1594]: time="2024-10-08T19:45:38.407840642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 8 19:45:38.409649 containerd[1594]: time="2024-10-08T19:45:38.407860550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 8 19:45:38.409649 containerd[1594]: time="2024-10-08T19:45:38.407872510Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Oct 8 19:45:38.409649 containerd[1594]: time="2024-10-08T19:45:38.407883973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 8 19:45:38.409649 containerd[1594]: time="2024-10-08T19:45:38.407895169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 8 19:45:38.409649 containerd[1594]: time="2024-10-08T19:45:38.407909957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 8 19:45:38.409649 containerd[1594]: time="2024-10-08T19:45:38.407923904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 19:45:38.409649 containerd[1594]: time="2024-10-08T19:45:38.407934985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 8 19:45:38.409818 containerd[1594]: time="2024-10-08T19:45:38.408217173Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] 
NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 19:45:38.409818 containerd[1594]: time="2024-10-08T19:45:38.408295047Z" level=info msg="Connect containerd service" Oct 8 19:45:38.409818 containerd[1594]: time="2024-10-08T19:45:38.408330622Z" level=info msg="using legacy CRI server" Oct 8 19:45:38.409818 containerd[1594]: time="2024-10-08T19:45:38.408337117Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 19:45:38.409818 containerd[1594]: time="2024-10-08T19:45:38.408474180Z" level=info msg="Get image 
filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 19:45:38.409818 containerd[1594]: time="2024-10-08T19:45:38.409087314Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 19:45:38.409818 containerd[1594]: time="2024-10-08T19:45:38.409122200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 8 19:45:38.409818 containerd[1594]: time="2024-10-08T19:45:38.409139204Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 19:45:38.409818 containerd[1594]: time="2024-10-08T19:45:38.409149406Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 19:45:38.409818 containerd[1594]: time="2024-10-08T19:45:38.409162054Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 19:45:38.414311 containerd[1594]: time="2024-10-08T19:45:38.413598770Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 19:45:38.414311 containerd[1594]: time="2024-10-08T19:45:38.413645998Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Oct 8 19:45:38.414311 containerd[1594]: time="2024-10-08T19:45:38.413906215Z" level=info msg="Start subscribing containerd event" Oct 8 19:45:38.414311 containerd[1594]: time="2024-10-08T19:45:38.413997081Z" level=info msg="Start recovering state" Oct 8 19:45:38.414311 containerd[1594]: time="2024-10-08T19:45:38.414053633Z" level=info msg="Start event monitor" Oct 8 19:45:38.414311 containerd[1594]: time="2024-10-08T19:45:38.414063301Z" level=info msg="Start snapshots syncer" Oct 8 19:45:38.414311 containerd[1594]: time="2024-10-08T19:45:38.414071516Z" level=info msg="Start cni network conf syncer for default" Oct 8 19:45:38.414311 containerd[1594]: time="2024-10-08T19:45:38.414077515Z" level=info msg="Start streaming server" Oct 8 19:45:38.414311 containerd[1594]: time="2024-10-08T19:45:38.414196466Z" level=info msg="containerd successfully booted in 0.102971s" Oct 8 19:45:38.414332 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 19:45:38.520778 tar[1590]: linux-arm64/LICENSE Oct 8 19:45:38.520778 tar[1590]: linux-arm64/README.md Oct 8 19:45:38.544564 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 19:45:38.555611 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:45:38.556930 (kubelet)[1677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:45:39.131467 kubelet[1677]: E1008 19:45:39.131362 1677 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:45:39.133809 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:45:39.134105 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:45:40.133456 sshd_keygen[1574]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 19:45:40.161774 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 8 19:45:40.172691 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 8 19:45:40.180857 systemd[1]: issuegen.service: Deactivated successfully. Oct 8 19:45:40.181275 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 8 19:45:40.194615 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 8 19:45:40.206437 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 8 19:45:40.214705 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 8 19:45:40.217215 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 8 19:45:40.218432 systemd[1]: Reached target getty.target - Login Prompts. Oct 8 19:45:40.219093 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 19:45:40.219926 systemd[1]: Startup finished in 6.184s (kernel) + 6.138s (userspace) = 12.322s. Oct 8 19:45:49.215784 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Oct 8 19:45:49.224580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:45:49.341797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:45:49.354781 (kubelet)[1722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:45:49.412433 kubelet[1722]: E1008 19:45:49.412337 1722 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:45:49.417467 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:45:49.417704 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:45:59.465320 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 8 19:45:59.474662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:45:59.582483 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:45:59.587197 (kubelet)[1744]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:45:59.644555 kubelet[1744]: E1008 19:45:59.644423 1744 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:45:59.647531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:45:59.647673 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 8 19:46:09.715350 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 8 19:46:09.725591 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:46:09.847547 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:46:09.859760 (kubelet)[1765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:46:09.909817 kubelet[1765]: E1008 19:46:09.909749 1765 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:46:09.912937 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:46:09.913124 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:46:19.965216 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 8 19:46:19.975702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:46:20.098503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:46:20.115011 (kubelet)[1786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:46:20.169538 kubelet[1786]: E1008 19:46:20.169445 1786 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:46:20.172855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:46:20.173042 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:46:22.906423 update_engine[1570]: I1008 19:46:22.905407 1570 update_attempter.cc:509] Updating boot flags... Oct 8 19:46:22.956362 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1804) Oct 8 19:46:23.029395 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1806) Oct 8 19:46:30.215187 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Oct 8 19:46:30.222694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:46:30.347549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:46:30.365831 (kubelet)[1825]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:46:30.415479 kubelet[1825]: E1008 19:46:30.415411 1825 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:46:30.418418 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:46:30.418606 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:46:40.464929 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Oct 8 19:46:40.471606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:46:40.592540 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:46:40.597926 (kubelet)[1846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:46:40.654938 kubelet[1846]: E1008 19:46:40.654860 1846 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:46:40.658759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:46:40.659034 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:46:50.715362 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Oct 8 19:46:50.725587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 8 19:46:50.837521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:46:50.851117 (kubelet)[1867]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:46:50.898848 kubelet[1867]: E1008 19:46:50.898748 1867 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:46:50.901380 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:46:50.901558 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:47:00.965117 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Oct 8 19:47:00.971657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:47:01.098823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:47:01.102851 (kubelet)[1888]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:47:01.151885 kubelet[1888]: E1008 19:47:01.151796 1888 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:47:01.155763 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:47:01.155943 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:47:11.215149 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. 
Oct 8 19:47:11.223962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:47:11.352525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:47:11.368098 (kubelet)[1909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:47:11.418375 kubelet[1909]: E1008 19:47:11.418244 1909 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:47:11.420827 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:47:11.420989 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:47:21.465375 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Oct 8 19:47:21.473629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:47:21.609626 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:47:21.620733 (kubelet)[1931]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:47:21.679463 kubelet[1931]: E1008 19:47:21.679374 1931 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:47:21.682160 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:47:21.682332 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 8 19:47:28.826988 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 19:47:28.837017 systemd[1]: Started sshd@0-168.119.51.132:22-139.178.89.65:46026.service - OpenSSH per-connection server daemon (139.178.89.65:46026). Oct 8 19:47:29.852826 sshd[1940]: Accepted publickey for core from 139.178.89.65 port 46026 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:47:29.856125 sshd[1940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:47:29.869943 systemd-logind[1566]: New session 1 of user core. Oct 8 19:47:29.870560 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 19:47:29.876715 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 19:47:29.891607 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 8 19:47:29.898704 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 19:47:29.910064 (systemd)[1946]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:47:30.016637 systemd[1946]: Queued start job for default target default.target. Oct 8 19:47:30.017011 systemd[1946]: Created slice app.slice - User Application Slice. Oct 8 19:47:30.017030 systemd[1946]: Reached target paths.target - Paths. Oct 8 19:47:30.017042 systemd[1946]: Reached target timers.target - Timers. Oct 8 19:47:30.027515 systemd[1946]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 19:47:30.034650 systemd[1946]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 19:47:30.036737 systemd[1946]: Reached target sockets.target - Sockets. Oct 8 19:47:30.036981 systemd[1946]: Reached target basic.target - Basic System. Oct 8 19:47:30.037246 systemd[1946]: Reached target default.target - Main User Target. Oct 8 19:47:30.037586 systemd[1]: Started user@500.service - User Manager for UID 500. 
Oct 8 19:47:30.039394 systemd[1946]: Startup finished in 122ms. Oct 8 19:47:30.040699 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 19:47:30.732650 systemd[1]: Started sshd@1-168.119.51.132:22-139.178.89.65:46040.service - OpenSSH per-connection server daemon (139.178.89.65:46040). Oct 8 19:47:31.708642 sshd[1958]: Accepted publickey for core from 139.178.89.65 port 46040 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:47:31.710993 sshd[1958]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:47:31.712453 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Oct 8 19:47:31.719510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:47:31.725147 systemd-logind[1566]: New session 2 of user core. Oct 8 19:47:31.736615 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 19:47:31.841547 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:47:31.853746 (kubelet)[1974]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:47:31.897036 kubelet[1974]: E1008 19:47:31.896968 1974 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:47:31.901935 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:47:31.902488 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:47:32.381733 sshd[1958]: pam_unix(sshd:session): session closed for user core Oct 8 19:47:32.385752 systemd[1]: sshd@1-168.119.51.132:22-139.178.89.65:46040.service: Deactivated successfully. 
Oct 8 19:47:32.391308 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 19:47:32.393129 systemd-logind[1566]: Session 2 logged out. Waiting for processes to exit. Oct 8 19:47:32.394944 systemd-logind[1566]: Removed session 2. Oct 8 19:47:32.546669 systemd[1]: Started sshd@2-168.119.51.132:22-139.178.89.65:46054.service - OpenSSH per-connection server daemon (139.178.89.65:46054). Oct 8 19:47:33.505608 sshd[1988]: Accepted publickey for core from 139.178.89.65 port 46054 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:47:33.507645 sshd[1988]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:47:33.515706 systemd-logind[1566]: New session 3 of user core. Oct 8 19:47:33.518635 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 19:47:34.166360 sshd[1988]: pam_unix(sshd:session): session closed for user core Oct 8 19:47:34.172256 systemd[1]: sshd@2-168.119.51.132:22-139.178.89.65:46054.service: Deactivated successfully. Oct 8 19:47:34.173575 systemd-logind[1566]: Session 3 logged out. Waiting for processes to exit. Oct 8 19:47:34.177055 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 19:47:34.177955 systemd-logind[1566]: Removed session 3. Oct 8 19:47:34.335809 systemd[1]: Started sshd@3-168.119.51.132:22-139.178.89.65:46060.service - OpenSSH per-connection server daemon (139.178.89.65:46060). Oct 8 19:47:35.316206 sshd[1996]: Accepted publickey for core from 139.178.89.65 port 46060 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:47:35.318456 sshd[1996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:47:35.325889 systemd-logind[1566]: New session 4 of user core. Oct 8 19:47:35.332727 systemd[1]: Started session-4.scope - Session 4 of User core. 
Oct 8 19:47:36.000613 sshd[1996]: pam_unix(sshd:session): session closed for user core Oct 8 19:47:36.006129 systemd[1]: sshd@3-168.119.51.132:22-139.178.89.65:46060.service: Deactivated successfully. Oct 8 19:47:36.010620 systemd-logind[1566]: Session 4 logged out. Waiting for processes to exit. Oct 8 19:47:36.011413 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 19:47:36.012836 systemd-logind[1566]: Removed session 4. Oct 8 19:47:36.178820 systemd[1]: Started sshd@4-168.119.51.132:22-139.178.89.65:47444.service - OpenSSH per-connection server daemon (139.178.89.65:47444). Oct 8 19:47:37.188670 sshd[2004]: Accepted publickey for core from 139.178.89.65 port 47444 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:47:37.191065 sshd[2004]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:47:37.198058 systemd-logind[1566]: New session 5 of user core. Oct 8 19:47:37.204729 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 8 19:47:37.752465 sudo[2008]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 19:47:37.752709 sudo[2008]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 8 19:47:37.767629 sudo[2008]: pam_unix(sudo:session): session closed for user root Oct 8 19:47:37.933763 sshd[2004]: pam_unix(sshd:session): session closed for user core Oct 8 19:47:37.938853 systemd-logind[1566]: Session 5 logged out. Waiting for processes to exit. Oct 8 19:47:37.942641 systemd[1]: sshd@4-168.119.51.132:22-139.178.89.65:47444.service: Deactivated successfully. Oct 8 19:47:37.946021 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 19:47:37.947550 systemd-logind[1566]: Removed session 5. Oct 8 19:47:38.103652 systemd[1]: Started sshd@5-168.119.51.132:22-139.178.89.65:47460.service - OpenSSH per-connection server daemon (139.178.89.65:47460). 
Oct 8 19:47:39.101916 sshd[2013]: Accepted publickey for core from 139.178.89.65 port 47460 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:47:39.104064 sshd[2013]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:47:39.111770 systemd-logind[1566]: New session 6 of user core. Oct 8 19:47:39.116814 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 8 19:47:39.632231 sudo[2018]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 19:47:39.632647 sudo[2018]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 8 19:47:39.637188 sudo[2018]: pam_unix(sudo:session): session closed for user root Oct 8 19:47:39.644057 sudo[2017]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 19:47:39.644536 sudo[2017]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 8 19:47:39.659757 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 19:47:39.674313 auditctl[2021]: No rules Oct 8 19:47:39.674974 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 19:47:39.675453 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 19:47:39.687078 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 19:47:39.718790 augenrules[2040]: No rules Oct 8 19:47:39.721024 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 19:47:39.723777 sudo[2017]: pam_unix(sudo:session): session closed for user root Oct 8 19:47:39.888479 sshd[2013]: pam_unix(sshd:session): session closed for user core Oct 8 19:47:39.893720 systemd[1]: sshd@5-168.119.51.132:22-139.178.89.65:47460.service: Deactivated successfully. Oct 8 19:47:39.897501 systemd-logind[1566]: Session 6 logged out. Waiting for processes to exit. 
Oct 8 19:47:39.898973 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 19:47:39.901839 systemd-logind[1566]: Removed session 6. Oct 8 19:47:40.056669 systemd[1]: Started sshd@6-168.119.51.132:22-139.178.89.65:47470.service - OpenSSH per-connection server daemon (139.178.89.65:47470). Oct 8 19:47:41.057493 sshd[2049]: Accepted publickey for core from 139.178.89.65 port 47470 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:47:41.059525 sshd[2049]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:47:41.067456 systemd-logind[1566]: New session 7 of user core. Oct 8 19:47:41.073628 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 19:47:41.590221 sudo[2053]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 19:47:41.590539 sudo[2053]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 8 19:47:41.712682 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 8 19:47:41.712916 (dockerd)[2062]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 19:47:41.964985 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Oct 8 19:47:41.975016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:47:41.980234 dockerd[2062]: time="2024-10-08T19:47:41.980168211Z" level=info msg="Starting up" Oct 8 19:47:42.040507 systemd[1]: var-lib-docker-metacopy\x2dcheck633712379-merged.mount: Deactivated successfully. Oct 8 19:47:42.053455 dockerd[2062]: time="2024-10-08T19:47:42.052991832Z" level=info msg="Loading containers: start." Oct 8 19:47:42.140720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:47:42.143756 (kubelet)[2104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:47:42.206403 kubelet[2104]: E1008 19:47:42.206312 2104 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:47:42.209392 kernel: Initializing XFRM netlink socket Oct 8 19:47:42.212718 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:47:42.213298 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:47:42.316807 systemd-networkd[1253]: docker0: Link UP Oct 8 19:47:42.342114 dockerd[2062]: time="2024-10-08T19:47:42.341281063Z" level=info msg="Loading containers: done." Oct 8 19:47:42.419427 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2211369344-merged.mount: Deactivated successfully. Oct 8 19:47:42.422410 dockerd[2062]: time="2024-10-08T19:47:42.422359275Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 19:47:42.422603 dockerd[2062]: time="2024-10-08T19:47:42.422569962Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Oct 8 19:47:42.422725 dockerd[2062]: time="2024-10-08T19:47:42.422699646Z" level=info msg="Daemon has completed initialization" Oct 8 19:47:42.453175 dockerd[2062]: time="2024-10-08T19:47:42.453115602Z" level=info msg="API listen on /run/docker.sock" Oct 8 19:47:42.453349 systemd[1]: Started docker.service - Docker Application Container Engine. 
Oct 8 19:47:43.494801 containerd[1594]: time="2024-10-08T19:47:43.494763431Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\"" Oct 8 19:47:44.180970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount979097021.mount: Deactivated successfully. Oct 8 19:47:46.121316 containerd[1594]: time="2024-10-08T19:47:46.121238066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:47:46.122483 containerd[1594]: time="2024-10-08T19:47:46.122450826Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=32286150" Oct 8 19:47:46.123561 containerd[1594]: time="2024-10-08T19:47:46.123503821Z" level=info msg="ImageCreate event name:\"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:47:46.127394 containerd[1594]: time="2024-10-08T19:47:46.127353148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:47:46.128794 containerd[1594]: time="2024-10-08T19:47:46.128621749Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"32282858\" in 2.633817917s" Oct 8 19:47:46.128794 containerd[1594]: time="2024-10-08T19:47:46.128661111Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\""
Oct 8 19:47:46.152099 containerd[1594]: time="2024-10-08T19:47:46.151842635Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\"" Oct 8 19:47:48.655248 containerd[1594]: time="2024-10-08T19:47:48.654236953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:47:48.656655 containerd[1594]: time="2024-10-08T19:47:48.656623591Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=29374224" Oct 8 19:47:48.657617 containerd[1594]: time="2024-10-08T19:47:48.657555022Z" level=info msg="ImageCreate event name:\"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:47:48.662002 containerd[1594]: time="2024-10-08T19:47:48.661942407Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:47:48.664720 containerd[1594]: time="2024-10-08T19:47:48.664647937Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"30862018\" in 2.51275182s" Oct 8 19:47:48.664720 containerd[1594]: time="2024-10-08T19:47:48.664715179Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\"" Oct 8 19:47:48.691816 containerd[1594]: time="2024-10-08T19:47:48.691767554Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\""
Oct 8 19:47:49.710752 systemd[1]: Started sshd@7-168.119.51.132:22-202.157.186.116:59806.service - OpenSSH per-connection server daemon (202.157.186.116:59806). Oct 8 19:47:50.376306 containerd[1594]: time="2024-10-08T19:47:50.374914859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:47:50.376837 containerd[1594]: time="2024-10-08T19:47:50.376795601Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=15751237" Oct 8 19:47:50.377415 containerd[1594]: time="2024-10-08T19:47:50.377055170Z" level=info msg="ImageCreate event name:\"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:47:50.380465 containerd[1594]: time="2024-10-08T19:47:50.380387201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:47:50.382237 containerd[1594]: time="2024-10-08T19:47:50.381981814Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"17239049\" in 1.690161417s" Oct 8 19:47:50.382237 containerd[1594]: time="2024-10-08T19:47:50.382030815Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\"" Oct 8 19:47:50.403439 containerd[1594]: time="2024-10-08T19:47:50.403399084Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\""
Oct 8 19:47:50.927585 sshd[2282]: Received disconnect from 202.157.186.116 port 59806:11: Bye Bye [preauth] Oct 8 19:47:50.927585 sshd[2282]: Disconnected from authenticating user root 202.157.186.116 port 59806 [preauth] Oct 8 19:47:50.932645 systemd[1]: sshd@7-168.119.51.132:22-202.157.186.116:59806.service: Deactivated successfully. Oct 8 19:47:51.783188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount995002431.mount: Deactivated successfully. Oct 8 19:47:52.215780 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Oct 8 19:47:52.225863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:47:52.328521 containerd[1594]: time="2024-10-08T19:47:52.328458153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:47:52.329281 containerd[1594]: time="2024-10-08T19:47:52.329246619Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=25254064" Oct 8 19:47:52.330365 containerd[1594]: time="2024-10-08T19:47:52.330330535Z" level=info msg="ImageCreate event name:\"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:47:52.336946 containerd[1594]: time="2024-10-08T19:47:52.335212978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:47:52.339889 containerd[1594]: time="2024-10-08T19:47:52.339844572Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"25253057\" in 1.936399685s"
Oct 8 19:47:52.340051 containerd[1594]: time="2024-10-08T19:47:52.340031458Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\"" Oct 8 19:47:52.355501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:47:52.366023 (kubelet)[2320]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:47:52.375845 containerd[1594]: time="2024-10-08T19:47:52.375748967Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 19:47:52.417852 kubelet[2320]: E1008 19:47:52.417719 2320 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:47:52.420444 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:47:52.420852 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:47:53.075900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2173289392.mount: Deactivated successfully.
Oct 8 19:47:53.893548 containerd[1594]: time="2024-10-08T19:47:53.893486108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:47:53.894734 containerd[1594]: time="2024-10-08T19:47:53.894640346Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Oct 8 19:47:53.895603 containerd[1594]: time="2024-10-08T19:47:53.895546697Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:47:53.899329 containerd[1594]: time="2024-10-08T19:47:53.899245180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:47:53.902931 containerd[1594]: time="2024-10-08T19:47:53.902179318Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.526368948s" Oct 8 19:47:53.902931 containerd[1594]: time="2024-10-08T19:47:53.902248160Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Oct 8 19:47:53.928231 containerd[1594]: time="2024-10-08T19:47:53.928177425Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 8 19:47:54.516907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2979988178.mount: Deactivated successfully. 
Oct 8 19:47:54.522656 containerd[1594]: time="2024-10-08T19:47:54.522604511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:47:54.523585 containerd[1594]: time="2024-10-08T19:47:54.523530502Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Oct 8 19:47:54.524488 containerd[1594]: time="2024-10-08T19:47:54.524435812Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:47:54.528521 containerd[1594]: time="2024-10-08T19:47:54.528447746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:47:54.529644 containerd[1594]: time="2024-10-08T19:47:54.529507141Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 601.273635ms" Oct 8 19:47:54.529644 containerd[1594]: time="2024-10-08T19:47:54.529551223Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Oct 8 19:47:54.557626 containerd[1594]: time="2024-10-08T19:47:54.557326750Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Oct 8 19:47:55.184190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2351534159.mount: Deactivated successfully. 
Oct 8 19:47:58.479786 containerd[1594]: time="2024-10-08T19:47:58.479721364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:47:58.480979 containerd[1594]: time="2024-10-08T19:47:58.480942485Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200866" Oct 8 19:47:58.481680 containerd[1594]: time="2024-10-08T19:47:58.481615547Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:47:58.485246 containerd[1594]: time="2024-10-08T19:47:58.485188347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:47:58.487049 containerd[1594]: time="2024-10-08T19:47:58.486844003Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.929476411s" Oct 8 19:47:58.487049 containerd[1594]: time="2024-10-08T19:47:58.486900125Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Oct 8 19:48:02.464804 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Oct 8 19:48:02.472651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:48:02.600532 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:48:02.602939 (kubelet)[2505]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:48:02.651487 kubelet[2505]: E1008 19:48:02.651428 2505 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:48:02.656590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:48:02.656733 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:48:04.815471 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:48:04.822695 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:48:04.848520 systemd[1]: Reloading requested from client PID 2521 ('systemctl') (unit session-7.scope)... Oct 8 19:48:04.848703 systemd[1]: Reloading... Oct 8 19:48:04.978318 zram_generator::config[2559]: No configuration found. Oct 8 19:48:05.071804 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:48:05.138221 systemd[1]: Reloading finished in 288 ms. Oct 8 19:48:05.186272 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 8 19:48:05.186368 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 8 19:48:05.186960 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:48:05.199851 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:48:05.313461 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:48:05.327054 (kubelet)[2618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:48:05.384139 kubelet[2618]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:48:05.384139 kubelet[2618]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:48:05.384139 kubelet[2618]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:48:05.384625 kubelet[2618]: I1008 19:48:05.384185 2618 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:48:06.051484 kubelet[2618]: I1008 19:48:06.051407 2618 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 8 19:48:06.051484 kubelet[2618]: I1008 19:48:06.051448 2618 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:48:06.051722 kubelet[2618]: I1008 19:48:06.051693 2618 server.go:919] "Client rotation is on, will bootstrap in background" Oct 8 19:48:06.076994 kubelet[2618]: E1008 19:48:06.076889 2618 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://168.119.51.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 168.119.51.132:6443: connect: connection refused
Oct 8 19:48:06.077182 kubelet[2618]: I1008 19:48:06.077168 2618 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:48:06.090637 kubelet[2618]: I1008 19:48:06.090598 2618 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 8 19:48:06.090985 kubelet[2618]: I1008 19:48:06.090970 2618 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:48:06.091179 kubelet[2618]: I1008 19:48:06.091160 2618 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 19:48:06.091339 kubelet[2618]: I1008 19:48:06.091183 2618 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:48:06.091339 kubelet[2618]: I1008 19:48:06.091193 2618 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 19:48:06.092747 kubelet[2618]: I1008 19:48:06.092703 2618 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:48:06.097486 kubelet[2618]: I1008 19:48:06.097448 2618 kubelet.go:396] "Attempting to sync node with API server" Oct 8 19:48:06.097486 kubelet[2618]: I1008 19:48:06.097488 2618 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:48:06.097935 kubelet[2618]: I1008 19:48:06.097514 2618 kubelet.go:312] "Adding apiserver pod source" Oct 8 19:48:06.097935 kubelet[2618]: I1008 19:48:06.097586 2618 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:48:06.099833 kubelet[2618]: I1008 19:48:06.099811 2618 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 8 19:48:06.100594 kubelet[2618]: I1008 19:48:06.100405 2618 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:48:06.101350 kubelet[2618]: W1008 19:48:06.101325 2618 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 8 19:48:06.102501 kubelet[2618]: I1008 19:48:06.102477 2618 server.go:1256] "Started kubelet" Oct 8 19:48:06.103262 kubelet[2618]: W1008 19:48:06.102727 2618 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://168.119.51.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-d-c7549a9f5e&limit=500&resourceVersion=0": dial tcp 168.119.51.132:6443: connect: connection refused Oct 8 19:48:06.103262 kubelet[2618]: E1008 19:48:06.102801 2618 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://168.119.51.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-d-c7549a9f5e&limit=500&resourceVersion=0": dial tcp 168.119.51.132:6443: connect: connection refused Oct 8 19:48:06.106160 kubelet[2618]: W1008 19:48:06.106105 2618 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://168.119.51.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 168.119.51.132:6443: connect: connection refused Oct 8 19:48:06.106160 kubelet[2618]: E1008 19:48:06.106163 2618 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://168.119.51.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 168.119.51.132:6443: connect: connection refused Oct 8 19:48:06.106342 kubelet[2618]: I1008 19:48:06.106322 2618 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:48:06.107057 kubelet[2618]: I1008 19:48:06.107030 2618 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:48:06.107262 kubelet[2618]: I1008 19:48:06.107227 2618 server.go:461] "Adding debug handlers to kubelet server" Oct 8 19:48:06.107623 kubelet[2618]: I1008 19:48:06.107602 2618 server.go:233] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:48:06.109478 kubelet[2618]: E1008 19:48:06.109451 2618 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://168.119.51.132:6443/api/v1/namespaces/default/events\": dial tcp 168.119.51.132:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975-2-2-d-c7549a9f5e.17fc920017ec4db6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975-2-2-d-c7549a9f5e,UID:ci-3975-2-2-d-c7549a9f5e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975-2-2-d-c7549a9f5e,},FirstTimestamp:2024-10-08 19:48:06.102445494 +0000 UTC m=+0.769585396,LastTimestamp:2024-10-08 19:48:06.102445494 +0000 UTC m=+0.769585396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975-2-2-d-c7549a9f5e,}" Oct 8 19:48:06.112717 kubelet[2618]: I1008 19:48:06.112483 2618 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:48:06.116562 kubelet[2618]: E1008 19:48:06.116373 2618 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975-2-2-d-c7549a9f5e\" not found" Oct 8 19:48:06.116562 kubelet[2618]: I1008 19:48:06.116409 2618 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 19:48:06.116562 kubelet[2618]: I1008 19:48:06.116497 2618 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 8 19:48:06.116562 kubelet[2618]: I1008 19:48:06.116538 2618 reconciler_new.go:29] "Reconciler: start to sync state" Oct 8 19:48:06.117171 kubelet[2618]: W1008 19:48:06.116888 2618 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://168.119.51.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
168.119.51.132:6443: connect: connection refused Oct 8 19:48:06.117171 kubelet[2618]: E1008 19:48:06.116952 2618 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://168.119.51.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.51.132:6443: connect: connection refused Oct 8 19:48:06.117171 kubelet[2618]: E1008 19:48:06.117162 2618 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.51.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-d-c7549a9f5e?timeout=10s\": dial tcp 168.119.51.132:6443: connect: connection refused" interval="200ms" Oct 8 19:48:06.118104 kubelet[2618]: I1008 19:48:06.118079 2618 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:48:06.118183 kubelet[2618]: I1008 19:48:06.118166 2618 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:48:06.120391 kubelet[2618]: E1008 19:48:06.119569 2618 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:48:06.120992 kubelet[2618]: I1008 19:48:06.120959 2618 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:48:06.134856 kubelet[2618]: I1008 19:48:06.134824 2618 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:48:06.136596 kubelet[2618]: I1008 19:48:06.136566 2618 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 19:48:06.136707 kubelet[2618]: I1008 19:48:06.136697 2618 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:48:06.136788 kubelet[2618]: I1008 19:48:06.136780 2618 kubelet.go:2329] "Starting kubelet main sync loop" Oct 8 19:48:06.136886 kubelet[2618]: E1008 19:48:06.136877 2618 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:48:06.146841 kubelet[2618]: W1008 19:48:06.146795 2618 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://168.119.51.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.51.132:6443: connect: connection refused Oct 8 19:48:06.147044 kubelet[2618]: E1008 19:48:06.147031 2618 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://168.119.51.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.51.132:6443: connect: connection refused Oct 8 19:48:06.155003 kubelet[2618]: I1008 19:48:06.154972 2618 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:48:06.155003 kubelet[2618]: I1008 19:48:06.155001 2618 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:48:06.155139 kubelet[2618]: I1008 19:48:06.155022 2618 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:48:06.157321 kubelet[2618]: I1008 19:48:06.157296 2618 policy_none.go:49] "None policy: Start" Oct 8 19:48:06.158027 kubelet[2618]: I1008 19:48:06.158008 2618 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:48:06.158074 kubelet[2618]: I1008 19:48:06.158060 2618 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:48:06.163980 kubelet[2618]: I1008 19:48:06.163542 2618 manager.go:479] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:48:06.163980 kubelet[2618]: I1008 19:48:06.163897 2618 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:48:06.166674 kubelet[2618]: E1008 19:48:06.166650 2618 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975-2-2-d-c7549a9f5e\" not found" Oct 8 19:48:06.220524 kubelet[2618]: I1008 19:48:06.220220 2618 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:06.221478 kubelet[2618]: E1008 19:48:06.221403 2618 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://168.119.51.132:6443/api/v1/nodes\": dial tcp 168.119.51.132:6443: connect: connection refused" node="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:06.237779 kubelet[2618]: I1008 19:48:06.237738 2618 topology_manager.go:215] "Topology Admit Handler" podUID="623a75cd60f6b902e7e8da60bf63c49a" podNamespace="kube-system" podName="kube-apiserver-ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:06.241011 kubelet[2618]: I1008 19:48:06.240628 2618 topology_manager.go:215] "Topology Admit Handler" podUID="d13f7740e414f934df6a8abebb45403e" podNamespace="kube-system" podName="kube-controller-manager-ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:06.245619 kubelet[2618]: I1008 19:48:06.245574 2618 topology_manager.go:215] "Topology Admit Handler" podUID="9b9dc699018353a28cc6ff3302be5d83" podNamespace="kube-system" podName="kube-scheduler-ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:06.318011 kubelet[2618]: E1008 19:48:06.317856 2618 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.51.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-d-c7549a9f5e?timeout=10s\": dial tcp 168.119.51.132:6443: connect: connection refused" interval="400ms" Oct 8 19:48:06.318360 kubelet[2618]: I1008 19:48:06.318056 2618 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d13f7740e414f934df6a8abebb45403e-ca-certs\") pod \"kube-controller-manager-ci-3975-2-2-d-c7549a9f5e\" (UID: \"d13f7740e414f934df6a8abebb45403e\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:06.318665 kubelet[2618]: I1008 19:48:06.318553 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d13f7740e414f934df6a8abebb45403e-k8s-certs\") pod \"kube-controller-manager-ci-3975-2-2-d-c7549a9f5e\" (UID: \"d13f7740e414f934df6a8abebb45403e\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:06.318896 kubelet[2618]: I1008 19:48:06.318752 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13f7740e414f934df6a8abebb45403e-kubeconfig\") pod \"kube-controller-manager-ci-3975-2-2-d-c7549a9f5e\" (UID: \"d13f7740e414f934df6a8abebb45403e\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:06.319227 kubelet[2618]: I1008 19:48:06.319053 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/623a75cd60f6b902e7e8da60bf63c49a-k8s-certs\") pod \"kube-apiserver-ci-3975-2-2-d-c7549a9f5e\" (UID: \"623a75cd60f6b902e7e8da60bf63c49a\") " pod="kube-system/kube-apiserver-ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:06.319492 kubelet[2618]: I1008 19:48:06.319360 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/623a75cd60f6b902e7e8da60bf63c49a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975-2-2-d-c7549a9f5e\" (UID: \"623a75cd60f6b902e7e8da60bf63c49a\") " 
pod="kube-system/kube-apiserver-ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:06.319492 kubelet[2618]: I1008 19:48:06.319460 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d13f7740e414f934df6a8abebb45403e-flexvolume-dir\") pod \"kube-controller-manager-ci-3975-2-2-d-c7549a9f5e\" (UID: \"d13f7740e414f934df6a8abebb45403e\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:06.319973 kubelet[2618]: I1008 19:48:06.319812 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d13f7740e414f934df6a8abebb45403e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975-2-2-d-c7549a9f5e\" (UID: \"d13f7740e414f934df6a8abebb45403e\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:06.319973 kubelet[2618]: I1008 19:48:06.319901 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b9dc699018353a28cc6ff3302be5d83-kubeconfig\") pod \"kube-scheduler-ci-3975-2-2-d-c7549a9f5e\" (UID: \"9b9dc699018353a28cc6ff3302be5d83\") " pod="kube-system/kube-scheduler-ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:06.320356 kubelet[2618]: I1008 19:48:06.320170 2618 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/623a75cd60f6b902e7e8da60bf63c49a-ca-certs\") pod \"kube-apiserver-ci-3975-2-2-d-c7549a9f5e\" (UID: \"623a75cd60f6b902e7e8da60bf63c49a\") " pod="kube-system/kube-apiserver-ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:06.424588 kubelet[2618]: I1008 19:48:06.424522 2618 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:06.425347 kubelet[2618]: E1008 19:48:06.425063 2618 
kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://168.119.51.132:6443/api/v1/nodes\": dial tcp 168.119.51.132:6443: connect: connection refused" node="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:06.550152 containerd[1594]: time="2024-10-08T19:48:06.550039648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975-2-2-d-c7549a9f5e,Uid:623a75cd60f6b902e7e8da60bf63c49a,Namespace:kube-system,Attempt:0,}" Oct 8 19:48:06.553098 containerd[1594]: time="2024-10-08T19:48:06.552827823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975-2-2-d-c7549a9f5e,Uid:d13f7740e414f934df6a8abebb45403e,Namespace:kube-system,Attempt:0,}" Oct 8 19:48:06.560098 containerd[1594]: time="2024-10-08T19:48:06.559745777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975-2-2-d-c7549a9f5e,Uid:9b9dc699018353a28cc6ff3302be5d83,Namespace:kube-system,Attempt:0,}" Oct 8 19:48:06.719741 kubelet[2618]: E1008 19:48:06.719689 2618 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.51.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-d-c7549a9f5e?timeout=10s\": dial tcp 168.119.51.132:6443: connect: connection refused" interval="800ms" Oct 8 19:48:06.827571 kubelet[2618]: I1008 19:48:06.827524 2618 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:06.827972 kubelet[2618]: E1008 19:48:06.827930 2618 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://168.119.51.132:6443/api/v1/nodes\": dial tcp 168.119.51.132:6443: connect: connection refused" node="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:06.946517 kubelet[2618]: W1008 19:48:06.946371 2618 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://168.119.51.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.51.132:6443: connect: connection refused Oct 8 19:48:06.946517 kubelet[2618]: E1008 19:48:06.946456 2618 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://168.119.51.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 168.119.51.132:6443: connect: connection refused Oct 8 19:48:07.178418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount67382751.mount: Deactivated successfully. Oct 8 19:48:07.185436 containerd[1594]: time="2024-10-08T19:48:07.185372605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:48:07.187154 containerd[1594]: time="2024-10-08T19:48:07.187101064Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Oct 8 19:48:07.187779 kubelet[2618]: W1008 19:48:07.187702 2618 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://168.119.51.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-d-c7549a9f5e&limit=500&resourceVersion=0": dial tcp 168.119.51.132:6443: connect: connection refused Oct 8 19:48:07.187779 kubelet[2618]: E1008 19:48:07.187770 2618 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://168.119.51.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-d-c7549a9f5e&limit=500&resourceVersion=0": dial tcp 168.119.51.132:6443: connect: connection refused Oct 8 19:48:07.190522 containerd[1594]: time="2024-10-08T19:48:07.190469378Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:48:07.191703 containerd[1594]: time="2024-10-08T19:48:07.191632537Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:48:07.193120 containerd[1594]: time="2024-10-08T19:48:07.193080226Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:48:07.194044 containerd[1594]: time="2024-10-08T19:48:07.193949776Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:48:07.194044 containerd[1594]: time="2024-10-08T19:48:07.194008218Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:48:07.200367 containerd[1594]: time="2024-10-08T19:48:07.200281270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:48:07.202274 containerd[1594]: time="2024-10-08T19:48:07.201967727Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 648.975699ms" Oct 8 19:48:07.205463 containerd[1594]: time="2024-10-08T19:48:07.205402164Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 655.191189ms" Oct 8 19:48:07.207029 containerd[1594]: time="2024-10-08T19:48:07.206662006Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 646.794585ms" Oct 8 19:48:07.383570 containerd[1594]: time="2024-10-08T19:48:07.383433437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:48:07.383896 containerd[1594]: time="2024-10-08T19:48:07.383624644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:07.384057 containerd[1594]: time="2024-10-08T19:48:07.383993056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:48:07.384057 containerd[1594]: time="2024-10-08T19:48:07.384047938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:07.390313 containerd[1594]: time="2024-10-08T19:48:07.390195826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:48:07.391137 containerd[1594]: time="2024-10-08T19:48:07.391012734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:07.391726 containerd[1594]: time="2024-10-08T19:48:07.391583713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:48:07.391726 containerd[1594]: time="2024-10-08T19:48:07.391659596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:07.394403 containerd[1594]: time="2024-10-08T19:48:07.394168881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:48:07.394403 containerd[1594]: time="2024-10-08T19:48:07.394222723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:07.394403 containerd[1594]: time="2024-10-08T19:48:07.394237443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:48:07.394403 containerd[1594]: time="2024-10-08T19:48:07.394246964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:07.432840 kubelet[2618]: W1008 19:48:07.431973 2618 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://168.119.51.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.51.132:6443: connect: connection refused Oct 8 19:48:07.432840 kubelet[2618]: E1008 19:48:07.432018 2618 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://168.119.51.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 168.119.51.132:6443: connect: connection refused Oct 8 19:48:07.468056 containerd[1594]: time="2024-10-08T19:48:07.467911020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975-2-2-d-c7549a9f5e,Uid:d13f7740e414f934df6a8abebb45403e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2b49e097c4f5db73f7ee695dbe811f2292db963890832f03d03e4efaf9aac67\"" Oct 8 19:48:07.471847 containerd[1594]: time="2024-10-08T19:48:07.470652393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975-2-2-d-c7549a9f5e,Uid:9b9dc699018353a28cc6ff3302be5d83,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba902c3a94ba0fedb9bf92f6a322804609434b5467c768f59bfd754ddd300123\"" Oct 8 19:48:07.472512 containerd[1594]: time="2024-10-08T19:48:07.472472854Z" level=info msg="CreateContainer within sandbox \"a2b49e097c4f5db73f7ee695dbe811f2292db963890832f03d03e4efaf9aac67\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 19:48:07.474925 containerd[1594]: time="2024-10-08T19:48:07.474883136Z" level=info msg="CreateContainer within sandbox \"ba902c3a94ba0fedb9bf92f6a322804609434b5467c768f59bfd754ddd300123\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 19:48:07.478854 containerd[1594]: 
time="2024-10-08T19:48:07.478810989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975-2-2-d-c7549a9f5e,Uid:623a75cd60f6b902e7e8da60bf63c49a,Namespace:kube-system,Attempt:0,} returns sandbox id \"04e71346107ee1c1d98343d1ae38b562f48c4f0a96e96e282a6bb6094b9518b9\"" Oct 8 19:48:07.485951 containerd[1594]: time="2024-10-08T19:48:07.485913230Z" level=info msg="CreateContainer within sandbox \"04e71346107ee1c1d98343d1ae38b562f48c4f0a96e96e282a6bb6094b9518b9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 19:48:07.501277 containerd[1594]: time="2024-10-08T19:48:07.501218669Z" level=info msg="CreateContainer within sandbox \"ba902c3a94ba0fedb9bf92f6a322804609434b5467c768f59bfd754ddd300123\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6f2ee9757760d32f84bf323da98c248bfc6b21d355688122bf9d5411ad053919\"" Oct 8 19:48:07.502676 containerd[1594]: time="2024-10-08T19:48:07.502556634Z" level=info msg="StartContainer for \"6f2ee9757760d32f84bf323da98c248bfc6b21d355688122bf9d5411ad053919\"" Oct 8 19:48:07.506585 containerd[1594]: time="2024-10-08T19:48:07.506471167Z" level=info msg="CreateContainer within sandbox \"a2b49e097c4f5db73f7ee695dbe811f2292db963890832f03d03e4efaf9aac67\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3fd16a736527300208545e5d4c0cef113b947226b9602fb31d6d6624f7099f59\"" Oct 8 19:48:07.507280 containerd[1594]: time="2024-10-08T19:48:07.507058187Z" level=info msg="StartContainer for \"3fd16a736527300208545e5d4c0cef113b947226b9602fb31d6d6624f7099f59\"" Oct 8 19:48:07.509872 containerd[1594]: time="2024-10-08T19:48:07.509831121Z" level=info msg="CreateContainer within sandbox \"04e71346107ee1c1d98343d1ae38b562f48c4f0a96e96e282a6bb6094b9518b9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e4a177a6304ef4022a20b3b2916874dd4bd8eabc17aa3d52ec231bd5f560ed3e\"" Oct 8 19:48:07.510454 containerd[1594]: 
time="2024-10-08T19:48:07.510427421Z" level=info msg="StartContainer for \"e4a177a6304ef4022a20b3b2916874dd4bd8eabc17aa3d52ec231bd5f560ed3e\""
Oct 8 19:48:07.520979 kubelet[2618]: E1008 19:48:07.520918 2618 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://168.119.51.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-d-c7549a9f5e?timeout=10s\": dial tcp 168.119.51.132:6443: connect: connection refused" interval="1.6s"
Oct 8 19:48:07.615481 containerd[1594]: time="2024-10-08T19:48:07.615421099Z" level=info msg="StartContainer for \"6f2ee9757760d32f84bf323da98c248bfc6b21d355688122bf9d5411ad053919\" returns successfully"
Oct 8 19:48:07.615848 containerd[1594]: time="2024-10-08T19:48:07.615605865Z" level=info msg="StartContainer for \"3fd16a736527300208545e5d4c0cef113b947226b9602fb31d6d6624f7099f59\" returns successfully"
Oct 8 19:48:07.615848 containerd[1594]: time="2024-10-08T19:48:07.615650347Z" level=info msg="StartContainer for \"e4a177a6304ef4022a20b3b2916874dd4bd8eabc17aa3d52ec231bd5f560ed3e\" returns successfully"
Oct 8 19:48:07.642011 kubelet[2618]: I1008 19:48:07.637394 2618 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:07.642011 kubelet[2618]: E1008 19:48:07.638400 2618 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://168.119.51.132:6443/api/v1/nodes\": dial tcp 168.119.51.132:6443: connect: connection refused" node="ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:07.706721 kubelet[2618]: W1008 19:48:07.705352 2618 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://168.119.51.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 168.119.51.132:6443: connect: connection refused
Oct 8 19:48:07.706721 kubelet[2618]: E1008 19:48:07.705413 2618 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://168.119.51.132:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 168.119.51.132:6443: connect: connection refused
Oct 8 19:48:09.243315 kubelet[2618]: I1008 19:48:09.241859 2618 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:10.162600 kubelet[2618]: E1008 19:48:10.162474 2618 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975-2-2-d-c7549a9f5e\" not found" node="ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:10.170354 kubelet[2618]: I1008 19:48:10.169470 2618 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:10.235644 kubelet[2618]: E1008 19:48:10.235476 2618 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3975-2-2-d-c7549a9f5e.17fc920017ec4db6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975-2-2-d-c7549a9f5e,UID:ci-3975-2-2-d-c7549a9f5e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975-2-2-d-c7549a9f5e,},FirstTimestamp:2024-10-08 19:48:06.102445494 +0000 UTC m=+0.769585396,LastTimestamp:2024-10-08 19:48:06.102445494 +0000 UTC m=+0.769585396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975-2-2-d-c7549a9f5e,}"
Oct 8 19:48:10.299324 kubelet[2618]: E1008 19:48:10.299279 2618 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3975-2-2-d-c7549a9f5e.17fc920018f15da9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975-2-2-d-c7549a9f5e,UID:ci-3975-2-2-d-c7549a9f5e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-3975-2-2-d-c7549a9f5e,},FirstTimestamp:2024-10-08 19:48:06.119554473 +0000 UTC m=+0.786694375,LastTimestamp:2024-10-08 19:48:06.119554473 +0000 UTC m=+0.786694375,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975-2-2-d-c7549a9f5e,}"
Oct 8 19:48:10.364327 kubelet[2618]: E1008 19:48:10.362901 2618 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3975-2-2-d-c7549a9f5e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:10.369851 kubelet[2618]: E1008 19:48:10.369309 2618 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3975-2-2-d-c7549a9f5e.17fc92001b051d8d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975-2-2-d-c7549a9f5e,UID:ci-3975-2-2-d-c7549a9f5e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-3975-2-2-d-c7549a9f5e status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-3975-2-2-d-c7549a9f5e,},FirstTimestamp:2024-10-08 19:48:06.154403213 +0000 UTC m=+0.821543115,LastTimestamp:2024-10-08 19:48:06.154403213 +0000 UTC m=+0.821543115,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975-2-2-d-c7549a9f5e,}"
Oct 8 19:48:10.913509 kubelet[2618]: E1008 19:48:10.913457 2618 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3975-2-2-d-c7549a9f5e\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:11.108074 kubelet[2618]: I1008 19:48:11.107973 2618 apiserver.go:52] "Watching apiserver"
Oct 8 19:48:11.117817 kubelet[2618]: I1008 19:48:11.117663 2618 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Oct 8 19:48:13.025345 systemd[1]: Reloading requested from client PID 2893 ('systemctl') (unit session-7.scope)...
Oct 8 19:48:13.025369 systemd[1]: Reloading...
Oct 8 19:48:13.117353 zram_generator::config[2933]: No configuration found.
Oct 8 19:48:13.219993 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:48:13.300384 systemd[1]: Reloading finished in 274 ms.
Oct 8 19:48:13.338908 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:48:13.355327 systemd[1]: kubelet.service: Deactivated successfully.
Oct 8 19:48:13.355842 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:48:13.372779 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:48:13.508564 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:48:13.508792 (kubelet)[2985]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 19:48:13.582753 kubelet[2985]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:48:13.585973 kubelet[2985]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:48:13.585973 kubelet[2985]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:48:13.585973 kubelet[2985]: I1008 19:48:13.583656 2985 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 19:48:13.593770 kubelet[2985]: I1008 19:48:13.593712 2985 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 8 19:48:13.593770 kubelet[2985]: I1008 19:48:13.593746 2985 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 19:48:13.594149 kubelet[2985]: I1008 19:48:13.594117 2985 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 8 19:48:13.601428 kubelet[2985]: I1008 19:48:13.601387 2985 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 8 19:48:13.604319 kubelet[2985]: I1008 19:48:13.603870 2985 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 19:48:13.614083 kubelet[2985]: I1008 19:48:13.614049 2985 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 19:48:13.614678 kubelet[2985]: I1008 19:48:13.614630 2985 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 19:48:13.614835 kubelet[2985]: I1008 19:48:13.614814 2985 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 19:48:13.614835 kubelet[2985]: I1008 19:48:13.614837 2985 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 19:48:13.614943 kubelet[2985]: I1008 19:48:13.614846 2985 container_manager_linux.go:301] "Creating device plugin manager"
Oct 8 19:48:13.614943 kubelet[2985]: I1008 19:48:13.614880 2985 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:48:13.614993 kubelet[2985]: I1008 19:48:13.614989 2985 kubelet.go:396] "Attempting to sync node with API server"
Oct 8 19:48:13.615017 kubelet[2985]: I1008 19:48:13.615004 2985 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 19:48:13.615038 kubelet[2985]: I1008 19:48:13.615024 2985 kubelet.go:312] "Adding apiserver pod source"
Oct 8 19:48:13.616703 kubelet[2985]: I1008 19:48:13.616405 2985 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 19:48:13.621818 kubelet[2985]: I1008 19:48:13.618927 2985 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Oct 8 19:48:13.621818 kubelet[2985]: I1008 19:48:13.619166 2985 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 19:48:13.621818 kubelet[2985]: I1008 19:48:13.619593 2985 server.go:1256] "Started kubelet"
Oct 8 19:48:13.623594 kubelet[2985]: I1008 19:48:13.623565 2985 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 19:48:13.645504 kubelet[2985]: I1008 19:48:13.642402 2985 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 19:48:13.645504 kubelet[2985]: I1008 19:48:13.643217 2985 server.go:461] "Adding debug handlers to kubelet server"
Oct 8 19:48:13.649821 kubelet[2985]: I1008 19:48:13.649790 2985 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 19:48:13.650690 kubelet[2985]: I1008 19:48:13.649999 2985 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 19:48:13.654275 kubelet[2985]: I1008 19:48:13.653699 2985 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 8 19:48:13.655419 kubelet[2985]: I1008 19:48:13.655387 2985 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 8 19:48:13.656024 kubelet[2985]: I1008 19:48:13.656000 2985 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 8 19:48:13.674156 kubelet[2985]: I1008 19:48:13.674122 2985 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 19:48:13.679169 kubelet[2985]: I1008 19:48:13.678719 2985 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 19:48:13.680417 kubelet[2985]: I1008 19:48:13.680389 2985 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 19:48:13.680417 kubelet[2985]: I1008 19:48:13.680414 2985 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 19:48:13.680515 kubelet[2985]: I1008 19:48:13.680439 2985 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 8 19:48:13.680515 kubelet[2985]: E1008 19:48:13.680488 2985 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 19:48:13.681590 kubelet[2985]: I1008 19:48:13.681043 2985 factory.go:221] Registration of the containerd container factory successfully
Oct 8 19:48:13.681590 kubelet[2985]: I1008 19:48:13.681058 2985 factory.go:221] Registration of the systemd container factory successfully
Oct 8 19:48:13.695514 kubelet[2985]: E1008 19:48:13.695224 2985 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 8 19:48:13.757443 kubelet[2985]: I1008 19:48:13.757416 2985 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:13.769841 kubelet[2985]: I1008 19:48:13.769428 2985 kubelet_node_status.go:112] "Node was previously registered" node="ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:13.770171 kubelet[2985]: I1008 19:48:13.770050 2985 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:13.772231 kubelet[2985]: I1008 19:48:13.770848 2985 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 19:48:13.772231 kubelet[2985]: I1008 19:48:13.770873 2985 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 19:48:13.772231 kubelet[2985]: I1008 19:48:13.770892 2985 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:48:13.772231 kubelet[2985]: I1008 19:48:13.771041 2985 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 8 19:48:13.772231 kubelet[2985]: I1008 19:48:13.771060 2985 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 8 19:48:13.772231 kubelet[2985]: I1008 19:48:13.771067 2985 policy_none.go:49] "None policy: Start"
Oct 8 19:48:13.772470 kubelet[2985]: I1008 19:48:13.772337 2985 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 8 19:48:13.772470 kubelet[2985]: I1008 19:48:13.772403 2985 state_mem.go:35] "Initializing new in-memory state store"
Oct 8 19:48:13.773441 kubelet[2985]: I1008 19:48:13.772669 2985 state_mem.go:75] "Updated machine memory state"
Oct 8 19:48:13.779306 kubelet[2985]: I1008 19:48:13.777042 2985 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 8 19:48:13.783994 kubelet[2985]: I1008 19:48:13.781609 2985 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 8 19:48:13.783994 kubelet[2985]: I1008 19:48:13.782657 2985 topology_manager.go:215] "Topology Admit Handler" podUID="623a75cd60f6b902e7e8da60bf63c49a" podNamespace="kube-system" podName="kube-apiserver-ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:13.783994 kubelet[2985]: I1008 19:48:13.782904 2985 topology_manager.go:215] "Topology Admit Handler" podUID="d13f7740e414f934df6a8abebb45403e" podNamespace="kube-system" podName="kube-controller-manager-ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:13.783994 kubelet[2985]: I1008 19:48:13.783153 2985 topology_manager.go:215] "Topology Admit Handler" podUID="9b9dc699018353a28cc6ff3302be5d83" podNamespace="kube-system" podName="kube-scheduler-ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:13.957687 kubelet[2985]: I1008 19:48:13.957458 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/623a75cd60f6b902e7e8da60bf63c49a-ca-certs\") pod \"kube-apiserver-ci-3975-2-2-d-c7549a9f5e\" (UID: \"623a75cd60f6b902e7e8da60bf63c49a\") " pod="kube-system/kube-apiserver-ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:13.957687 kubelet[2985]: I1008 19:48:13.957529 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d13f7740e414f934df6a8abebb45403e-ca-certs\") pod \"kube-controller-manager-ci-3975-2-2-d-c7549a9f5e\" (UID: \"d13f7740e414f934df6a8abebb45403e\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:13.957687 kubelet[2985]: I1008 19:48:13.957569 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b9dc699018353a28cc6ff3302be5d83-kubeconfig\") pod \"kube-scheduler-ci-3975-2-2-d-c7549a9f5e\" (UID: \"9b9dc699018353a28cc6ff3302be5d83\") " pod="kube-system/kube-scheduler-ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:13.957687 kubelet[2985]: I1008 19:48:13.957605 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d13f7740e414f934df6a8abebb45403e-kubeconfig\") pod \"kube-controller-manager-ci-3975-2-2-d-c7549a9f5e\" (UID: \"d13f7740e414f934df6a8abebb45403e\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:13.957687 kubelet[2985]: I1008 19:48:13.957646 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d13f7740e414f934df6a8abebb45403e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975-2-2-d-c7549a9f5e\" (UID: \"d13f7740e414f934df6a8abebb45403e\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:13.958178 kubelet[2985]: I1008 19:48:13.957691 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/623a75cd60f6b902e7e8da60bf63c49a-k8s-certs\") pod \"kube-apiserver-ci-3975-2-2-d-c7549a9f5e\" (UID: \"623a75cd60f6b902e7e8da60bf63c49a\") " pod="kube-system/kube-apiserver-ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:13.958178 kubelet[2985]: I1008 19:48:13.957734 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/623a75cd60f6b902e7e8da60bf63c49a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975-2-2-d-c7549a9f5e\" (UID: \"623a75cd60f6b902e7e8da60bf63c49a\") " pod="kube-system/kube-apiserver-ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:13.958178 kubelet[2985]: I1008 19:48:13.957771 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d13f7740e414f934df6a8abebb45403e-flexvolume-dir\") pod \"kube-controller-manager-ci-3975-2-2-d-c7549a9f5e\" (UID: \"d13f7740e414f934df6a8abebb45403e\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:13.958178 kubelet[2985]: I1008 19:48:13.957805 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d13f7740e414f934df6a8abebb45403e-k8s-certs\") pod \"kube-controller-manager-ci-3975-2-2-d-c7549a9f5e\" (UID: \"d13f7740e414f934df6a8abebb45403e\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:14.617633 kubelet[2985]: I1008 19:48:14.617585 2985 apiserver.go:52] "Watching apiserver"
Oct 8 19:48:14.656231 kubelet[2985]: I1008 19:48:14.656157 2985 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Oct 8 19:48:14.747403 kubelet[2985]: E1008 19:48:14.745810 2985 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975-2-2-d-c7549a9f5e\" already exists" pod="kube-system/kube-apiserver-ci-3975-2-2-d-c7549a9f5e"
Oct 8 19:48:14.834492 kubelet[2985]: I1008 19:48:14.834454 2985 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975-2-2-d-c7549a9f5e" podStartSLOduration=1.834409027 podStartE2EDuration="1.834409027s" podCreationTimestamp="2024-10-08 19:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:48:14.808046689 +0000 UTC m=+1.294607912" watchObservedRunningTime="2024-10-08 19:48:14.834409027 +0000 UTC m=+1.320970250"
Oct 8 19:48:14.864102 kubelet[2985]: I1008 19:48:14.863448 2985 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975-2-2-d-c7549a9f5e" podStartSLOduration=1.863404295 podStartE2EDuration="1.863404295s" podCreationTimestamp="2024-10-08 19:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:48:14.835036488 +0000 UTC m=+1.321597711" watchObservedRunningTime="2024-10-08 19:48:14.863404295 +0000 UTC m=+1.349965518"
Oct 8 19:48:18.512775 sudo[2053]: pam_unix(sudo:session): session closed for user root
Oct 8 19:48:18.675606 sshd[2049]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:18.682401 systemd[1]: sshd@6-168.119.51.132:22-139.178.89.65:47470.service: Deactivated successfully.
Oct 8 19:48:18.686608 systemd-logind[1566]: Session 7 logged out. Waiting for processes to exit.
Oct 8 19:48:18.687606 systemd[1]: session-7.scope: Deactivated successfully.
Oct 8 19:48:18.690379 systemd-logind[1566]: Removed session 7.
Oct 8 19:48:19.966717 kubelet[2985]: I1008 19:48:19.966648 2985 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975-2-2-d-c7549a9f5e" podStartSLOduration=6.966603768 podStartE2EDuration="6.966603768s" podCreationTimestamp="2024-10-08 19:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:48:14.864001236 +0000 UTC m=+1.350562459" watchObservedRunningTime="2024-10-08 19:48:19.966603768 +0000 UTC m=+6.453164991"
Oct 8 19:48:26.433040 kubelet[2985]: I1008 19:48:26.433005 2985 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 8 19:48:26.434299 containerd[1594]: time="2024-10-08T19:48:26.434185963Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 8 19:48:26.435482 kubelet[2985]: I1008 19:48:26.435445 2985 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 19:48:27.058281 kubelet[2985]: I1008 19:48:27.055273 2985 topology_manager.go:215] "Topology Admit Handler" podUID="3b40966f-f732-4b1e-8daf-8270564e09c4" podNamespace="kube-system" podName="kube-proxy-lnpcc" Oct 8 19:48:27.137051 kubelet[2985]: I1008 19:48:27.137000 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3b40966f-f732-4b1e-8daf-8270564e09c4-kube-proxy\") pod \"kube-proxy-lnpcc\" (UID: \"3b40966f-f732-4b1e-8daf-8270564e09c4\") " pod="kube-system/kube-proxy-lnpcc" Oct 8 19:48:27.137587 kubelet[2985]: I1008 19:48:27.137491 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b40966f-f732-4b1e-8daf-8270564e09c4-lib-modules\") pod \"kube-proxy-lnpcc\" (UID: \"3b40966f-f732-4b1e-8daf-8270564e09c4\") " pod="kube-system/kube-proxy-lnpcc" Oct 8 19:48:27.137875 kubelet[2985]: I1008 19:48:27.137853 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx8cb\" (UniqueName: \"kubernetes.io/projected/3b40966f-f732-4b1e-8daf-8270564e09c4-kube-api-access-cx8cb\") pod \"kube-proxy-lnpcc\" (UID: \"3b40966f-f732-4b1e-8daf-8270564e09c4\") " pod="kube-system/kube-proxy-lnpcc" Oct 8 19:48:27.138134 kubelet[2985]: I1008 19:48:27.138110 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b40966f-f732-4b1e-8daf-8270564e09c4-xtables-lock\") pod \"kube-proxy-lnpcc\" (UID: \"3b40966f-f732-4b1e-8daf-8270564e09c4\") " pod="kube-system/kube-proxy-lnpcc" Oct 8 19:48:27.252017 kubelet[2985]: E1008 19:48:27.251967 2985 projected.go:294] Couldn't get configMap 
kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 8 19:48:27.252017 kubelet[2985]: E1008 19:48:27.252026 2985 projected.go:200] Error preparing data for projected volume kube-api-access-cx8cb for pod kube-system/kube-proxy-lnpcc: configmap "kube-root-ca.crt" not found Oct 8 19:48:27.252266 kubelet[2985]: E1008 19:48:27.252134 2985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b40966f-f732-4b1e-8daf-8270564e09c4-kube-api-access-cx8cb podName:3b40966f-f732-4b1e-8daf-8270564e09c4 nodeName:}" failed. No retries permitted until 2024-10-08 19:48:27.752098764 +0000 UTC m=+14.238660027 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cx8cb" (UniqueName: "kubernetes.io/projected/3b40966f-f732-4b1e-8daf-8270564e09c4-kube-api-access-cx8cb") pod "kube-proxy-lnpcc" (UID: "3b40966f-f732-4b1e-8daf-8270564e09c4") : configmap "kube-root-ca.crt" not found Oct 8 19:48:27.570778 kubelet[2985]: I1008 19:48:27.570553 2985 topology_manager.go:215] "Topology Admit Handler" podUID="1f8d38b4-a4da-437c-9168-c2232d82845c" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-psv6s" Oct 8 19:48:27.641134 kubelet[2985]: I1008 19:48:27.641086 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmj7p\" (UniqueName: \"kubernetes.io/projected/1f8d38b4-a4da-437c-9168-c2232d82845c-kube-api-access-zmj7p\") pod \"tigera-operator-5d56685c77-psv6s\" (UID: \"1f8d38b4-a4da-437c-9168-c2232d82845c\") " pod="tigera-operator/tigera-operator-5d56685c77-psv6s" Oct 8 19:48:27.641377 kubelet[2985]: I1008 19:48:27.641187 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1f8d38b4-a4da-437c-9168-c2232d82845c-var-lib-calico\") pod \"tigera-operator-5d56685c77-psv6s\" (UID: \"1f8d38b4-a4da-437c-9168-c2232d82845c\") " 
pod="tigera-operator/tigera-operator-5d56685c77-psv6s" Oct 8 19:48:27.881992 containerd[1594]: time="2024-10-08T19:48:27.881803871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-psv6s,Uid:1f8d38b4-a4da-437c-9168-c2232d82845c,Namespace:tigera-operator,Attempt:0,}" Oct 8 19:48:27.911615 containerd[1594]: time="2024-10-08T19:48:27.911417288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:48:27.911781 containerd[1594]: time="2024-10-08T19:48:27.911670857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:27.911781 containerd[1594]: time="2024-10-08T19:48:27.911736819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:48:27.911854 containerd[1594]: time="2024-10-08T19:48:27.911806341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:27.966983 containerd[1594]: time="2024-10-08T19:48:27.966648065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-psv6s,Uid:1f8d38b4-a4da-437c-9168-c2232d82845c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"59c9fb0c88d7ae8b1365d56d78aa737f27c545dc1d085a827a89ac940d66de9f\"" Oct 8 19:48:27.966983 containerd[1594]: time="2024-10-08T19:48:27.966746948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lnpcc,Uid:3b40966f-f732-4b1e-8daf-8270564e09c4,Namespace:kube-system,Attempt:0,}" Oct 8 19:48:27.969046 containerd[1594]: time="2024-10-08T19:48:27.968843380Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 8 19:48:27.989688 containerd[1594]: time="2024-10-08T19:48:27.989582092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:48:27.989688 containerd[1594]: time="2024-10-08T19:48:27.989645375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:27.989688 containerd[1594]: time="2024-10-08T19:48:27.989665215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:48:27.989917 containerd[1594]: time="2024-10-08T19:48:27.989679416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:28.027602 containerd[1594]: time="2024-10-08T19:48:28.027491675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lnpcc,Uid:3b40966f-f732-4b1e-8daf-8270564e09c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f764f1aaee301561365aa2b5b67e024a3f40cb0d18878ff5b03c98977078870\"" Oct 8 19:48:28.036789 containerd[1594]: time="2024-10-08T19:48:28.035799520Z" level=info msg="CreateContainer within sandbox \"1f764f1aaee301561365aa2b5b67e024a3f40cb0d18878ff5b03c98977078870\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 19:48:28.050625 containerd[1594]: time="2024-10-08T19:48:28.050577508Z" level=info msg="CreateContainer within sandbox \"1f764f1aaee301561365aa2b5b67e024a3f40cb0d18878ff5b03c98977078870\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cd3992b96574a2562905407e89e8efd13ba0f64cc9d0fc5259badf14a14a71d9\"" Oct 8 19:48:28.052553 containerd[1594]: time="2024-10-08T19:48:28.051222450Z" level=info msg="StartContainer for \"cd3992b96574a2562905407e89e8efd13ba0f64cc9d0fc5259badf14a14a71d9\"" Oct 8 19:48:28.108966 containerd[1594]: time="2024-10-08T19:48:28.108784748Z" level=info msg="StartContainer for \"cd3992b96574a2562905407e89e8efd13ba0f64cc9d0fc5259badf14a14a71d9\" returns successfully" Oct 8 19:48:28.782002 
kubelet[2985]: I1008 19:48:28.781934 2985 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-lnpcc" podStartSLOduration=1.781857675 podStartE2EDuration="1.781857675s" podCreationTimestamp="2024-10-08 19:48:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:48:28.781769072 +0000 UTC m=+15.268330295" watchObservedRunningTime="2024-10-08 19:48:28.781857675 +0000 UTC m=+15.268418978" Oct 8 19:48:29.754193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount417806623.mount: Deactivated successfully. Oct 8 19:48:30.836151 containerd[1594]: time="2024-10-08T19:48:30.835386197Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:30.836626 containerd[1594]: time="2024-10-08T19:48:30.836600398Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485915" Oct 8 19:48:30.837355 containerd[1594]: time="2024-10-08T19:48:30.837328783Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:30.839709 containerd[1594]: time="2024-10-08T19:48:30.839674984Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:30.840702 containerd[1594]: time="2024-10-08T19:48:30.840665298Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 
2.871779437s" Oct 8 19:48:30.840757 containerd[1594]: time="2024-10-08T19:48:30.840703700Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\"" Oct 8 19:48:30.843386 containerd[1594]: time="2024-10-08T19:48:30.843336990Z" level=info msg="CreateContainer within sandbox \"59c9fb0c88d7ae8b1365d56d78aa737f27c545dc1d085a827a89ac940d66de9f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 8 19:48:30.866011 containerd[1594]: time="2024-10-08T19:48:30.865955888Z" level=info msg="CreateContainer within sandbox \"59c9fb0c88d7ae8b1365d56d78aa737f27c545dc1d085a827a89ac940d66de9f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4d8e185b9cc34daa832197eb3f5a8d8560f5d6b7b36b810e80fcb743fd377624\"" Oct 8 19:48:30.867100 containerd[1594]: time="2024-10-08T19:48:30.867057726Z" level=info msg="StartContainer for \"4d8e185b9cc34daa832197eb3f5a8d8560f5d6b7b36b810e80fcb743fd377624\"" Oct 8 19:48:30.929237 containerd[1594]: time="2024-10-08T19:48:30.929177742Z" level=info msg="StartContainer for \"4d8e185b9cc34daa832197eb3f5a8d8560f5d6b7b36b810e80fcb743fd377624\" returns successfully" Oct 8 19:48:33.699085 kubelet[2985]: I1008 19:48:33.699020 2985 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-psv6s" podStartSLOduration=3.8260515550000003 podStartE2EDuration="6.698967871s" podCreationTimestamp="2024-10-08 19:48:27 +0000 UTC" firstStartedPulling="2024-10-08 19:48:27.968062433 +0000 UTC m=+14.454623656" lastFinishedPulling="2024-10-08 19:48:30.840978749 +0000 UTC m=+17.327539972" observedRunningTime="2024-10-08 19:48:31.795716235 +0000 UTC m=+18.282277458" watchObservedRunningTime="2024-10-08 19:48:33.698967871 +0000 UTC m=+20.185529054" Oct 8 19:48:35.048406 kubelet[2985]: I1008 19:48:35.045172 2985 topology_manager.go:215] "Topology Admit Handler" 
podUID="bf9a338b-2902-45e9-9b2c-96b091023d78" podNamespace="calico-system" podName="calico-typha-6fcf98584b-7gcwp"
Oct 8 19:48:35.093306 kubelet[2985]: I1008 19:48:35.092264 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bf9a338b-2902-45e9-9b2c-96b091023d78-typha-certs\") pod \"calico-typha-6fcf98584b-7gcwp\" (UID: \"bf9a338b-2902-45e9-9b2c-96b091023d78\") " pod="calico-system/calico-typha-6fcf98584b-7gcwp"
Oct 8 19:48:35.093306 kubelet[2985]: I1008 19:48:35.092323 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf9a338b-2902-45e9-9b2c-96b091023d78-tigera-ca-bundle\") pod \"calico-typha-6fcf98584b-7gcwp\" (UID: \"bf9a338b-2902-45e9-9b2c-96b091023d78\") " pod="calico-system/calico-typha-6fcf98584b-7gcwp"
Oct 8 19:48:35.093306 kubelet[2985]: I1008 19:48:35.092350 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9sw4\" (UniqueName: \"kubernetes.io/projected/bf9a338b-2902-45e9-9b2c-96b091023d78-kube-api-access-l9sw4\") pod \"calico-typha-6fcf98584b-7gcwp\" (UID: \"bf9a338b-2902-45e9-9b2c-96b091023d78\") " pod="calico-system/calico-typha-6fcf98584b-7gcwp"
Oct 8 19:48:35.191915 kubelet[2985]: I1008 19:48:35.191723 2985 topology_manager.go:215] "Topology Admit Handler" podUID="096d74ea-e3d7-48f4-b93c-a9b14a5798e9" podNamespace="calico-system" podName="calico-node-wcx9d"
Oct 8 19:48:35.294578 kubelet[2985]: I1008 19:48:35.294541 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/096d74ea-e3d7-48f4-b93c-a9b14a5798e9-lib-modules\") pod \"calico-node-wcx9d\" (UID: \"096d74ea-e3d7-48f4-b93c-a9b14a5798e9\") " pod="calico-system/calico-node-wcx9d"
Oct 8 19:48:35.294578 kubelet[2985]: I1008 19:48:35.294584 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/096d74ea-e3d7-48f4-b93c-a9b14a5798e9-cni-log-dir\") pod \"calico-node-wcx9d\" (UID: \"096d74ea-e3d7-48f4-b93c-a9b14a5798e9\") " pod="calico-system/calico-node-wcx9d"
Oct 8 19:48:35.294760 kubelet[2985]: I1008 19:48:35.294611 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5m8k\" (UniqueName: \"kubernetes.io/projected/096d74ea-e3d7-48f4-b93c-a9b14a5798e9-kube-api-access-z5m8k\") pod \"calico-node-wcx9d\" (UID: \"096d74ea-e3d7-48f4-b93c-a9b14a5798e9\") " pod="calico-system/calico-node-wcx9d"
Oct 8 19:48:35.294760 kubelet[2985]: I1008 19:48:35.294634 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/096d74ea-e3d7-48f4-b93c-a9b14a5798e9-xtables-lock\") pod \"calico-node-wcx9d\" (UID: \"096d74ea-e3d7-48f4-b93c-a9b14a5798e9\") " pod="calico-system/calico-node-wcx9d"
Oct 8 19:48:35.294760 kubelet[2985]: I1008 19:48:35.294663 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/096d74ea-e3d7-48f4-b93c-a9b14a5798e9-policysync\") pod \"calico-node-wcx9d\" (UID: \"096d74ea-e3d7-48f4-b93c-a9b14a5798e9\") " pod="calico-system/calico-node-wcx9d"
Oct 8 19:48:35.294760 kubelet[2985]: I1008 19:48:35.294684 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/096d74ea-e3d7-48f4-b93c-a9b14a5798e9-var-lib-calico\") pod \"calico-node-wcx9d\" (UID: \"096d74ea-e3d7-48f4-b93c-a9b14a5798e9\") " pod="calico-system/calico-node-wcx9d"
Oct 8 19:48:35.294760 kubelet[2985]: I1008 19:48:35.294705 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/096d74ea-e3d7-48f4-b93c-a9b14a5798e9-flexvol-driver-host\") pod \"calico-node-wcx9d\" (UID: \"096d74ea-e3d7-48f4-b93c-a9b14a5798e9\") " pod="calico-system/calico-node-wcx9d"
Oct 8 19:48:35.294892 kubelet[2985]: I1008 19:48:35.294727 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/096d74ea-e3d7-48f4-b93c-a9b14a5798e9-tigera-ca-bundle\") pod \"calico-node-wcx9d\" (UID: \"096d74ea-e3d7-48f4-b93c-a9b14a5798e9\") " pod="calico-system/calico-node-wcx9d"
Oct 8 19:48:35.294892 kubelet[2985]: I1008 19:48:35.294750 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/096d74ea-e3d7-48f4-b93c-a9b14a5798e9-var-run-calico\") pod \"calico-node-wcx9d\" (UID: \"096d74ea-e3d7-48f4-b93c-a9b14a5798e9\") " pod="calico-system/calico-node-wcx9d"
Oct 8 19:48:35.294892 kubelet[2985]: I1008 19:48:35.294774 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/096d74ea-e3d7-48f4-b93c-a9b14a5798e9-cni-bin-dir\") pod \"calico-node-wcx9d\" (UID: \"096d74ea-e3d7-48f4-b93c-a9b14a5798e9\") " pod="calico-system/calico-node-wcx9d"
Oct 8 19:48:35.294892 kubelet[2985]: I1008 19:48:35.294795 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/096d74ea-e3d7-48f4-b93c-a9b14a5798e9-node-certs\") pod \"calico-node-wcx9d\" (UID: \"096d74ea-e3d7-48f4-b93c-a9b14a5798e9\") " pod="calico-system/calico-node-wcx9d"
Oct 8 19:48:35.294892 kubelet[2985]: I1008 19:48:35.294817 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/096d74ea-e3d7-48f4-b93c-a9b14a5798e9-cni-net-dir\") pod \"calico-node-wcx9d\" (UID: \"096d74ea-e3d7-48f4-b93c-a9b14a5798e9\") " pod="calico-system/calico-node-wcx9d"
Oct 8 19:48:35.320078 kubelet[2985]: I1008 19:48:35.317519 2985 topology_manager.go:215] "Topology Admit Handler" podUID="7e5da717-f897-4c9e-a583-c20fa5c37108" podNamespace="calico-system" podName="csi-node-driver-2494q"
Oct 8 19:48:35.320078 kubelet[2985]: E1008 19:48:35.317837 2985 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2494q" podUID="7e5da717-f897-4c9e-a583-c20fa5c37108"
Oct 8 19:48:35.355812 containerd[1594]: time="2024-10-08T19:48:35.355651100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fcf98584b-7gcwp,Uid:bf9a338b-2902-45e9-9b2c-96b091023d78,Namespace:calico-system,Attempt:0,}"
Oct 8 19:48:35.397143 kubelet[2985]: I1008 19:48:35.395687 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tjbt\" (UniqueName: \"kubernetes.io/projected/7e5da717-f897-4c9e-a583-c20fa5c37108-kube-api-access-8tjbt\") pod \"csi-node-driver-2494q\" (UID: \"7e5da717-f897-4c9e-a583-c20fa5c37108\") " pod="calico-system/csi-node-driver-2494q"
Oct 8 19:48:35.397143 kubelet[2985]: I1008 19:48:35.395748 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7e5da717-f897-4c9e-a583-c20fa5c37108-varrun\") pod \"csi-node-driver-2494q\" (UID: \"7e5da717-f897-4c9e-a583-c20fa5c37108\") " pod="calico-system/csi-node-driver-2494q"
Oct 8 19:48:35.397143 kubelet[2985]: I1008 19:48:35.395769 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e5da717-f897-4c9e-a583-c20fa5c37108-kubelet-dir\") pod \"csi-node-driver-2494q\" (UID: \"7e5da717-f897-4c9e-a583-c20fa5c37108\") " pod="calico-system/csi-node-driver-2494q"
Oct 8 19:48:35.397143 kubelet[2985]: I1008 19:48:35.395824 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7e5da717-f897-4c9e-a583-c20fa5c37108-registration-dir\") pod \"csi-node-driver-2494q\" (UID: \"7e5da717-f897-4c9e-a583-c20fa5c37108\") " pod="calico-system/csi-node-driver-2494q"
Oct 8 19:48:35.397143 kubelet[2985]: I1008 19:48:35.395899 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7e5da717-f897-4c9e-a583-c20fa5c37108-socket-dir\") pod \"csi-node-driver-2494q\" (UID: \"7e5da717-f897-4c9e-a583-c20fa5c37108\") " pod="calico-system/csi-node-driver-2494q"
Oct 8 19:48:35.402533 containerd[1594]: time="2024-10-08T19:48:35.401625484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:48:35.402533 containerd[1594]: time="2024-10-08T19:48:35.402182703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:48:35.402533 containerd[1594]: time="2024-10-08T19:48:35.402274946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:48:35.402533 containerd[1594]: time="2024-10-08T19:48:35.402410591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:48:35.402736 kubelet[2985]: E1008 19:48:35.402088 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:35.402736 kubelet[2985]: W1008 19:48:35.402111 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:35.402736 kubelet[2985]: E1008 19:48:35.402143 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:35.439691 kubelet[2985]: E1008 19:48:35.438741 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:35.439691 kubelet[2985]: W1008 19:48:35.439205 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:35.439691 kubelet[2985]: E1008 19:48:35.439240 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Oct 8 19:48:35.478947 containerd[1594]: time="2024-10-08T19:48:35.478277245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fcf98584b-7gcwp,Uid:bf9a338b-2902-45e9-9b2c-96b091023d78,Namespace:calico-system,Attempt:0,} returns sandbox id \"074ff4063bb2e7d937084dfc1edaa7041ec8f37f11f063392b0f4f95f691a57c\""
Oct 8 19:48:35.479978 containerd[1594]: time="2024-10-08T19:48:35.479922862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\""
Oct 8 19:48:35.498642 kubelet[2985]: E1008 19:48:35.498522 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:48:35.499464 kubelet[2985]: W1008 19:48:35.498635 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:48:35.499464 kubelet[2985]: E1008 19:48:35.498718 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:48:35.509137 containerd[1594]: time="2024-10-08T19:48:35.508729455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wcx9d,Uid:096d74ea-e3d7-48f4-b93c-a9b14a5798e9,Namespace:calico-system,Attempt:0,}"
Oct 8 19:48:35.516969 kubelet[2985]: E1008 19:48:35.516146 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Oct 8 19:48:35.516969 kubelet[2985]: E1008 19:48:35.516662 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:35.516969 kubelet[2985]: W1008 19:48:35.516673 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:35.516969 kubelet[2985]: E1008 19:48:35.516810 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:35.517944 kubelet[2985]: E1008 19:48:35.517408 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:35.517944 kubelet[2985]: W1008 19:48:35.517419 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:35.517944 kubelet[2985]: E1008 19:48:35.517543 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:35.519353 kubelet[2985]: E1008 19:48:35.518155 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:35.519353 kubelet[2985]: W1008 19:48:35.518174 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:35.519353 kubelet[2985]: E1008 19:48:35.518189 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:35.519353 kubelet[2985]: E1008 19:48:35.518838 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:35.519353 kubelet[2985]: W1008 19:48:35.518854 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:35.519353 kubelet[2985]: E1008 19:48:35.518877 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:35.521118 kubelet[2985]: E1008 19:48:35.519666 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:35.521118 kubelet[2985]: W1008 19:48:35.519678 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:35.521118 kubelet[2985]: E1008 19:48:35.519952 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:35.521118 kubelet[2985]: E1008 19:48:35.520311 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:35.521118 kubelet[2985]: W1008 19:48:35.520323 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:35.521118 kubelet[2985]: E1008 19:48:35.520336 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:35.521118 kubelet[2985]: E1008 19:48:35.520896 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:35.521118 kubelet[2985]: W1008 19:48:35.520906 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:35.521118 kubelet[2985]: E1008 19:48:35.520940 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:35.534312 kubelet[2985]: E1008 19:48:35.534149 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:35.534312 kubelet[2985]: W1008 19:48:35.534171 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:35.534312 kubelet[2985]: E1008 19:48:35.534192 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:35.548849 containerd[1594]: time="2024-10-08T19:48:35.548609309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:48:35.549219 containerd[1594]: time="2024-10-08T19:48:35.549058604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:35.549843 containerd[1594]: time="2024-10-08T19:48:35.549568742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:48:35.549843 containerd[1594]: time="2024-10-08T19:48:35.549598543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:35.598878 containerd[1594]: time="2024-10-08T19:48:35.598087894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wcx9d,Uid:096d74ea-e3d7-48f4-b93c-a9b14a5798e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"7ab53cebd4122ee878b6287e71b9dac7a72df121856786b763412ccc4b319917\"" Oct 8 19:48:36.681108 kubelet[2985]: E1008 19:48:36.681059 2985 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2494q" podUID="7e5da717-f897-4c9e-a583-c20fa5c37108" Oct 8 19:48:38.313058 containerd[1594]: time="2024-10-08T19:48:38.313009538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:38.314102 containerd[1594]: time="2024-10-08T19:48:38.314059454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Oct 8 19:48:38.314992 containerd[1594]: time="2024-10-08T19:48:38.314941124Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:38.317081 containerd[1594]: time="2024-10-08T19:48:38.316977514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:38.320650 containerd[1594]: time="2024-10-08T19:48:38.320510676Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 2.840546173s" Oct 8 19:48:38.320650 containerd[1594]: time="2024-10-08T19:48:38.320555478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Oct 8 19:48:38.322631 containerd[1594]: time="2024-10-08T19:48:38.322162133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 8 19:48:38.338147 containerd[1594]: time="2024-10-08T19:48:38.337976479Z" level=info msg="CreateContainer within sandbox \"074ff4063bb2e7d937084dfc1edaa7041ec8f37f11f063392b0f4f95f691a57c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 8 19:48:38.362809 containerd[1594]: time="2024-10-08T19:48:38.362718132Z" level=info msg="CreateContainer within sandbox \"074ff4063bb2e7d937084dfc1edaa7041ec8f37f11f063392b0f4f95f691a57c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1dd7dee77dccd9cd7076b586b6b8087924cb71af90a11026c98ccdd96d03eaf0\"" Oct 8 19:48:38.364119 containerd[1594]: time="2024-10-08T19:48:38.363700086Z" level=info msg="StartContainer for \"1dd7dee77dccd9cd7076b586b6b8087924cb71af90a11026c98ccdd96d03eaf0\"" Oct 8 19:48:38.450645 containerd[1594]: time="2024-10-08T19:48:38.450540002Z" level=info msg="StartContainer for \"1dd7dee77dccd9cd7076b586b6b8087924cb71af90a11026c98ccdd96d03eaf0\" returns successfully" Oct 8 19:48:38.682110 kubelet[2985]: E1008 19:48:38.681758 2985 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" pod="calico-system/csi-node-driver-2494q" podUID="7e5da717-f897-4c9e-a583-c20fa5c37108" Oct 8 19:48:38.818493 kubelet[2985]: E1008 19:48:38.818458 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.818493 kubelet[2985]: W1008 19:48:38.818483 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.818493 kubelet[2985]: E1008 19:48:38.818505 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:38.819185 kubelet[2985]: E1008 19:48:38.819166 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.819185 kubelet[2985]: W1008 19:48:38.819181 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.819185 kubelet[2985]: E1008 19:48:38.819197 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:38.819630 kubelet[2985]: E1008 19:48:38.819613 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.819630 kubelet[2985]: W1008 19:48:38.819627 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.819737 kubelet[2985]: E1008 19:48:38.819640 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:38.819967 kubelet[2985]: E1008 19:48:38.819952 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.819967 kubelet[2985]: W1008 19:48:38.819964 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.820042 kubelet[2985]: E1008 19:48:38.819976 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:38.820346 kubelet[2985]: E1008 19:48:38.820328 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.820346 kubelet[2985]: W1008 19:48:38.820340 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.820346 kubelet[2985]: E1008 19:48:38.820351 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:38.821020 kubelet[2985]: E1008 19:48:38.820659 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.821020 kubelet[2985]: W1008 19:48:38.820668 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.821020 kubelet[2985]: E1008 19:48:38.820680 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:38.821591 kubelet[2985]: E1008 19:48:38.821459 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.821591 kubelet[2985]: W1008 19:48:38.821476 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.821591 kubelet[2985]: E1008 19:48:38.821493 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:38.821827 kubelet[2985]: E1008 19:48:38.821713 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.821827 kubelet[2985]: W1008 19:48:38.821723 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.821827 kubelet[2985]: E1008 19:48:38.821735 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:38.823303 kubelet[2985]: E1008 19:48:38.822998 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.823303 kubelet[2985]: W1008 19:48:38.823013 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.823303 kubelet[2985]: E1008 19:48:38.823039 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:38.824097 kubelet[2985]: E1008 19:48:38.824077 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.824097 kubelet[2985]: W1008 19:48:38.824094 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.824155 kubelet[2985]: E1008 19:48:38.824110 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:38.825617 kubelet[2985]: E1008 19:48:38.825587 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.825966 kubelet[2985]: W1008 19:48:38.825610 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.826008 kubelet[2985]: E1008 19:48:38.825974 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:38.826333 kubelet[2985]: E1008 19:48:38.826280 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.826333 kubelet[2985]: W1008 19:48:38.826330 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.826445 kubelet[2985]: E1008 19:48:38.826347 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:38.826563 kubelet[2985]: E1008 19:48:38.826543 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.826563 kubelet[2985]: W1008 19:48:38.826558 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.826643 kubelet[2985]: E1008 19:48:38.826573 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:38.826796 kubelet[2985]: E1008 19:48:38.826777 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.826796 kubelet[2985]: W1008 19:48:38.826792 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.826852 kubelet[2985]: E1008 19:48:38.826804 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:38.827001 kubelet[2985]: E1008 19:48:38.826987 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.827001 kubelet[2985]: W1008 19:48:38.826999 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.827064 kubelet[2985]: E1008 19:48:38.827010 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:38.830070 kubelet[2985]: I1008 19:48:38.830019 2985 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6fcf98584b-7gcwp" podStartSLOduration=0.98832668 podStartE2EDuration="3.829971771s" podCreationTimestamp="2024-10-08 19:48:35 +0000 UTC" firstStartedPulling="2024-10-08 19:48:35.479509448 +0000 UTC m=+21.966070671" lastFinishedPulling="2024-10-08 19:48:38.321154579 +0000 UTC m=+24.807715762" observedRunningTime="2024-10-08 19:48:38.829161543 +0000 UTC m=+25.315722766" watchObservedRunningTime="2024-10-08 19:48:38.829971771 +0000 UTC m=+25.316532994" Oct 8 19:48:38.839109 kubelet[2985]: E1008 19:48:38.838925 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.839109 kubelet[2985]: W1008 19:48:38.838992 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.839109 kubelet[2985]: E1008 19:48:38.839016 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:38.839555 kubelet[2985]: E1008 19:48:38.839332 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.839555 kubelet[2985]: W1008 19:48:38.839345 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.839555 kubelet[2985]: E1008 19:48:38.839376 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:38.839555 kubelet[2985]: E1008 19:48:38.839547 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.839555 kubelet[2985]: W1008 19:48:38.839554 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.839801 kubelet[2985]: E1008 19:48:38.839569 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:38.840140 kubelet[2985]: E1008 19:48:38.840122 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.840140 kubelet[2985]: W1008 19:48:38.840138 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.840242 kubelet[2985]: E1008 19:48:38.840156 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:38.840601 kubelet[2985]: E1008 19:48:38.840583 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.840601 kubelet[2985]: W1008 19:48:38.840599 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.841007 kubelet[2985]: E1008 19:48:38.840666 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:38.841007 kubelet[2985]: E1008 19:48:38.840791 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.841007 kubelet[2985]: W1008 19:48:38.840800 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.841007 kubelet[2985]: E1008 19:48:38.840811 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:38.841418 kubelet[2985]: E1008 19:48:38.841402 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.841495 kubelet[2985]: W1008 19:48:38.841482 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.841585 kubelet[2985]: E1008 19:48:38.841575 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:38.842150 kubelet[2985]: E1008 19:48:38.842032 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.842150 kubelet[2985]: W1008 19:48:38.842054 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.842150 kubelet[2985]: E1008 19:48:38.842078 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:38.842744 kubelet[2985]: E1008 19:48:38.842347 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.842744 kubelet[2985]: W1008 19:48:38.842361 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.842744 kubelet[2985]: E1008 19:48:38.842379 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:38.842744 kubelet[2985]: E1008 19:48:38.842541 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.842744 kubelet[2985]: W1008 19:48:38.842549 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.842744 kubelet[2985]: E1008 19:48:38.842559 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:38.842744 kubelet[2985]: E1008 19:48:38.842679 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.842744 kubelet[2985]: W1008 19:48:38.842692 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.842744 kubelet[2985]: E1008 19:48:38.842704 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:38.843886 kubelet[2985]: E1008 19:48:38.842864 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.843886 kubelet[2985]: W1008 19:48:38.842872 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.843886 kubelet[2985]: E1008 19:48:38.842949 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:38.843886 kubelet[2985]: E1008 19:48:38.843610 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.843886 kubelet[2985]: W1008 19:48:38.843624 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.843886 kubelet[2985]: E1008 19:48:38.843642 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:38.843886 kubelet[2985]: E1008 19:48:38.843805 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.843886 kubelet[2985]: W1008 19:48:38.843812 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.843886 kubelet[2985]: E1008 19:48:38.843822 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:38.844058 kubelet[2985]: E1008 19:48:38.843936 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.844058 kubelet[2985]: W1008 19:48:38.843943 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.844058 kubelet[2985]: E1008 19:48:38.843952 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:38.845003 kubelet[2985]: E1008 19:48:38.844117 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.845003 kubelet[2985]: W1008 19:48:38.844130 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.845003 kubelet[2985]: E1008 19:48:38.844145 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:38.845003 kubelet[2985]: E1008 19:48:38.844678 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.845003 kubelet[2985]: W1008 19:48:38.844689 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.845003 kubelet[2985]: E1008 19:48:38.844702 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:48:38.845003 kubelet[2985]: E1008 19:48:38.844910 2985 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:48:38.845003 kubelet[2985]: W1008 19:48:38.844918 2985 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:48:38.845003 kubelet[2985]: E1008 19:48:38.844932 2985 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:48:39.706939 containerd[1594]: time="2024-10-08T19:48:39.706065720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:39.707414 containerd[1594]: time="2024-10-08T19:48:39.707384606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Oct 8 19:48:39.707893 containerd[1594]: time="2024-10-08T19:48:39.707865022Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:39.711343 containerd[1594]: time="2024-10-08T19:48:39.711256779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:39.712104 containerd[1594]: time="2024-10-08T19:48:39.712058847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.389838472s" Oct 8 19:48:39.712104 containerd[1594]: time="2024-10-08T19:48:39.712097888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Oct 8 19:48:39.715068 containerd[1594]: time="2024-10-08T19:48:39.715012789Z" level=info msg="CreateContainer within sandbox \"7ab53cebd4122ee878b6287e71b9dac7a72df121856786b763412ccc4b319917\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 19:48:39.737316 containerd[1594]: time="2024-10-08T19:48:39.735501176Z" level=info msg="CreateContainer within sandbox \"7ab53cebd4122ee878b6287e71b9dac7a72df121856786b763412ccc4b319917\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ef636081cd630c0dc089b5b1ce1178d336d41cd7ce950d57c4ab532388b2ec94\"" Oct 8 19:48:39.737316 containerd[1594]: time="2024-10-08T19:48:39.736398047Z" level=info msg="StartContainer for \"ef636081cd630c0dc089b5b1ce1178d336d41cd7ce950d57c4ab532388b2ec94\"" Oct 8 19:48:39.806647 containerd[1594]: time="2024-10-08T19:48:39.806583589Z" level=info msg="StartContainer for \"ef636081cd630c0dc089b5b1ce1178d336d41cd7ce950d57c4ab532388b2ec94\" returns successfully" Oct 8 19:48:39.868997 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef636081cd630c0dc089b5b1ce1178d336d41cd7ce950d57c4ab532388b2ec94-rootfs.mount: Deactivated successfully. 
Oct 8 19:48:39.975102 containerd[1594]: time="2024-10-08T19:48:39.974980880Z" level=info msg="shim disconnected" id=ef636081cd630c0dc089b5b1ce1178d336d41cd7ce950d57c4ab532388b2ec94 namespace=k8s.io Oct 8 19:48:39.975102 containerd[1594]: time="2024-10-08T19:48:39.975086883Z" level=warning msg="cleaning up after shim disconnected" id=ef636081cd630c0dc089b5b1ce1178d336d41cd7ce950d57c4ab532388b2ec94 namespace=k8s.io Oct 8 19:48:39.975513 containerd[1594]: time="2024-10-08T19:48:39.975127245Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:48:40.681301 kubelet[2985]: E1008 19:48:40.681251 2985 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2494q" podUID="7e5da717-f897-4c9e-a583-c20fa5c37108" Oct 8 19:48:40.828580 containerd[1594]: time="2024-10-08T19:48:40.828532703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 8 19:48:42.682930 kubelet[2985]: E1008 19:48:42.681207 2985 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2494q" podUID="7e5da717-f897-4c9e-a583-c20fa5c37108" Oct 8 19:48:44.006727 containerd[1594]: time="2024-10-08T19:48:44.006663269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:44.008244 containerd[1594]: time="2024-10-08T19:48:44.007955353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Oct 8 19:48:44.009138 containerd[1594]: time="2024-10-08T19:48:44.009063432Z" level=info msg="ImageCreate event 
name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:44.012722 containerd[1594]: time="2024-10-08T19:48:44.012272302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:44.013245 containerd[1594]: time="2024-10-08T19:48:44.013208655Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 3.18463503s" Oct 8 19:48:44.013327 containerd[1594]: time="2024-10-08T19:48:44.013244216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Oct 8 19:48:44.016260 containerd[1594]: time="2024-10-08T19:48:44.016200318Z" level=info msg="CreateContainer within sandbox \"7ab53cebd4122ee878b6287e71b9dac7a72df121856786b763412ccc4b319917\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 8 19:48:44.038304 containerd[1594]: time="2024-10-08T19:48:44.036886393Z" level=info msg="CreateContainer within sandbox \"7ab53cebd4122ee878b6287e71b9dac7a72df121856786b763412ccc4b319917\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7a6dd1cfca1774a4e1623f694a0aefd743ead3d9d5c41ff6ff481c036d24374a\"" Oct 8 19:48:44.039247 containerd[1594]: time="2024-10-08T19:48:44.039208033Z" level=info msg="StartContainer for \"7a6dd1cfca1774a4e1623f694a0aefd743ead3d9d5c41ff6ff481c036d24374a\"" Oct 8 19:48:44.075092 systemd[1]: 
run-containerd-runc-k8s.io-7a6dd1cfca1774a4e1623f694a0aefd743ead3d9d5c41ff6ff481c036d24374a-runc.R4hcpM.mount: Deactivated successfully. Oct 8 19:48:44.107976 containerd[1594]: time="2024-10-08T19:48:44.107838645Z" level=info msg="StartContainer for \"7a6dd1cfca1774a4e1623f694a0aefd743ead3d9d5c41ff6ff481c036d24374a\" returns successfully" Oct 8 19:48:44.551153 containerd[1594]: time="2024-10-08T19:48:44.551089723Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 19:48:44.566394 kubelet[2985]: I1008 19:48:44.565117 2985 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 8 19:48:44.581990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a6dd1cfca1774a4e1623f694a0aefd743ead3d9d5c41ff6ff481c036d24374a-rootfs.mount: Deactivated successfully. 
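The `failed to reload cni configuration` error above fires because containerd watched a write under `/etc/cni/net.d` but could not yet find a parseable network config there, so the CNI plugin stays uninitialized. A sketch of the expected on-disk shape, written to a temp directory rather than the live path; the conflist content is a plausible Calico example, not copied from this node:

```shell
# Sketch: containerd loads the lexically first *.conf/*.conflist it can
# parse from the CNI config dir; an empty or half-written dir yields
# "no network config found in /etc/cni/net.d: cni plugin not initialized".
cni_dir=$(mktemp -d)   # stand-in for /etc/cni/net.d
cat > "$cni_dir/10-calico.conflist" <<'EOF'
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    { "type": "calico", "datastore_type": "kubernetes" }
  ]
}
EOF
# Validate it is well-formed JSON, as containerd's loader would require.
python3 -m json.tool < "$cni_dir/10-calico.conflist" > /dev/null && echo "valid CNI config"
```

In this log the install-cni container has only just written `calico-kubeconfig`; the full conflist lands moments later, after which the reload succeeds.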
Oct 8 19:48:44.597319 kubelet[2985]: I1008 19:48:44.596845 2985 topology_manager.go:215] "Topology Admit Handler" podUID="21948a20-d7e6-467b-a59a-37617ee3726e" podNamespace="kube-system" podName="coredns-76f75df574-s4sm5" Oct 8 19:48:44.606511 kubelet[2985]: I1008 19:48:44.604195 2985 topology_manager.go:215] "Topology Admit Handler" podUID="e15cac1e-985b-429c-869f-52ca0b633720" podNamespace="kube-system" podName="coredns-76f75df574-vwmgr" Oct 8 19:48:44.615736 kubelet[2985]: I1008 19:48:44.615702 2985 topology_manager.go:215] "Topology Admit Handler" podUID="bca11565-40aa-4871-8cd0-05721317a01c" podNamespace="calico-system" podName="calico-kube-controllers-75d69567cd-r5pmq" Oct 8 19:48:44.690539 kubelet[2985]: I1008 19:48:44.690497 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2rd7\" (UniqueName: \"kubernetes.io/projected/21948a20-d7e6-467b-a59a-37617ee3726e-kube-api-access-l2rd7\") pod \"coredns-76f75df574-s4sm5\" (UID: \"21948a20-d7e6-467b-a59a-37617ee3726e\") " pod="kube-system/coredns-76f75df574-s4sm5" Oct 8 19:48:44.690539 kubelet[2985]: I1008 19:48:44.690563 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7q5k\" (UniqueName: \"kubernetes.io/projected/e15cac1e-985b-429c-869f-52ca0b633720-kube-api-access-j7q5k\") pod \"coredns-76f75df574-vwmgr\" (UID: \"e15cac1e-985b-429c-869f-52ca0b633720\") " pod="kube-system/coredns-76f75df574-vwmgr" Oct 8 19:48:44.696578 kubelet[2985]: I1008 19:48:44.690595 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e15cac1e-985b-429c-869f-52ca0b633720-config-volume\") pod \"coredns-76f75df574-vwmgr\" (UID: \"e15cac1e-985b-429c-869f-52ca0b633720\") " pod="kube-system/coredns-76f75df574-vwmgr" Oct 8 19:48:44.696578 kubelet[2985]: I1008 19:48:44.690627 2985 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26hsn\" (UniqueName: \"kubernetes.io/projected/bca11565-40aa-4871-8cd0-05721317a01c-kube-api-access-26hsn\") pod \"calico-kube-controllers-75d69567cd-r5pmq\" (UID: \"bca11565-40aa-4871-8cd0-05721317a01c\") " pod="calico-system/calico-kube-controllers-75d69567cd-r5pmq" Oct 8 19:48:44.696578 kubelet[2985]: I1008 19:48:44.690667 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bca11565-40aa-4871-8cd0-05721317a01c-tigera-ca-bundle\") pod \"calico-kube-controllers-75d69567cd-r5pmq\" (UID: \"bca11565-40aa-4871-8cd0-05721317a01c\") " pod="calico-system/calico-kube-controllers-75d69567cd-r5pmq" Oct 8 19:48:44.696578 kubelet[2985]: I1008 19:48:44.690908 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21948a20-d7e6-467b-a59a-37617ee3726e-config-volume\") pod \"coredns-76f75df574-s4sm5\" (UID: \"21948a20-d7e6-467b-a59a-37617ee3726e\") " pod="kube-system/coredns-76f75df574-s4sm5" Oct 8 19:48:44.697417 containerd[1594]: time="2024-10-08T19:48:44.696970365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2494q,Uid:7e5da717-f897-4c9e-a583-c20fa5c37108,Namespace:calico-system,Attempt:0,}" Oct 8 19:48:44.704195 containerd[1594]: time="2024-10-08T19:48:44.704112252Z" level=info msg="shim disconnected" id=7a6dd1cfca1774a4e1623f694a0aefd743ead3d9d5c41ff6ff481c036d24374a namespace=k8s.io Oct 8 19:48:44.704392 containerd[1594]: time="2024-10-08T19:48:44.704372941Z" level=warning msg="cleaning up after shim disconnected" id=7a6dd1cfca1774a4e1623f694a0aefd743ead3d9d5c41ff6ff481c036d24374a namespace=k8s.io Oct 8 19:48:44.704467 containerd[1594]: time="2024-10-08T19:48:44.704453983Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:48:44.842092 
containerd[1594]: time="2024-10-08T19:48:44.841962895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 8 19:48:44.852322 containerd[1594]: time="2024-10-08T19:48:44.851607469Z" level=error msg="Failed to destroy network for sandbox \"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:44.852605 containerd[1594]: time="2024-10-08T19:48:44.852554461Z" level=error msg="encountered an error cleaning up failed sandbox \"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:44.852883 containerd[1594]: time="2024-10-08T19:48:44.852631584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2494q,Uid:7e5da717-f897-4c9e-a583-c20fa5c37108,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:44.853386 kubelet[2985]: E1008 19:48:44.852950 2985 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:44.853386 kubelet[2985]: E1008 
19:48:44.853062 2985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2494q" Oct 8 19:48:44.853386 kubelet[2985]: E1008 19:48:44.853092 2985 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2494q" Oct 8 19:48:44.854393 kubelet[2985]: E1008 19:48:44.853145 2985 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2494q_calico-system(7e5da717-f897-4c9e-a583-c20fa5c37108)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2494q_calico-system(7e5da717-f897-4c9e-a583-c20fa5c37108)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2494q" podUID="7e5da717-f897-4c9e-a583-c20fa5c37108" Oct 8 19:48:44.917384 containerd[1594]: time="2024-10-08T19:48:44.916900845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s4sm5,Uid:21948a20-d7e6-467b-a59a-37617ee3726e,Namespace:kube-system,Attempt:0,}" Oct 8 19:48:44.922533 
containerd[1594]: time="2024-10-08T19:48:44.921993261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vwmgr,Uid:e15cac1e-985b-429c-869f-52ca0b633720,Namespace:kube-system,Attempt:0,}" Oct 8 19:48:44.923755 containerd[1594]: time="2024-10-08T19:48:44.923677279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75d69567cd-r5pmq,Uid:bca11565-40aa-4871-8cd0-05721317a01c,Namespace:calico-system,Attempt:0,}" Oct 8 19:48:45.047668 containerd[1594]: time="2024-10-08T19:48:45.047535240Z" level=error msg="Failed to destroy network for sandbox \"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:45.053211 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591-shm.mount: Deactivated successfully. 
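Each of the `RunPodSandbox` failures in this stretch has the same root cause: the calico CNI plugin stats `/var/lib/calico/nodename`, a file that the calico/node container writes once it starts, and until then every sandbox add/delete fails exactly as logged. A small triage helper, assuming a POSIX shell; the function is a hypothetical check, not a Calico tool, though the path comes straight from the error text:

```shell
# Hypothetical readiness check mirroring the stat the calico plugin
# performs before wiring up a pod sandbox.
check_calico_node() {
  f="${1:-/var/lib/calico/nodename}"
  if [ -s "$f" ]; then
    echo "calico/node ready: nodename=$(cat "$f")"
  else
    echo "calico/node not ready: $f missing"
  fi
}

# Demo against a path that does not exist, matching the failure mode above.
check_calico_node "$(mktemp -d)/nodename"
```

The errors are self-healing: once the calico-node pod (whose flexvol and install-cni init containers ran above) reaches Ready and writes the file, kubelet's sandbox retries succeed.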
Oct 8 19:48:45.053911 containerd[1594]: time="2024-10-08T19:48:45.053656252Z" level=error msg="encountered an error cleaning up failed sandbox \"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:45.054269 containerd[1594]: time="2024-10-08T19:48:45.054192950Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s4sm5,Uid:21948a20-d7e6-467b-a59a-37617ee3726e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:45.055568 kubelet[2985]: E1008 19:48:45.055464 2985 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:45.055568 kubelet[2985]: E1008 19:48:45.055517 2985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-s4sm5" Oct 8 19:48:45.055568 kubelet[2985]: E1008 19:48:45.055538 2985 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-s4sm5" Oct 8 19:48:45.056616 kubelet[2985]: E1008 19:48:45.055596 2985 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-s4sm5_kube-system(21948a20-d7e6-467b-a59a-37617ee3726e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-s4sm5_kube-system(21948a20-d7e6-467b-a59a-37617ee3726e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-s4sm5" podUID="21948a20-d7e6-467b-a59a-37617ee3726e" Oct 8 19:48:45.062297 containerd[1594]: time="2024-10-08T19:48:45.060149796Z" level=error msg="Failed to destroy network for sandbox \"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:45.062297 containerd[1594]: time="2024-10-08T19:48:45.060486888Z" level=error msg="encountered an error cleaning up failed sandbox \"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Oct 8 19:48:45.062297 containerd[1594]: time="2024-10-08T19:48:45.060530489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75d69567cd-r5pmq,Uid:bca11565-40aa-4871-8cd0-05721317a01c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:45.064875 kubelet[2985]: E1008 19:48:45.062966 2985 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:45.064875 kubelet[2985]: E1008 19:48:45.063016 2985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75d69567cd-r5pmq" Oct 8 19:48:45.064875 kubelet[2985]: E1008 19:48:45.063037 2985 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75d69567cd-r5pmq" Oct 8 19:48:45.063147 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440-shm.mount: Deactivated successfully. Oct 8 19:48:45.065105 kubelet[2985]: E1008 19:48:45.063082 2985 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75d69567cd-r5pmq_calico-system(bca11565-40aa-4871-8cd0-05721317a01c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75d69567cd-r5pmq_calico-system(bca11565-40aa-4871-8cd0-05721317a01c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75d69567cd-r5pmq" podUID="bca11565-40aa-4871-8cd0-05721317a01c" Oct 8 19:48:45.069328 containerd[1594]: time="2024-10-08T19:48:45.069261511Z" level=error msg="Failed to destroy network for sandbox \"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:45.071660 containerd[1594]: time="2024-10-08T19:48:45.071614913Z" level=error msg="encountered an error cleaning up failed sandbox \"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:45.071822 containerd[1594]: 
time="2024-10-08T19:48:45.071797839Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vwmgr,Uid:e15cac1e-985b-429c-869f-52ca0b633720,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:45.072053 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0-shm.mount: Deactivated successfully. Oct 8 19:48:45.072920 kubelet[2985]: E1008 19:48:45.072082 2985 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:45.072920 kubelet[2985]: E1008 19:48:45.072131 2985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vwmgr" Oct 8 19:48:45.072920 kubelet[2985]: E1008 19:48:45.072157 2985 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-vwmgr" Oct 8 19:48:45.073036 kubelet[2985]: E1008 19:48:45.072212 2985 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-vwmgr_kube-system(e15cac1e-985b-429c-869f-52ca0b633720)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-vwmgr_kube-system(e15cac1e-985b-429c-869f-52ca0b633720)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-vwmgr" podUID="e15cac1e-985b-429c-869f-52ca0b633720" Oct 8 19:48:45.841656 kubelet[2985]: I1008 19:48:45.841587 2985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Oct 8 19:48:45.844533 containerd[1594]: time="2024-10-08T19:48:45.843833767Z" level=info msg="StopPodSandbox for \"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\"" Oct 8 19:48:45.844533 containerd[1594]: time="2024-10-08T19:48:45.844204900Z" level=info msg="Ensure that sandbox f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440 in task-service has been cleanup successfully" Oct 8 19:48:45.847011 kubelet[2985]: I1008 19:48:45.846982 2985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Oct 8 19:48:45.851608 containerd[1594]: time="2024-10-08T19:48:45.849579445Z" level=info msg="StopPodSandbox for \"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\"" Oct 8 19:48:45.852577 containerd[1594]: 
time="2024-10-08T19:48:45.851920326Z" level=info msg="Ensure that sandbox 0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591 in task-service has been cleanup successfully" Oct 8 19:48:45.855211 kubelet[2985]: I1008 19:48:45.855136 2985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Oct 8 19:48:45.856847 containerd[1594]: time="2024-10-08T19:48:45.856781414Z" level=info msg="StopPodSandbox for \"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\"" Oct 8 19:48:45.857084 containerd[1594]: time="2024-10-08T19:48:45.857055264Z" level=info msg="Ensure that sandbox 26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09 in task-service has been cleanup successfully" Oct 8 19:48:45.859173 kubelet[2985]: I1008 19:48:45.859142 2985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" Oct 8 19:48:45.863423 containerd[1594]: time="2024-10-08T19:48:45.860964479Z" level=info msg="StopPodSandbox for \"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\"" Oct 8 19:48:45.864714 containerd[1594]: time="2024-10-08T19:48:45.864613245Z" level=info msg="Ensure that sandbox e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0 in task-service has been cleanup successfully" Oct 8 19:48:45.921601 containerd[1594]: time="2024-10-08T19:48:45.921477051Z" level=error msg="StopPodSandbox for \"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\" failed" error="failed to destroy network for sandbox \"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:45.922607 kubelet[2985]: E1008 19:48:45.922529 2985 
remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Oct 8 19:48:45.922719 kubelet[2985]: E1008 19:48:45.922627 2985 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591"} Oct 8 19:48:45.922719 kubelet[2985]: E1008 19:48:45.922675 2985 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"21948a20-d7e6-467b-a59a-37617ee3726e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:48:45.922719 kubelet[2985]: E1008 19:48:45.922705 2985 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21948a20-d7e6-467b-a59a-37617ee3726e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-s4sm5" podUID="21948a20-d7e6-467b-a59a-37617ee3726e" Oct 8 19:48:45.925711 containerd[1594]: time="2024-10-08T19:48:45.925596913Z" level=error 
msg="StopPodSandbox for \"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\" failed" error="failed to destroy network for sandbox \"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:45.925984 kubelet[2985]: E1008 19:48:45.925931 2985 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Oct 8 19:48:45.925984 kubelet[2985]: E1008 19:48:45.925980 2985 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440"} Oct 8 19:48:45.926096 kubelet[2985]: E1008 19:48:45.926024 2985 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bca11565-40aa-4871-8cd0-05721317a01c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:48:45.926156 kubelet[2985]: E1008 19:48:45.926098 2985 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bca11565-40aa-4871-8cd0-05721317a01c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75d69567cd-r5pmq" podUID="bca11565-40aa-4871-8cd0-05721317a01c" Oct 8 19:48:45.929126 containerd[1594]: time="2024-10-08T19:48:45.929001111Z" level=error msg="StopPodSandbox for \"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\" failed" error="failed to destroy network for sandbox \"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:45.929555 kubelet[2985]: E1008 19:48:45.929423 2985 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Oct 8 19:48:45.929555 kubelet[2985]: E1008 19:48:45.929466 2985 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09"} Oct 8 19:48:45.929555 kubelet[2985]: E1008 19:48:45.929500 2985 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7e5da717-f897-4c9e-a583-c20fa5c37108\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:48:45.929555 kubelet[2985]: E1008 19:48:45.929531 2985 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7e5da717-f897-4c9e-a583-c20fa5c37108\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2494q" podUID="7e5da717-f897-4c9e-a583-c20fa5c37108" Oct 8 19:48:45.935237 containerd[1594]: time="2024-10-08T19:48:45.935180244Z" level=error msg="StopPodSandbox for \"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\" failed" error="failed to destroy network for sandbox \"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:48:45.935466 kubelet[2985]: E1008 19:48:45.935445 2985 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" Oct 8 19:48:45.935519 kubelet[2985]: E1008 19:48:45.935492 2985 
kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0"} Oct 8 19:48:45.935554 kubelet[2985]: E1008 19:48:45.935531 2985 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e15cac1e-985b-429c-869f-52ca0b633720\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:48:45.935611 kubelet[2985]: E1008 19:48:45.935559 2985 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e15cac1e-985b-429c-869f-52ca0b633720\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-vwmgr" podUID="e15cac1e-985b-429c-869f-52ca0b633720" Oct 8 19:48:48.320501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2215332969.mount: Deactivated successfully. 
Oct 8 19:48:48.347150 containerd[1594]: time="2024-10-08T19:48:48.346849087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:48.348821 containerd[1594]: time="2024-10-08T19:48:48.348436062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Oct 8 19:48:48.349205 containerd[1594]: time="2024-10-08T19:48:48.349161207Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:48.352718 containerd[1594]: time="2024-10-08T19:48:48.352644728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:48.353951 containerd[1594]: time="2024-10-08T19:48:48.353777327Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 3.511434378s" Oct 8 19:48:48.353951 containerd[1594]: time="2024-10-08T19:48:48.353828569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Oct 8 19:48:48.369551 containerd[1594]: time="2024-10-08T19:48:48.369507951Z" level=info msg="CreateContainer within sandbox \"7ab53cebd4122ee878b6287e71b9dac7a72df121856786b763412ccc4b319917\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 19:48:48.386365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2861103475.mount: Deactivated 
successfully. Oct 8 19:48:48.387805 containerd[1594]: time="2024-10-08T19:48:48.387587297Z" level=info msg="CreateContainer within sandbox \"7ab53cebd4122ee878b6287e71b9dac7a72df121856786b763412ccc4b319917\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e0bd6da19a595f685aa8224cca0d0b61b0519b3d5c2e46e5ff74844b8f3bd1d7\"" Oct 8 19:48:48.388486 containerd[1594]: time="2024-10-08T19:48:48.388317962Z" level=info msg="StartContainer for \"e0bd6da19a595f685aa8224cca0d0b61b0519b3d5c2e46e5ff74844b8f3bd1d7\"" Oct 8 19:48:48.449371 containerd[1594]: time="2024-10-08T19:48:48.448652169Z" level=info msg="StartContainer for \"e0bd6da19a595f685aa8224cca0d0b61b0519b3d5c2e46e5ff74844b8f3bd1d7\" returns successfully" Oct 8 19:48:48.589258 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 19:48:48.589395 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 8 19:48:48.900199 kubelet[2985]: I1008 19:48:48.900133 2985 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-wcx9d" podStartSLOduration=1.147995917 podStartE2EDuration="13.900086186s" podCreationTimestamp="2024-10-08 19:48:35 +0000 UTC" firstStartedPulling="2024-10-08 19:48:35.602019749 +0000 UTC m=+22.088580932" lastFinishedPulling="2024-10-08 19:48:48.354109938 +0000 UTC m=+34.840671201" observedRunningTime="2024-10-08 19:48:48.899138993 +0000 UTC m=+35.385700216" watchObservedRunningTime="2024-10-08 19:48:48.900086186 +0000 UTC m=+35.386647449" Oct 8 19:48:50.288316 kernel: bpftool[4134]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 19:48:50.484401 systemd-networkd[1253]: vxlan.calico: Link UP Oct 8 19:48:50.484413 systemd-networkd[1253]: vxlan.calico: Gained carrier Oct 8 19:48:52.089522 systemd-networkd[1253]: vxlan.calico: Gained IPv6LL Oct 8 19:48:56.682721 containerd[1594]: time="2024-10-08T19:48:56.682009975Z" level=info msg="StopPodSandbox for 
\"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\"" Oct 8 19:48:56.856693 containerd[1594]: 2024-10-08 19:48:56.771 [INFO][4230] k8s.go 608: Cleaning up netns ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Oct 8 19:48:56.856693 containerd[1594]: 2024-10-08 19:48:56.771 [INFO][4230] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" iface="eth0" netns="/var/run/netns/cni-ab69c995-a0d4-bf25-32af-ada29e167806" Oct 8 19:48:56.856693 containerd[1594]: 2024-10-08 19:48:56.772 [INFO][4230] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" iface="eth0" netns="/var/run/netns/cni-ab69c995-a0d4-bf25-32af-ada29e167806" Oct 8 19:48:56.856693 containerd[1594]: 2024-10-08 19:48:56.773 [INFO][4230] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" iface="eth0" netns="/var/run/netns/cni-ab69c995-a0d4-bf25-32af-ada29e167806" Oct 8 19:48:56.856693 containerd[1594]: 2024-10-08 19:48:56.773 [INFO][4230] k8s.go 615: Releasing IP address(es) ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Oct 8 19:48:56.856693 containerd[1594]: 2024-10-08 19:48:56.773 [INFO][4230] utils.go 188: Calico CNI releasing IP address ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Oct 8 19:48:56.856693 containerd[1594]: 2024-10-08 19:48:56.837 [INFO][4236] ipam_plugin.go 417: Releasing address using handleID ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" HandleID="k8s-pod-network.26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0" Oct 8 19:48:56.856693 containerd[1594]: 2024-10-08 19:48:56.837 [INFO][4236] 
ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:48:56.856693 containerd[1594]: 2024-10-08 19:48:56.837 [INFO][4236] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:48:56.856693 containerd[1594]: 2024-10-08 19:48:56.847 [WARNING][4236] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" HandleID="k8s-pod-network.26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0" Oct 8 19:48:56.856693 containerd[1594]: 2024-10-08 19:48:56.848 [INFO][4236] ipam_plugin.go 445: Releasing address using workloadID ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" HandleID="k8s-pod-network.26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0" Oct 8 19:48:56.856693 containerd[1594]: 2024-10-08 19:48:56.850 [INFO][4236] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:48:56.856693 containerd[1594]: 2024-10-08 19:48:56.854 [INFO][4230] k8s.go 621: Teardown processing complete. 
ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Oct 8 19:48:56.859697 containerd[1594]: time="2024-10-08T19:48:56.857195846Z" level=info msg="TearDown network for sandbox \"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\" successfully" Oct 8 19:48:56.859697 containerd[1594]: time="2024-10-08T19:48:56.857229487Z" level=info msg="StopPodSandbox for \"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\" returns successfully" Oct 8 19:48:56.860786 containerd[1594]: time="2024-10-08T19:48:56.860754169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2494q,Uid:7e5da717-f897-4c9e-a583-c20fa5c37108,Namespace:calico-system,Attempt:1,}" Oct 8 19:48:56.861702 systemd[1]: run-netns-cni\x2dab69c995\x2da0d4\x2dbf25\x2d32af\x2dada29e167806.mount: Deactivated successfully. Oct 8 19:48:57.091060 systemd-networkd[1253]: cali19e2135d437: Link UP Oct 8 19:48:57.092247 systemd-networkd[1253]: cali19e2135d437: Gained carrier Oct 8 19:48:57.125704 containerd[1594]: 2024-10-08 19:48:56.948 [INFO][4244] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0 csi-node-driver- calico-system 7e5da717-f897-4c9e-a583-c20fa5c37108 713 0 2024-10-08 19:48:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975-2-2-d-c7549a9f5e csi-node-driver-2494q eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali19e2135d437 [] []}} ContainerID="4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e" Namespace="calico-system" Pod="csi-node-driver-2494q" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-" Oct 8 
19:48:57.125704 containerd[1594]: 2024-10-08 19:48:56.949 [INFO][4244] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e" Namespace="calico-system" Pod="csi-node-driver-2494q" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0" Oct 8 19:48:57.125704 containerd[1594]: 2024-10-08 19:48:56.982 [INFO][4255] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e" HandleID="k8s-pod-network.4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0" Oct 8 19:48:57.125704 containerd[1594]: 2024-10-08 19:48:56.994 [INFO][4255] ipam_plugin.go 270: Auto assigning IP ContainerID="4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e" HandleID="k8s-pod-network.4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000289b50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975-2-2-d-c7549a9f5e", "pod":"csi-node-driver-2494q", "timestamp":"2024-10-08 19:48:56.982689834 +0000 UTC"}, Hostname:"ci-3975-2-2-d-c7549a9f5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:48:57.125704 containerd[1594]: 2024-10-08 19:48:56.995 [INFO][4255] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:48:57.125704 containerd[1594]: 2024-10-08 19:48:56.995 [INFO][4255] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:48:57.125704 containerd[1594]: 2024-10-08 19:48:56.995 [INFO][4255] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-d-c7549a9f5e' Oct 8 19:48:57.125704 containerd[1594]: 2024-10-08 19:48:57.001 [INFO][4255] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:57.125704 containerd[1594]: 2024-10-08 19:48:57.027 [INFO][4255] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:57.125704 containerd[1594]: 2024-10-08 19:48:57.047 [INFO][4255] ipam.go 489: Trying affinity for 192.168.19.128/26 host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:57.125704 containerd[1594]: 2024-10-08 19:48:57.051 [INFO][4255] ipam.go 155: Attempting to load block cidr=192.168.19.128/26 host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:57.125704 containerd[1594]: 2024-10-08 19:48:57.056 [INFO][4255] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:57.125704 containerd[1594]: 2024-10-08 19:48:57.056 [INFO][4255] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:57.125704 containerd[1594]: 2024-10-08 19:48:57.061 [INFO][4255] ipam.go 1685: Creating new handle: k8s-pod-network.4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e Oct 8 19:48:57.125704 containerd[1594]: 2024-10-08 19:48:57.073 [INFO][4255] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:57.125704 containerd[1594]: 2024-10-08 19:48:57.083 [INFO][4255] ipam.go 1216: Successfully claimed IPs: [192.168.19.129/26] 
block=192.168.19.128/26 handle="k8s-pod-network.4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:57.125704 containerd[1594]: 2024-10-08 19:48:57.084 [INFO][4255] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.129/26] handle="k8s-pod-network.4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:57.125704 containerd[1594]: 2024-10-08 19:48:57.084 [INFO][4255] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:48:57.125704 containerd[1594]: 2024-10-08 19:48:57.084 [INFO][4255] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.19.129/26] IPv6=[] ContainerID="4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e" HandleID="k8s-pod-network.4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0" Oct 8 19:48:57.127356 containerd[1594]: 2024-10-08 19:48:57.087 [INFO][4244] k8s.go 386: Populated endpoint ContainerID="4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e" Namespace="calico-system" Pod="csi-node-driver-2494q" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7e5da717-f897-4c9e-a583-c20fa5c37108", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-d-c7549a9f5e", ContainerID:"", Pod:"csi-node-driver-2494q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali19e2135d437", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:48:57.127356 containerd[1594]: 2024-10-08 19:48:57.087 [INFO][4244] k8s.go 387: Calico CNI using IPs: [192.168.19.129/32] ContainerID="4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e" Namespace="calico-system" Pod="csi-node-driver-2494q" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0" Oct 8 19:48:57.127356 containerd[1594]: 2024-10-08 19:48:57.087 [INFO][4244] dataplane_linux.go 68: Setting the host side veth name to cali19e2135d437 ContainerID="4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e" Namespace="calico-system" Pod="csi-node-driver-2494q" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0" Oct 8 19:48:57.127356 containerd[1594]: 2024-10-08 19:48:57.092 [INFO][4244] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e" Namespace="calico-system" Pod="csi-node-driver-2494q" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0" Oct 8 19:48:57.127356 containerd[1594]: 2024-10-08 19:48:57.093 [INFO][4244] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e" Namespace="calico-system" Pod="csi-node-driver-2494q" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7e5da717-f897-4c9e-a583-c20fa5c37108", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-d-c7549a9f5e", ContainerID:"4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e", Pod:"csi-node-driver-2494q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali19e2135d437", MAC:"9a:0f:b9:2e:cf:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:48:57.127356 containerd[1594]: 2024-10-08 19:48:57.109 [INFO][4244] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e" Namespace="calico-system" 
Pod="csi-node-driver-2494q" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0" Oct 8 19:48:57.146065 containerd[1594]: time="2024-10-08T19:48:57.145280630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:48:57.146065 containerd[1594]: time="2024-10-08T19:48:57.145854850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:57.146065 containerd[1594]: time="2024-10-08T19:48:57.145871570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:48:57.146065 containerd[1594]: time="2024-10-08T19:48:57.145881611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:57.200153 containerd[1594]: time="2024-10-08T19:48:57.200115970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2494q,Uid:7e5da717-f897-4c9e-a583-c20fa5c37108,Namespace:calico-system,Attempt:1,} returns sandbox id \"4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e\"" Oct 8 19:48:57.201812 containerd[1594]: time="2024-10-08T19:48:57.201697425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 19:48:57.685649 containerd[1594]: time="2024-10-08T19:48:57.684442317Z" level=info msg="StopPodSandbox for \"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\"" Oct 8 19:48:57.688494 containerd[1594]: time="2024-10-08T19:48:57.686768798Z" level=info msg="StopPodSandbox for \"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\"" Oct 8 19:48:57.799192 containerd[1594]: 2024-10-08 19:48:57.750 [INFO][4336] k8s.go 608: Cleaning up netns ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Oct 8 19:48:57.799192 containerd[1594]: 
2024-10-08 19:48:57.750 [INFO][4336] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" iface="eth0" netns="/var/run/netns/cni-9049ff06-ad7e-d47d-d299-0691c30e1a83" Oct 8 19:48:57.799192 containerd[1594]: 2024-10-08 19:48:57.752 [INFO][4336] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" iface="eth0" netns="/var/run/netns/cni-9049ff06-ad7e-d47d-d299-0691c30e1a83" Oct 8 19:48:57.799192 containerd[1594]: 2024-10-08 19:48:57.753 [INFO][4336] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" iface="eth0" netns="/var/run/netns/cni-9049ff06-ad7e-d47d-d299-0691c30e1a83" Oct 8 19:48:57.799192 containerd[1594]: 2024-10-08 19:48:57.753 [INFO][4336] k8s.go 615: Releasing IP address(es) ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Oct 8 19:48:57.799192 containerd[1594]: 2024-10-08 19:48:57.753 [INFO][4336] utils.go 188: Calico CNI releasing IP address ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Oct 8 19:48:57.799192 containerd[1594]: 2024-10-08 19:48:57.777 [INFO][4354] ipam_plugin.go 417: Releasing address using handleID ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" HandleID="k8s-pod-network.f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0" Oct 8 19:48:57.799192 containerd[1594]: 2024-10-08 19:48:57.777 [INFO][4354] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:48:57.799192 containerd[1594]: 2024-10-08 19:48:57.777 [INFO][4354] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
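The release sequence above takes a host-wide IPAM lock before touching address state, and (as the next entries show) a handle that no longer exists is ignored rather than treated as an error. A minimal toy model of that idempotent, lock-serialized release — not Calico's real implementation, names are illustrative:

```python
import threading

class HandleStore:
    """Toy sketch of handle-based IPAM release as seen in the log:
    a single lock serializes assign/release (the "host-wide IPAM lock"),
    and releasing an already-released handle is a no-op, not an error."""

    def __init__(self):
        self._lock = threading.Lock()   # stands in for the host-wide IPAM lock
        self._addrs = {}                # handleID -> assigned IP

    def assign(self, handle, ip):
        with self._lock:
            self._addrs[handle] = ip

    def release(self, handle):
        with self._lock:
            if handle not in self._addrs:
                # "Asked to release address but it doesn't exist. Ignoring"
                return None
            return self._addrs.pop(handle)

store = HandleStore()
store.assign("k8s-pod-network.f276ee4d", "192.168.19.129")
first = store.release("k8s-pod-network.f276ee4d")   # returns the IP
second = store.release("k8s-pod-network.f276ee4d")  # already gone -> None
```

This mirrors why the WARNING entry below is harmless: a CNI DEL can arrive after the address was already cleaned up, so the release path must be idempotent.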
Oct 8 19:48:57.799192 containerd[1594]: 2024-10-08 19:48:57.791 [WARNING][4354] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" HandleID="k8s-pod-network.f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0" Oct 8 19:48:57.799192 containerd[1594]: 2024-10-08 19:48:57.791 [INFO][4354] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" HandleID="k8s-pod-network.f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0" Oct 8 19:48:57.799192 containerd[1594]: 2024-10-08 19:48:57.794 [INFO][4354] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:48:57.799192 containerd[1594]: 2024-10-08 19:48:57.796 [INFO][4336] k8s.go 621: Teardown processing complete. 
ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Oct 8 19:48:57.800160 containerd[1594]: time="2024-10-08T19:48:57.799904479Z" level=info msg="TearDown network for sandbox \"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\" successfully" Oct 8 19:48:57.800160 containerd[1594]: time="2024-10-08T19:48:57.800117366Z" level=info msg="StopPodSandbox for \"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\" returns successfully" Oct 8 19:48:57.801773 containerd[1594]: time="2024-10-08T19:48:57.801675340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75d69567cd-r5pmq,Uid:bca11565-40aa-4871-8cd0-05721317a01c,Namespace:calico-system,Attempt:1,}" Oct 8 19:48:57.819349 containerd[1594]: 2024-10-08 19:48:57.747 [INFO][4344] k8s.go 608: Cleaning up netns ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Oct 8 19:48:57.819349 containerd[1594]: 2024-10-08 19:48:57.747 [INFO][4344] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" iface="eth0" netns="/var/run/netns/cni-bee23d8d-54b6-9afc-fbbd-034aec904a43" Oct 8 19:48:57.819349 containerd[1594]: 2024-10-08 19:48:57.748 [INFO][4344] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" iface="eth0" netns="/var/run/netns/cni-bee23d8d-54b6-9afc-fbbd-034aec904a43" Oct 8 19:48:57.819349 containerd[1594]: 2024-10-08 19:48:57.752 [INFO][4344] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" iface="eth0" netns="/var/run/netns/cni-bee23d8d-54b6-9afc-fbbd-034aec904a43" Oct 8 19:48:57.819349 containerd[1594]: 2024-10-08 19:48:57.752 [INFO][4344] k8s.go 615: Releasing IP address(es) ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Oct 8 19:48:57.819349 containerd[1594]: 2024-10-08 19:48:57.752 [INFO][4344] utils.go 188: Calico CNI releasing IP address ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Oct 8 19:48:57.819349 containerd[1594]: 2024-10-08 19:48:57.783 [INFO][4353] ipam_plugin.go 417: Releasing address using handleID ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" HandleID="k8s-pod-network.0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0" Oct 8 19:48:57.819349 containerd[1594]: 2024-10-08 19:48:57.783 [INFO][4353] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:48:57.819349 containerd[1594]: 2024-10-08 19:48:57.794 [INFO][4353] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:48:57.819349 containerd[1594]: 2024-10-08 19:48:57.811 [WARNING][4353] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" HandleID="k8s-pod-network.0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0" Oct 8 19:48:57.819349 containerd[1594]: 2024-10-08 19:48:57.811 [INFO][4353] ipam_plugin.go 445: Releasing address using workloadID ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" HandleID="k8s-pod-network.0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0" Oct 8 19:48:57.819349 containerd[1594]: 2024-10-08 19:48:57.814 [INFO][4353] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:48:57.819349 containerd[1594]: 2024-10-08 19:48:57.816 [INFO][4344] k8s.go 621: Teardown processing complete. ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Oct 8 19:48:57.819816 containerd[1594]: time="2024-10-08T19:48:57.819500038Z" level=info msg="TearDown network for sandbox \"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\" successfully" Oct 8 19:48:57.819816 containerd[1594]: time="2024-10-08T19:48:57.819536479Z" level=info msg="StopPodSandbox for \"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\" returns successfully" Oct 8 19:48:57.820887 containerd[1594]: time="2024-10-08T19:48:57.820857765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s4sm5,Uid:21948a20-d7e6-467b-a59a-37617ee3726e,Namespace:kube-system,Attempt:1,}" Oct 8 19:48:57.864034 systemd[1]: run-netns-cni\x2d9049ff06\x2dad7e\x2dd47d\x2dd299\x2d0691c30e1a83.mount: Deactivated successfully. Oct 8 19:48:57.864329 systemd[1]: run-netns-cni\x2dbee23d8d\x2d54b6\x2d9afc\x2dfbbd\x2d034aec904a43.mount: Deactivated successfully. 
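The `run-netns-cni\x2d…​.mount` units that systemd deactivates above use systemd's unit-name escaping: a bare `-` encodes a path separator, while a literal `-` in the path is escaped as `\x2d`. A small decoder (a sketch; the real tool is `systemd-escape --unescape`) recovers the netns path from the unit name:

```python
def unit_to_path(unit: str) -> str:
    """Decode a systemd mount-unit name back to its filesystem path:
    '\\xNN' hex escapes become the original character (e.g. \\x2d -> '-'),
    and a bare '-' encodes '/'."""
    name = unit.removesuffix(".mount")
    out, i = [], 0
    while i < len(name):
        if name.startswith("\\x", i):
            out.append(chr(int(name[i + 2:i + 4], 16)))  # hex escape
            i += 4
        elif name[i] == "-":
            out.append("/")                              # separator
            i += 1
        else:
            out.append(name[i])
            i += 1
    return "/" + "".join(out)

unit = r"run-netns-cni\x2d9049ff06\x2dad7e\x2dd47d\x2dd299\x2d0691c30e1a83.mount"
path = unit_to_path(unit)
```

Decoding yields `/run/netns/cni-9049ff06-ad7e-d47d-d299-0691c30e1a83`, matching the `netns="/var/run/netns/…"` value in the teardown entries (`/var/run` is a symlink to `/run` on this image).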
Oct 8 19:48:57.986709 systemd-networkd[1253]: cali52e0fef1956: Link UP Oct 8 19:48:57.987487 systemd-networkd[1253]: cali52e0fef1956: Gained carrier Oct 8 19:48:58.013551 containerd[1594]: 2024-10-08 19:48:57.892 [INFO][4375] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0 coredns-76f75df574- kube-system 21948a20-d7e6-467b-a59a-37617ee3726e 723 0 2024-10-08 19:48:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975-2-2-d-c7549a9f5e coredns-76f75df574-s4sm5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali52e0fef1956 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536" Namespace="kube-system" Pod="coredns-76f75df574-s4sm5" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-" Oct 8 19:48:58.013551 containerd[1594]: 2024-10-08 19:48:57.892 [INFO][4375] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536" Namespace="kube-system" Pod="coredns-76f75df574-s4sm5" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0" Oct 8 19:48:58.013551 containerd[1594]: 2024-10-08 19:48:57.926 [INFO][4390] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536" HandleID="k8s-pod-network.e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0" Oct 8 19:48:58.013551 containerd[1594]: 2024-10-08 19:48:57.945 [INFO][4390] ipam_plugin.go 270: Auto assigning IP 
ContainerID="e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536" HandleID="k8s-pod-network.e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003785d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975-2-2-d-c7549a9f5e", "pod":"coredns-76f75df574-s4sm5", "timestamp":"2024-10-08 19:48:57.926971003 +0000 UTC"}, Hostname:"ci-3975-2-2-d-c7549a9f5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:48:58.013551 containerd[1594]: 2024-10-08 19:48:57.945 [INFO][4390] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:48:58.013551 containerd[1594]: 2024-10-08 19:48:57.945 [INFO][4390] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
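The auto-assign request above asks for one IPv4 address for host ci-3975-2-2-d-c7549a9f5e, and the entries that follow resolve it against the host's affine block 192.168.19.128/26, claiming 192.168.19.130. A quick check with the standard library that the claimed address really falls inside that block, and how many addresses a /26 holds:

```python
import ipaddress

block = ipaddress.ip_network("192.168.19.128/26")   # host's affine block from the log
claimed = ipaddress.ip_address("192.168.19.130")    # address claimed for the coredns pod

inside = claimed in block       # True: .130 is within .128-.191
size = block.num_addresses      # a /26 spans 64 addresses
```

Each node owning a /26 block with affinity is what lets the IPAM plugin assign addresses locally (under its host-wide lock) without contending with other nodes for every allocation.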
Oct 8 19:48:58.013551 containerd[1594]: 2024-10-08 19:48:57.945 [INFO][4390] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-d-c7549a9f5e' Oct 8 19:48:58.013551 containerd[1594]: 2024-10-08 19:48:57.948 [INFO][4390] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:58.013551 containerd[1594]: 2024-10-08 19:48:57.954 [INFO][4390] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:58.013551 containerd[1594]: 2024-10-08 19:48:57.959 [INFO][4390] ipam.go 489: Trying affinity for 192.168.19.128/26 host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:58.013551 containerd[1594]: 2024-10-08 19:48:57.961 [INFO][4390] ipam.go 155: Attempting to load block cidr=192.168.19.128/26 host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:58.013551 containerd[1594]: 2024-10-08 19:48:57.963 [INFO][4390] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:58.013551 containerd[1594]: 2024-10-08 19:48:57.963 [INFO][4390] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:58.013551 containerd[1594]: 2024-10-08 19:48:57.966 [INFO][4390] ipam.go 1685: Creating new handle: k8s-pod-network.e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536 Oct 8 19:48:58.013551 containerd[1594]: 2024-10-08 19:48:57.971 [INFO][4390] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:58.013551 containerd[1594]: 2024-10-08 19:48:57.980 [INFO][4390] ipam.go 1216: Successfully claimed IPs: [192.168.19.130/26] 
block=192.168.19.128/26 handle="k8s-pod-network.e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:58.013551 containerd[1594]: 2024-10-08 19:48:57.980 [INFO][4390] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.130/26] handle="k8s-pod-network.e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:58.013551 containerd[1594]: 2024-10-08 19:48:57.980 [INFO][4390] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:48:58.013551 containerd[1594]: 2024-10-08 19:48:57.980 [INFO][4390] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.19.130/26] IPv6=[] ContainerID="e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536" HandleID="k8s-pod-network.e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0" Oct 8 19:48:58.014104 containerd[1594]: 2024-10-08 19:48:57.983 [INFO][4375] k8s.go 386: Populated endpoint ContainerID="e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536" Namespace="kube-system" Pod="coredns-76f75df574-s4sm5" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"21948a20-d7e6-467b-a59a-37617ee3726e", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-d-c7549a9f5e", ContainerID:"", Pod:"coredns-76f75df574-s4sm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali52e0fef1956", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:48:58.014104 containerd[1594]: 2024-10-08 19:48:57.983 [INFO][4375] k8s.go 387: Calico CNI using IPs: [192.168.19.130/32] ContainerID="e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536" Namespace="kube-system" Pod="coredns-76f75df574-s4sm5" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0" Oct 8 19:48:58.014104 containerd[1594]: 2024-10-08 19:48:57.983 [INFO][4375] dataplane_linux.go 68: Setting the host side veth name to cali52e0fef1956 ContainerID="e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536" Namespace="kube-system" Pod="coredns-76f75df574-s4sm5" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0" Oct 8 19:48:58.014104 containerd[1594]: 2024-10-08 19:48:57.988 [INFO][4375] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536" Namespace="kube-system" 
Pod="coredns-76f75df574-s4sm5" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0" Oct 8 19:48:58.014104 containerd[1594]: 2024-10-08 19:48:57.990 [INFO][4375] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536" Namespace="kube-system" Pod="coredns-76f75df574-s4sm5" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"21948a20-d7e6-467b-a59a-37617ee3726e", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-d-c7549a9f5e", ContainerID:"e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536", Pod:"coredns-76f75df574-s4sm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali52e0fef1956", MAC:"2a:69:5a:a2:02:d2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:48:58.014104 containerd[1594]: 2024-10-08 19:48:58.010 [INFO][4375] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536" Namespace="kube-system" Pod="coredns-76f75df574-s4sm5" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0" Oct 8 19:48:58.053219 containerd[1594]: time="2024-10-08T19:48:58.052222265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:48:58.053219 containerd[1594]: time="2024-10-08T19:48:58.052448072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:58.053219 containerd[1594]: time="2024-10-08T19:48:58.052567477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:48:58.053219 containerd[1594]: time="2024-10-08T19:48:58.052579597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:58.067227 systemd-networkd[1253]: cali134de79f57f: Link UP Oct 8 19:48:58.068101 systemd-networkd[1253]: cali134de79f57f: Gained carrier Oct 8 19:48:58.094194 containerd[1594]: 2024-10-08 19:48:57.891 [INFO][4365] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0 calico-kube-controllers-75d69567cd- calico-system bca11565-40aa-4871-8cd0-05721317a01c 724 0 2024-10-08 19:48:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75d69567cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975-2-2-d-c7549a9f5e calico-kube-controllers-75d69567cd-r5pmq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali134de79f57f [] []}} ContainerID="468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823" Namespace="calico-system" Pod="calico-kube-controllers-75d69567cd-r5pmq" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-" Oct 8 19:48:58.094194 containerd[1594]: 2024-10-08 19:48:57.892 [INFO][4365] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823" Namespace="calico-system" Pod="calico-kube-controllers-75d69567cd-r5pmq" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0" Oct 8 19:48:58.094194 containerd[1594]: 2024-10-08 19:48:57.936 [INFO][4394] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823" HandleID="k8s-pod-network.468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823" 
Workload="ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0" Oct 8 19:48:58.094194 containerd[1594]: 2024-10-08 19:48:57.950 [INFO][4394] ipam_plugin.go 270: Auto assigning IP ContainerID="468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823" HandleID="k8s-pod-network.468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001148c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975-2-2-d-c7549a9f5e", "pod":"calico-kube-controllers-75d69567cd-r5pmq", "timestamp":"2024-10-08 19:48:57.936035437 +0000 UTC"}, Hostname:"ci-3975-2-2-d-c7549a9f5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:48:58.094194 containerd[1594]: 2024-10-08 19:48:57.950 [INFO][4394] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:48:58.094194 containerd[1594]: 2024-10-08 19:48:57.981 [INFO][4394] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
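The containerd entries interleaved through this log use logfmt-style `key=value` fields (`time="…" level=info msg="…"`). A small parser — a sketch assuming only the field shapes visible in this log — splits such a line into a dict, handling quoted values with embedded escaped quotes:

```python
import re

# Matches key=value pairs where value is either a double-quoted string
# (possibly containing \" escapes, as in the msg fields above) or a bare token.
FIELD = re.compile(r'(\w+)=("((?:[^"\\]|\\.)*)"|\S+)')

def parse_containerd(line: str) -> dict:
    fields = {}
    for key, raw, quoted in FIELD.findall(line):
        # quoted holds the inner text when the value was "…"-delimited
        fields[key] = quoted if raw.startswith('"') else raw
    return fields

entry = parse_containerd(
    'time="2024-10-08T19:48:58.052448072Z" level=info '
    'msg="loading plugin \\"io.containerd.ttrpc.v1.pause\\"..." '
    'runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1')
```

With a parser like this, the repeated "loading plugin" lines can be grouped by `runtime`/`type`, and `time` fields compared to measure how long each sandbox setup took.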
Oct 8 19:48:58.094194 containerd[1594]: 2024-10-08 19:48:57.981 [INFO][4394] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-d-c7549a9f5e' Oct 8 19:48:58.094194 containerd[1594]: 2024-10-08 19:48:57.987 [INFO][4394] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:58.094194 containerd[1594]: 2024-10-08 19:48:57.999 [INFO][4394] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:58.094194 containerd[1594]: 2024-10-08 19:48:58.014 [INFO][4394] ipam.go 489: Trying affinity for 192.168.19.128/26 host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:58.094194 containerd[1594]: 2024-10-08 19:48:58.017 [INFO][4394] ipam.go 155: Attempting to load block cidr=192.168.19.128/26 host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:58.094194 containerd[1594]: 2024-10-08 19:48:58.022 [INFO][4394] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:58.094194 containerd[1594]: 2024-10-08 19:48:58.022 [INFO][4394] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:58.094194 containerd[1594]: 2024-10-08 19:48:58.024 [INFO][4394] ipam.go 1685: Creating new handle: k8s-pod-network.468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823 Oct 8 19:48:58.094194 containerd[1594]: 2024-10-08 19:48:58.043 [INFO][4394] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:58.094194 containerd[1594]: 2024-10-08 19:48:58.052 [INFO][4394] ipam.go 1216: Successfully claimed IPs: [192.168.19.131/26] 
block=192.168.19.128/26 handle="k8s-pod-network.468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:58.094194 containerd[1594]: 2024-10-08 19:48:58.053 [INFO][4394] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.131/26] handle="k8s-pod-network.468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:58.094194 containerd[1594]: 2024-10-08 19:48:58.053 [INFO][4394] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:48:58.094194 containerd[1594]: 2024-10-08 19:48:58.053 [INFO][4394] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.19.131/26] IPv6=[] ContainerID="468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823" HandleID="k8s-pod-network.468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0" Oct 8 19:48:58.094843 containerd[1594]: 2024-10-08 19:48:58.060 [INFO][4365] k8s.go 386: Populated endpoint ContainerID="468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823" Namespace="calico-system" Pod="calico-kube-controllers-75d69567cd-r5pmq" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0", GenerateName:"calico-kube-controllers-75d69567cd-", Namespace:"calico-system", SelfLink:"", UID:"bca11565-40aa-4871-8cd0-05721317a01c", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"75d69567cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-d-c7549a9f5e", ContainerID:"", Pod:"calico-kube-controllers-75d69567cd-r5pmq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali134de79f57f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:48:58.094843 containerd[1594]: 2024-10-08 19:48:58.060 [INFO][4365] k8s.go 387: Calico CNI using IPs: [192.168.19.131/32] ContainerID="468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823" Namespace="calico-system" Pod="calico-kube-controllers-75d69567cd-r5pmq" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0" Oct 8 19:48:58.094843 containerd[1594]: 2024-10-08 19:48:58.060 [INFO][4365] dataplane_linux.go 68: Setting the host side veth name to cali134de79f57f ContainerID="468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823" Namespace="calico-system" Pod="calico-kube-controllers-75d69567cd-r5pmq" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0" Oct 8 19:48:58.094843 containerd[1594]: 2024-10-08 19:48:58.068 [INFO][4365] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823" Namespace="calico-system" Pod="calico-kube-controllers-75d69567cd-r5pmq" 
WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0" Oct 8 19:48:58.094843 containerd[1594]: 2024-10-08 19:48:58.069 [INFO][4365] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823" Namespace="calico-system" Pod="calico-kube-controllers-75d69567cd-r5pmq" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0", GenerateName:"calico-kube-controllers-75d69567cd-", Namespace:"calico-system", SelfLink:"", UID:"bca11565-40aa-4871-8cd0-05721317a01c", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75d69567cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-d-c7549a9f5e", ContainerID:"468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823", Pod:"calico-kube-controllers-75d69567cd-r5pmq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, 
InterfaceName:"cali134de79f57f", MAC:"0a:dd:3d:ed:3e:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:48:58.094843 containerd[1594]: 2024-10-08 19:48:58.087 [INFO][4365] k8s.go 500: Wrote updated endpoint to datastore ContainerID="468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823" Namespace="calico-system" Pod="calico-kube-controllers-75d69567cd-r5pmq" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0" Oct 8 19:48:58.130205 containerd[1594]: time="2024-10-08T19:48:58.130006841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:48:58.130205 containerd[1594]: time="2024-10-08T19:48:58.130089924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:58.130205 containerd[1594]: time="2024-10-08T19:48:58.130109645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:48:58.130205 containerd[1594]: time="2024-10-08T19:48:58.130129845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:58.132401 containerd[1594]: time="2024-10-08T19:48:58.132258919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s4sm5,Uid:21948a20-d7e6-467b-a59a-37617ee3726e,Namespace:kube-system,Attempt:1,} returns sandbox id \"e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536\"" Oct 8 19:48:58.140739 containerd[1594]: time="2024-10-08T19:48:58.140590248Z" level=info msg="CreateContainer within sandbox \"e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:48:58.172306 containerd[1594]: time="2024-10-08T19:48:58.172074579Z" level=info msg="CreateContainer within sandbox \"e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3cc45a73e3b97ce05839a33b7a8a84ff562aaa00953893b50d0d0b2ba04f8737\"" Oct 8 19:48:58.173512 containerd[1594]: time="2024-10-08T19:48:58.173477148Z" level=info msg="StartContainer for \"3cc45a73e3b97ce05839a33b7a8a84ff562aaa00953893b50d0d0b2ba04f8737\"" Oct 8 19:48:58.214757 containerd[1594]: time="2024-10-08T19:48:58.214709257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75d69567cd-r5pmq,Uid:bca11565-40aa-4871-8cd0-05721317a01c,Namespace:calico-system,Attempt:1,} returns sandbox id \"468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823\"" Oct 8 19:48:58.243442 containerd[1594]: time="2024-10-08T19:48:58.241308020Z" level=info msg="StartContainer for \"3cc45a73e3b97ce05839a33b7a8a84ff562aaa00953893b50d0d0b2ba04f8737\" returns successfully" Oct 8 19:48:58.501990 systemd-networkd[1253]: cali19e2135d437: Gained IPv6LL Oct 8 19:48:58.611732 containerd[1594]: time="2024-10-08T19:48:58.611680339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 
19:48:58.612536 containerd[1594]: time="2024-10-08T19:48:58.612506128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Oct 8 19:48:58.613161 containerd[1594]: time="2024-10-08T19:48:58.612931062Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:58.615213 containerd[1594]: time="2024-10-08T19:48:58.615170180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:48:58.616016 containerd[1594]: time="2024-10-08T19:48:58.615971448Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 1.414239301s" Oct 8 19:48:58.616082 containerd[1594]: time="2024-10-08T19:48:58.616018089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Oct 8 19:48:58.617333 containerd[1594]: time="2024-10-08T19:48:58.617266613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 8 19:48:58.619918 containerd[1594]: time="2024-10-08T19:48:58.619863623Z" level=info msg="CreateContainer within sandbox \"4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 8 19:48:58.640520 containerd[1594]: time="2024-10-08T19:48:58.640468937Z" level=info msg="CreateContainer within sandbox 
\"4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3043e3b80fab8e74b915abc668f9a1746e27a544a63eb6178ca1dfdfbb53be47\"" Oct 8 19:48:58.641646 containerd[1594]: time="2024-10-08T19:48:58.641403569Z" level=info msg="StartContainer for \"3043e3b80fab8e74b915abc668f9a1746e27a544a63eb6178ca1dfdfbb53be47\"" Oct 8 19:48:58.681832 containerd[1594]: time="2024-10-08T19:48:58.681438917Z" level=info msg="StopPodSandbox for \"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\"" Oct 8 19:48:58.769849 containerd[1594]: time="2024-10-08T19:48:58.769596493Z" level=info msg="StartContainer for \"3043e3b80fab8e74b915abc668f9a1746e27a544a63eb6178ca1dfdfbb53be47\" returns successfully" Oct 8 19:48:58.812131 containerd[1594]: 2024-10-08 19:48:58.755 [INFO][4592] k8s.go 608: Cleaning up netns ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" Oct 8 19:48:58.812131 containerd[1594]: 2024-10-08 19:48:58.761 [INFO][4592] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" iface="eth0" netns="/var/run/netns/cni-425064e1-8ff3-3642-1070-378252a481a2" Oct 8 19:48:58.812131 containerd[1594]: 2024-10-08 19:48:58.761 [INFO][4592] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" iface="eth0" netns="/var/run/netns/cni-425064e1-8ff3-3642-1070-378252a481a2" Oct 8 19:48:58.812131 containerd[1594]: 2024-10-08 19:48:58.761 [INFO][4592] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" iface="eth0" netns="/var/run/netns/cni-425064e1-8ff3-3642-1070-378252a481a2" Oct 8 19:48:58.812131 containerd[1594]: 2024-10-08 19:48:58.762 [INFO][4592] k8s.go 615: Releasing IP address(es) ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" Oct 8 19:48:58.812131 containerd[1594]: 2024-10-08 19:48:58.762 [INFO][4592] utils.go 188: Calico CNI releasing IP address ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" Oct 8 19:48:58.812131 containerd[1594]: 2024-10-08 19:48:58.796 [INFO][4607] ipam_plugin.go 417: Releasing address using handleID ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" HandleID="k8s-pod-network.e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0" Oct 8 19:48:58.812131 containerd[1594]: 2024-10-08 19:48:58.796 [INFO][4607] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:48:58.812131 containerd[1594]: 2024-10-08 19:48:58.796 [INFO][4607] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:48:58.812131 containerd[1594]: 2024-10-08 19:48:58.806 [WARNING][4607] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" HandleID="k8s-pod-network.e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0" Oct 8 19:48:58.812131 containerd[1594]: 2024-10-08 19:48:58.806 [INFO][4607] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" HandleID="k8s-pod-network.e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0" Oct 8 19:48:58.812131 containerd[1594]: 2024-10-08 19:48:58.808 [INFO][4607] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:48:58.812131 containerd[1594]: 2024-10-08 19:48:58.810 [INFO][4592] k8s.go 621: Teardown processing complete. ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" Oct 8 19:48:58.812841 containerd[1594]: time="2024-10-08T19:48:58.812333255Z" level=info msg="TearDown network for sandbox \"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\" successfully" Oct 8 19:48:58.812841 containerd[1594]: time="2024-10-08T19:48:58.812374776Z" level=info msg="StopPodSandbox for \"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\" returns successfully" Oct 8 19:48:58.813547 containerd[1594]: time="2024-10-08T19:48:58.813143203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vwmgr,Uid:e15cac1e-985b-429c-869f-52ca0b633720,Namespace:kube-system,Attempt:1,}" Oct 8 19:48:58.863465 systemd[1]: run-netns-cni\x2d425064e1\x2d8ff3\x2d3642\x2d1070\x2d378252a481a2.mount: Deactivated successfully. 
Oct 8 19:48:58.962704 kubelet[2985]: I1008 19:48:58.962560 2985 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-s4sm5" podStartSLOduration=31.962515061 podStartE2EDuration="31.962515061s" podCreationTimestamp="2024-10-08 19:48:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:48:58.942051752 +0000 UTC m=+45.428613055" watchObservedRunningTime="2024-10-08 19:48:58.962515061 +0000 UTC m=+45.449076284" Oct 8 19:48:59.007655 systemd-networkd[1253]: cali71f0d4257f5: Link UP Oct 8 19:48:59.008256 systemd-networkd[1253]: cali71f0d4257f5: Gained carrier Oct 8 19:48:59.026832 containerd[1594]: 2024-10-08 19:48:58.872 [INFO][4614] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0 coredns-76f75df574- kube-system e15cac1e-985b-429c-869f-52ca0b633720 738 0 2024-10-08 19:48:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975-2-2-d-c7549a9f5e coredns-76f75df574-vwmgr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali71f0d4257f5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59" Namespace="kube-system" Pod="coredns-76f75df574-vwmgr" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-" Oct 8 19:48:59.026832 containerd[1594]: 2024-10-08 19:48:58.872 [INFO][4614] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59" Namespace="kube-system" Pod="coredns-76f75df574-vwmgr" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0" Oct 8 
19:48:59.026832 containerd[1594]: 2024-10-08 19:48:58.903 [INFO][4624] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59" HandleID="k8s-pod-network.c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0" Oct 8 19:48:59.026832 containerd[1594]: 2024-10-08 19:48:58.923 [INFO][4624] ipam_plugin.go 270: Auto assigning IP ContainerID="c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59" HandleID="k8s-pod-network.c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c730), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975-2-2-d-c7549a9f5e", "pod":"coredns-76f75df574-vwmgr", "timestamp":"2024-10-08 19:48:58.903796105 +0000 UTC"}, Hostname:"ci-3975-2-2-d-c7549a9f5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:48:59.026832 containerd[1594]: 2024-10-08 19:48:58.924 [INFO][4624] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:48:59.026832 containerd[1594]: 2024-10-08 19:48:58.925 [INFO][4624] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:48:59.026832 containerd[1594]: 2024-10-08 19:48:58.926 [INFO][4624] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-d-c7549a9f5e' Oct 8 19:48:59.026832 containerd[1594]: 2024-10-08 19:48:58.931 [INFO][4624] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:59.026832 containerd[1594]: 2024-10-08 19:48:58.938 [INFO][4624] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:59.026832 containerd[1594]: 2024-10-08 19:48:58.957 [INFO][4624] ipam.go 489: Trying affinity for 192.168.19.128/26 host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:59.026832 containerd[1594]: 2024-10-08 19:48:58.961 [INFO][4624] ipam.go 155: Attempting to load block cidr=192.168.19.128/26 host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:59.026832 containerd[1594]: 2024-10-08 19:48:58.968 [INFO][4624] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:59.026832 containerd[1594]: 2024-10-08 19:48:58.968 [INFO][4624] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:59.026832 containerd[1594]: 2024-10-08 19:48:58.975 [INFO][4624] ipam.go 1685: Creating new handle: k8s-pod-network.c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59 Oct 8 19:48:59.026832 containerd[1594]: 2024-10-08 19:48:58.986 [INFO][4624] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:59.026832 containerd[1594]: 2024-10-08 19:48:59.000 [INFO][4624] ipam.go 1216: Successfully claimed IPs: [192.168.19.132/26] 
block=192.168.19.128/26 handle="k8s-pod-network.c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:59.026832 containerd[1594]: 2024-10-08 19:48:59.000 [INFO][4624] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.132/26] handle="k8s-pod-network.c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:48:59.026832 containerd[1594]: 2024-10-08 19:48:59.000 [INFO][4624] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:48:59.026832 containerd[1594]: 2024-10-08 19:48:59.000 [INFO][4624] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.19.132/26] IPv6=[] ContainerID="c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59" HandleID="k8s-pod-network.c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0" Oct 8 19:48:59.027421 containerd[1594]: 2024-10-08 19:48:59.004 [INFO][4614] k8s.go 386: Populated endpoint ContainerID="c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59" Namespace="kube-system" Pod="coredns-76f75df574-vwmgr" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e15cac1e-985b-429c-869f-52ca0b633720", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-d-c7549a9f5e", ContainerID:"", Pod:"coredns-76f75df574-vwmgr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71f0d4257f5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:48:59.027421 containerd[1594]: 2024-10-08 19:48:59.004 [INFO][4614] k8s.go 387: Calico CNI using IPs: [192.168.19.132/32] ContainerID="c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59" Namespace="kube-system" Pod="coredns-76f75df574-vwmgr" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0" Oct 8 19:48:59.027421 containerd[1594]: 2024-10-08 19:48:59.004 [INFO][4614] dataplane_linux.go 68: Setting the host side veth name to cali71f0d4257f5 ContainerID="c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59" Namespace="kube-system" Pod="coredns-76f75df574-vwmgr" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0" Oct 8 19:48:59.027421 containerd[1594]: 2024-10-08 19:48:59.008 [INFO][4614] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59" Namespace="kube-system" 
Pod="coredns-76f75df574-vwmgr" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0" Oct 8 19:48:59.027421 containerd[1594]: 2024-10-08 19:48:59.008 [INFO][4614] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59" Namespace="kube-system" Pod="coredns-76f75df574-vwmgr" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e15cac1e-985b-429c-869f-52ca0b633720", ResourceVersion:"738", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-d-c7549a9f5e", ContainerID:"c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59", Pod:"coredns-76f75df574-vwmgr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71f0d4257f5", MAC:"82:07:f3:51:2a:fe", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:48:59.027421 containerd[1594]: 2024-10-08 19:48:59.021 [INFO][4614] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59" Namespace="kube-system" Pod="coredns-76f75df574-vwmgr" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0" Oct 8 19:48:59.061920 containerd[1594]: time="2024-10-08T19:48:59.061585536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:48:59.061920 containerd[1594]: time="2024-10-08T19:48:59.061704300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:59.061920 containerd[1594]: time="2024-10-08T19:48:59.061734261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:48:59.061920 containerd[1594]: time="2024-10-08T19:48:59.061760022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:48:59.119153 containerd[1594]: time="2024-10-08T19:48:59.119092610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-vwmgr,Uid:e15cac1e-985b-429c-869f-52ca0b633720,Namespace:kube-system,Attempt:1,} returns sandbox id \"c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59\"" Oct 8 19:48:59.122322 containerd[1594]: time="2024-10-08T19:48:59.122011311Z" level=info msg="CreateContainer within sandbox \"c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:48:59.143627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2811194637.mount: Deactivated successfully. Oct 8 19:48:59.144651 containerd[1594]: time="2024-10-08T19:48:59.144525131Z" level=info msg="CreateContainer within sandbox \"c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e17cc77fc97ab8ca1afcd9223d9b3ab9af2f9cc7ee16dd1e6acc5f420ac9f7c4\"" Oct 8 19:48:59.146994 containerd[1594]: time="2024-10-08T19:48:59.145524006Z" level=info msg="StartContainer for \"e17cc77fc97ab8ca1afcd9223d9b3ab9af2f9cc7ee16dd1e6acc5f420ac9f7c4\"" Oct 8 19:48:59.204905 containerd[1594]: time="2024-10-08T19:48:59.204857143Z" level=info msg="StartContainer for \"e17cc77fc97ab8ca1afcd9223d9b3ab9af2f9cc7ee16dd1e6acc5f420ac9f7c4\" returns successfully" Oct 8 19:48:59.321501 systemd-networkd[1253]: cali52e0fef1956: Gained IPv6LL Oct 8 19:48:59.449977 systemd-networkd[1253]: cali134de79f57f: Gained IPv6LL Oct 8 19:48:59.961677 kubelet[2985]: I1008 19:48:59.961231 2985 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-vwmgr" podStartSLOduration=32.961175447 podStartE2EDuration="32.961175447s" podCreationTimestamp="2024-10-08 19:48:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:48:59.94166969 +0000 UTC m=+46.428230913" watchObservedRunningTime="2024-10-08 19:48:59.961175447 +0000 UTC m=+46.447736670" Oct 8 19:49:00.799273 systemd-networkd[1253]: cali71f0d4257f5: Gained IPv6LL Oct 8 19:49:01.115402 containerd[1594]: time="2024-10-08T19:49:01.115057181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:01.116398 containerd[1594]: time="2024-10-08T19:49:01.116358226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Oct 8 19:49:01.117675 containerd[1594]: time="2024-10-08T19:49:01.117616310Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:01.120368 containerd[1594]: time="2024-10-08T19:49:01.120084276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:01.120782 containerd[1594]: time="2024-10-08T19:49:01.120751019Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 2.503428084s" Oct 8 19:49:01.120831 containerd[1594]: time="2024-10-08T19:49:01.120782820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" 
Oct 8 19:49:01.125425 containerd[1594]: time="2024-10-08T19:49:01.125396740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 8 19:49:01.154558 containerd[1594]: time="2024-10-08T19:49:01.154520870Z" level=info msg="CreateContainer within sandbox \"468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 8 19:49:01.174720 containerd[1594]: time="2024-10-08T19:49:01.174022066Z" level=info msg="CreateContainer within sandbox \"468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"dcd8253e8fcdcc5c4271cdfd4bd08035678ab4055fd6241a20bc380109add37f\"" Oct 8 19:49:01.176706 containerd[1594]: time="2024-10-08T19:49:01.176675998Z" level=info msg="StartContainer for \"dcd8253e8fcdcc5c4271cdfd4bd08035678ab4055fd6241a20bc380109add37f\"" Oct 8 19:49:01.182069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2868559205.mount: Deactivated successfully. Oct 8 19:49:01.264053 containerd[1594]: time="2024-10-08T19:49:01.264002547Z" level=info msg="StartContainer for \"dcd8253e8fcdcc5c4271cdfd4bd08035678ab4055fd6241a20bc380109add37f\" returns successfully" Oct 8 19:49:01.684259 systemd[1]: Started sshd@8-168.119.51.132:22-81.161.238.160:41368.service - OpenSSH per-connection server daemon (81.161.238.160:41368). Oct 8 19:49:01.957730 sshd[4782]: Connection closed by authenticating user root 81.161.238.160 port 41368 [preauth] Oct 8 19:49:01.961521 systemd[1]: sshd@8-168.119.51.132:22-81.161.238.160:41368.service: Deactivated successfully. 
Oct 8 19:49:02.044308 kubelet[2985]: I1008 19:49:02.041891 2985 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-75d69567cd-r5pmq" podStartSLOduration=24.138025362 podStartE2EDuration="27.041837926s" podCreationTimestamp="2024-10-08 19:48:35 +0000 UTC" firstStartedPulling="2024-10-08 19:48:58.217604998 +0000 UTC m=+44.704166221" lastFinishedPulling="2024-10-08 19:49:01.121417562 +0000 UTC m=+47.607978785" observedRunningTime="2024-10-08 19:49:01.988406952 +0000 UTC m=+48.474968175" watchObservedRunningTime="2024-10-08 19:49:02.041837926 +0000 UTC m=+48.528399149" Oct 8 19:49:02.773339 containerd[1594]: time="2024-10-08T19:49:02.773009529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:02.774240 containerd[1594]: time="2024-10-08T19:49:02.774176570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Oct 8 19:49:02.776150 containerd[1594]: time="2024-10-08T19:49:02.776098036Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:02.777482 containerd[1594]: time="2024-10-08T19:49:02.777442323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:49:02.778535 containerd[1594]: time="2024-10-08T19:49:02.778381316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 1.652803249s" Oct 8 19:49:02.778535 containerd[1594]: time="2024-10-08T19:49:02.778432957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Oct 8 19:49:02.794824 containerd[1594]: time="2024-10-08T19:49:02.794771044Z" level=info msg="CreateContainer within sandbox \"4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 8 19:49:02.813888 containerd[1594]: time="2024-10-08T19:49:02.813823465Z" level=info msg="CreateContainer within sandbox \"4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"45c1d881d1e051bf8e1ec6e1e764e6eed3a98977a86669de4d33e9e01cd8f44e\"" Oct 8 19:49:02.818516 containerd[1594]: time="2024-10-08T19:49:02.818280500Z" level=info msg="StartContainer for \"45c1d881d1e051bf8e1ec6e1e764e6eed3a98977a86669de4d33e9e01cd8f44e\"" Oct 8 19:49:02.889815 containerd[1594]: time="2024-10-08T19:49:02.889664776Z" level=info msg="StartContainer for \"45c1d881d1e051bf8e1ec6e1e764e6eed3a98977a86669de4d33e9e01cd8f44e\" returns successfully" Oct 8 19:49:02.976808 kubelet[2985]: I1008 19:49:02.976547 2985 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-2494q" podStartSLOduration=22.399007349 podStartE2EDuration="27.976496708s" podCreationTimestamp="2024-10-08 19:48:35 +0000 UTC" firstStartedPulling="2024-10-08 19:48:57.201328532 +0000 UTC m=+43.687889715" lastFinishedPulling="2024-10-08 19:49:02.778817851 +0000 UTC m=+49.265379074" observedRunningTime="2024-10-08 19:49:02.976061373 +0000 UTC m=+49.462622636" watchObservedRunningTime="2024-10-08 
19:49:02.976496708 +0000 UTC m=+49.463057931" Oct 8 19:49:03.821867 kubelet[2985]: I1008 19:49:03.821763 2985 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 8 19:49:03.821867 kubelet[2985]: I1008 19:49:03.821842 2985 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 8 19:49:09.241379 kubelet[2985]: I1008 19:49:09.240730 2985 topology_manager.go:215] "Topology Admit Handler" podUID="2f9ff4dc-105e-41a1-a644-db16e2e42690" podNamespace="calico-apiserver" podName="calico-apiserver-7758dc7779-trrsg" Oct 8 19:49:09.255325 kubelet[2985]: I1008 19:49:09.252073 2985 topology_manager.go:215] "Topology Admit Handler" podUID="80ef101a-4cba-4744-a9c8-69d2e6d0c7f9" podNamespace="calico-apiserver" podName="calico-apiserver-7758dc7779-ngz5t" Oct 8 19:49:09.280211 kubelet[2985]: I1008 19:49:09.280092 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/80ef101a-4cba-4744-a9c8-69d2e6d0c7f9-calico-apiserver-certs\") pod \"calico-apiserver-7758dc7779-ngz5t\" (UID: \"80ef101a-4cba-4744-a9c8-69d2e6d0c7f9\") " pod="calico-apiserver/calico-apiserver-7758dc7779-ngz5t" Oct 8 19:49:09.281187 kubelet[2985]: I1008 19:49:09.281131 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmktl\" (UniqueName: \"kubernetes.io/projected/80ef101a-4cba-4744-a9c8-69d2e6d0c7f9-kube-api-access-wmktl\") pod \"calico-apiserver-7758dc7779-ngz5t\" (UID: \"80ef101a-4cba-4744-a9c8-69d2e6d0c7f9\") " pod="calico-apiserver/calico-apiserver-7758dc7779-ngz5t" Oct 8 19:49:09.281762 kubelet[2985]: I1008 19:49:09.281745 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2f9ff4dc-105e-41a1-a644-db16e2e42690-calico-apiserver-certs\") pod \"calico-apiserver-7758dc7779-trrsg\" (UID: \"2f9ff4dc-105e-41a1-a644-db16e2e42690\") " pod="calico-apiserver/calico-apiserver-7758dc7779-trrsg" Oct 8 19:49:09.282263 kubelet[2985]: I1008 19:49:09.282198 2985 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thzq9\" (UniqueName: \"kubernetes.io/projected/2f9ff4dc-105e-41a1-a644-db16e2e42690-kube-api-access-thzq9\") pod \"calico-apiserver-7758dc7779-trrsg\" (UID: \"2f9ff4dc-105e-41a1-a644-db16e2e42690\") " pod="calico-apiserver/calico-apiserver-7758dc7779-trrsg" Oct 8 19:49:09.384966 kubelet[2985]: E1008 19:49:09.384923 2985 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 19:49:09.385134 kubelet[2985]: E1008 19:49:09.385008 2985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80ef101a-4cba-4744-a9c8-69d2e6d0c7f9-calico-apiserver-certs podName:80ef101a-4cba-4744-a9c8-69d2e6d0c7f9 nodeName:}" failed. No retries permitted until 2024-10-08 19:49:09.884988891 +0000 UTC m=+56.371550114 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/80ef101a-4cba-4744-a9c8-69d2e6d0c7f9-calico-apiserver-certs") pod "calico-apiserver-7758dc7779-ngz5t" (UID: "80ef101a-4cba-4744-a9c8-69d2e6d0c7f9") : secret "calico-apiserver-certs" not found Oct 8 19:49:09.386322 kubelet[2985]: E1008 19:49:09.385357 2985 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 19:49:09.386322 kubelet[2985]: E1008 19:49:09.385404 2985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f9ff4dc-105e-41a1-a644-db16e2e42690-calico-apiserver-certs podName:2f9ff4dc-105e-41a1-a644-db16e2e42690 nodeName:}" failed. No retries permitted until 2024-10-08 19:49:09.885392225 +0000 UTC m=+56.371953448 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/2f9ff4dc-105e-41a1-a644-db16e2e42690-calico-apiserver-certs") pod "calico-apiserver-7758dc7779-trrsg" (UID: "2f9ff4dc-105e-41a1-a644-db16e2e42690") : secret "calico-apiserver-certs" not found Oct 8 19:49:09.888805 kubelet[2985]: E1008 19:49:09.888731 2985 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 19:49:09.889440 kubelet[2985]: E1008 19:49:09.888848 2985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f9ff4dc-105e-41a1-a644-db16e2e42690-calico-apiserver-certs podName:2f9ff4dc-105e-41a1-a644-db16e2e42690 nodeName:}" failed. No retries permitted until 2024-10-08 19:49:10.888821866 +0000 UTC m=+57.375383089 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/2f9ff4dc-105e-41a1-a644-db16e2e42690-calico-apiserver-certs") pod "calico-apiserver-7758dc7779-trrsg" (UID: "2f9ff4dc-105e-41a1-a644-db16e2e42690") : secret "calico-apiserver-certs" not found Oct 8 19:49:09.889574 kubelet[2985]: E1008 19:49:09.889470 2985 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 19:49:09.889574 kubelet[2985]: E1008 19:49:09.889537 2985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80ef101a-4cba-4744-a9c8-69d2e6d0c7f9-calico-apiserver-certs podName:80ef101a-4cba-4744-a9c8-69d2e6d0c7f9 nodeName:}" failed. No retries permitted until 2024-10-08 19:49:10.88951577 +0000 UTC m=+57.376077033 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/80ef101a-4cba-4744-a9c8-69d2e6d0c7f9-calico-apiserver-certs") pod "calico-apiserver-7758dc7779-ngz5t" (UID: "80ef101a-4cba-4744-a9c8-69d2e6d0c7f9") : secret "calico-apiserver-certs" not found Oct 8 19:49:11.050892 containerd[1594]: time="2024-10-08T19:49:11.050461046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7758dc7779-trrsg,Uid:2f9ff4dc-105e-41a1-a644-db16e2e42690,Namespace:calico-apiserver,Attempt:0,}" Oct 8 19:49:11.066306 containerd[1594]: time="2024-10-08T19:49:11.065181237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7758dc7779-ngz5t,Uid:80ef101a-4cba-4744-a9c8-69d2e6d0c7f9,Namespace:calico-apiserver,Attempt:0,}" Oct 8 19:49:11.268206 systemd-networkd[1253]: cali873916d8a92: Link UP Oct 8 19:49:11.271018 systemd-networkd[1253]: cali873916d8a92: Gained carrier Oct 8 19:49:11.294694 containerd[1594]: 2024-10-08 19:49:11.119 [INFO][4893] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--trrsg-eth0 calico-apiserver-7758dc7779- calico-apiserver 2f9ff4dc-105e-41a1-a644-db16e2e42690 850 0 2024-10-08 19:49:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7758dc7779 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975-2-2-d-c7549a9f5e calico-apiserver-7758dc7779-trrsg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali873916d8a92 [] []}} ContainerID="b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872" Namespace="calico-apiserver" Pod="calico-apiserver-7758dc7779-trrsg" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--trrsg-" Oct 8 19:49:11.294694 containerd[1594]: 2024-10-08 19:49:11.119 [INFO][4893] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872" Namespace="calico-apiserver" Pod="calico-apiserver-7758dc7779-trrsg" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--trrsg-eth0" Oct 8 19:49:11.294694 containerd[1594]: 2024-10-08 19:49:11.181 [INFO][4918] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872" HandleID="k8s-pod-network.b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--trrsg-eth0" Oct 8 19:49:11.294694 containerd[1594]: 2024-10-08 19:49:11.201 [INFO][4918] ipam_plugin.go 270: Auto assigning IP ContainerID="b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872" HandleID="k8s-pod-network.b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872" 
Workload="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--trrsg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ebbc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975-2-2-d-c7549a9f5e", "pod":"calico-apiserver-7758dc7779-trrsg", "timestamp":"2024-10-08 19:49:11.180668768 +0000 UTC"}, Hostname:"ci-3975-2-2-d-c7549a9f5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:49:11.294694 containerd[1594]: 2024-10-08 19:49:11.201 [INFO][4918] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:11.294694 containerd[1594]: 2024-10-08 19:49:11.201 [INFO][4918] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:49:11.294694 containerd[1594]: 2024-10-08 19:49:11.201 [INFO][4918] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-d-c7549a9f5e' Oct 8 19:49:11.294694 containerd[1594]: 2024-10-08 19:49:11.205 [INFO][4918] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:49:11.294694 containerd[1594]: 2024-10-08 19:49:11.211 [INFO][4918] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:49:11.294694 containerd[1594]: 2024-10-08 19:49:11.230 [INFO][4918] ipam.go 489: Trying affinity for 192.168.19.128/26 host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:49:11.294694 containerd[1594]: 2024-10-08 19:49:11.233 [INFO][4918] ipam.go 155: Attempting to load block cidr=192.168.19.128/26 host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:49:11.294694 containerd[1594]: 2024-10-08 19:49:11.237 [INFO][4918] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:49:11.294694 containerd[1594]: 
2024-10-08 19:49:11.237 [INFO][4918] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:49:11.294694 containerd[1594]: 2024-10-08 19:49:11.240 [INFO][4918] ipam.go 1685: Creating new handle: k8s-pod-network.b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872 Oct 8 19:49:11.294694 containerd[1594]: 2024-10-08 19:49:11.249 [INFO][4918] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:49:11.294694 containerd[1594]: 2024-10-08 19:49:11.257 [INFO][4918] ipam.go 1216: Successfully claimed IPs: [192.168.19.133/26] block=192.168.19.128/26 handle="k8s-pod-network.b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:49:11.294694 containerd[1594]: 2024-10-08 19:49:11.257 [INFO][4918] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.133/26] handle="k8s-pod-network.b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:49:11.294694 containerd[1594]: 2024-10-08 19:49:11.257 [INFO][4918] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:49:11.294694 containerd[1594]: 2024-10-08 19:49:11.257 [INFO][4918] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.19.133/26] IPv6=[] ContainerID="b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872" HandleID="k8s-pod-network.b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--trrsg-eth0" Oct 8 19:49:11.295796 containerd[1594]: 2024-10-08 19:49:11.261 [INFO][4893] k8s.go 386: Populated endpoint ContainerID="b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872" Namespace="calico-apiserver" Pod="calico-apiserver-7758dc7779-trrsg" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--trrsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--trrsg-eth0", GenerateName:"calico-apiserver-7758dc7779-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f9ff4dc-105e-41a1-a644-db16e2e42690", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 49, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7758dc7779", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-d-c7549a9f5e", ContainerID:"", Pod:"calico-apiserver-7758dc7779-trrsg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.19.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali873916d8a92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:11.295796 containerd[1594]: 2024-10-08 19:49:11.261 [INFO][4893] k8s.go 387: Calico CNI using IPs: [192.168.19.133/32] ContainerID="b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872" Namespace="calico-apiserver" Pod="calico-apiserver-7758dc7779-trrsg" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--trrsg-eth0" Oct 8 19:49:11.295796 containerd[1594]: 2024-10-08 19:49:11.261 [INFO][4893] dataplane_linux.go 68: Setting the host side veth name to cali873916d8a92 ContainerID="b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872" Namespace="calico-apiserver" Pod="calico-apiserver-7758dc7779-trrsg" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--trrsg-eth0" Oct 8 19:49:11.295796 containerd[1594]: 2024-10-08 19:49:11.271 [INFO][4893] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872" Namespace="calico-apiserver" Pod="calico-apiserver-7758dc7779-trrsg" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--trrsg-eth0" Oct 8 19:49:11.295796 containerd[1594]: 2024-10-08 19:49:11.274 [INFO][4893] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872" Namespace="calico-apiserver" Pod="calico-apiserver-7758dc7779-trrsg" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--trrsg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--trrsg-eth0", GenerateName:"calico-apiserver-7758dc7779-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f9ff4dc-105e-41a1-a644-db16e2e42690", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 49, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7758dc7779", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-d-c7549a9f5e", ContainerID:"b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872", Pod:"calico-apiserver-7758dc7779-trrsg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali873916d8a92", MAC:"b2:12:49:9f:af:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:11.295796 containerd[1594]: 2024-10-08 19:49:11.291 [INFO][4893] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872" Namespace="calico-apiserver" Pod="calico-apiserver-7758dc7779-trrsg" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--trrsg-eth0" Oct 8 19:49:11.332663 containerd[1594]: time="2024-10-08T19:49:11.330878905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:49:11.332663 containerd[1594]: time="2024-10-08T19:49:11.332319955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:49:11.332663 containerd[1594]: time="2024-10-08T19:49:11.332400078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:49:11.332663 containerd[1594]: time="2024-10-08T19:49:11.332441279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:49:11.352949 systemd-networkd[1253]: caliae3a1af4d0d: Link UP Oct 8 19:49:11.353902 systemd-networkd[1253]: caliae3a1af4d0d: Gained carrier Oct 8 19:49:11.383468 containerd[1594]: 2024-10-08 19:49:11.154 [INFO][4903] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--ngz5t-eth0 calico-apiserver-7758dc7779- calico-apiserver 80ef101a-4cba-4744-a9c8-69d2e6d0c7f9 852 0 2024-10-08 19:49:09 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7758dc7779 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975-2-2-d-c7549a9f5e calico-apiserver-7758dc7779-ngz5t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliae3a1af4d0d [] []}} ContainerID="d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e" Namespace="calico-apiserver" Pod="calico-apiserver-7758dc7779-ngz5t" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--ngz5t-" Oct 8 19:49:11.383468 containerd[1594]: 2024-10-08 19:49:11.154 [INFO][4903] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e" Namespace="calico-apiserver" Pod="calico-apiserver-7758dc7779-ngz5t" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--ngz5t-eth0" Oct 8 19:49:11.383468 containerd[1594]: 2024-10-08 19:49:11.213 [INFO][4924] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e" HandleID="k8s-pod-network.d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--ngz5t-eth0" Oct 8 19:49:11.383468 containerd[1594]: 2024-10-08 19:49:11.235 [INFO][4924] ipam_plugin.go 270: Auto assigning IP ContainerID="d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e" HandleID="k8s-pod-network.d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--ngz5t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000114a10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975-2-2-d-c7549a9f5e", "pod":"calico-apiserver-7758dc7779-ngz5t", "timestamp":"2024-10-08 19:49:11.213943724 +0000 UTC"}, Hostname:"ci-3975-2-2-d-c7549a9f5e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:49:11.383468 containerd[1594]: 2024-10-08 19:49:11.235 [INFO][4924] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:11.383468 containerd[1594]: 2024-10-08 19:49:11.259 [INFO][4924] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:49:11.383468 containerd[1594]: 2024-10-08 19:49:11.259 [INFO][4924] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-d-c7549a9f5e' Oct 8 19:49:11.383468 containerd[1594]: 2024-10-08 19:49:11.262 [INFO][4924] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:49:11.383468 containerd[1594]: 2024-10-08 19:49:11.274 [INFO][4924] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:49:11.383468 containerd[1594]: 2024-10-08 19:49:11.293 [INFO][4924] ipam.go 489: Trying affinity for 192.168.19.128/26 host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:49:11.383468 containerd[1594]: 2024-10-08 19:49:11.300 [INFO][4924] ipam.go 155: Attempting to load block cidr=192.168.19.128/26 host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:49:11.383468 containerd[1594]: 2024-10-08 19:49:11.304 [INFO][4924] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:49:11.383468 containerd[1594]: 2024-10-08 19:49:11.304 [INFO][4924] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:49:11.383468 containerd[1594]: 2024-10-08 19:49:11.307 [INFO][4924] ipam.go 1685: Creating new handle: k8s-pod-network.d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e Oct 8 19:49:11.383468 containerd[1594]: 2024-10-08 19:49:11.313 [INFO][4924] ipam.go 1203: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:49:11.383468 containerd[1594]: 2024-10-08 19:49:11.341 [INFO][4924] ipam.go 1216: Successfully claimed IPs: [192.168.19.134/26] 
block=192.168.19.128/26 handle="k8s-pod-network.d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:49:11.383468 containerd[1594]: 2024-10-08 19:49:11.341 [INFO][4924] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.134/26] handle="k8s-pod-network.d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e" host="ci-3975-2-2-d-c7549a9f5e" Oct 8 19:49:11.383468 containerd[1594]: 2024-10-08 19:49:11.342 [INFO][4924] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:11.383468 containerd[1594]: 2024-10-08 19:49:11.342 [INFO][4924] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.19.134/26] IPv6=[] ContainerID="d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e" HandleID="k8s-pod-network.d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--ngz5t-eth0" Oct 8 19:49:11.384064 containerd[1594]: 2024-10-08 19:49:11.346 [INFO][4903] k8s.go 386: Populated endpoint ContainerID="d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e" Namespace="calico-apiserver" Pod="calico-apiserver-7758dc7779-ngz5t" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--ngz5t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--ngz5t-eth0", GenerateName:"calico-apiserver-7758dc7779-", Namespace:"calico-apiserver", SelfLink:"", UID:"80ef101a-4cba-4744-a9c8-69d2e6d0c7f9", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 49, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7758dc7779", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-d-c7549a9f5e", ContainerID:"", Pod:"calico-apiserver-7758dc7779-ngz5t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae3a1af4d0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:11.384064 containerd[1594]: 2024-10-08 19:49:11.346 [INFO][4903] k8s.go 387: Calico CNI using IPs: [192.168.19.134/32] ContainerID="d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e" Namespace="calico-apiserver" Pod="calico-apiserver-7758dc7779-ngz5t" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--ngz5t-eth0" Oct 8 19:49:11.384064 containerd[1594]: 2024-10-08 19:49:11.347 [INFO][4903] dataplane_linux.go 68: Setting the host side veth name to caliae3a1af4d0d ContainerID="d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e" Namespace="calico-apiserver" Pod="calico-apiserver-7758dc7779-ngz5t" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--ngz5t-eth0" Oct 8 19:49:11.384064 containerd[1594]: 2024-10-08 19:49:11.352 [INFO][4903] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e" Namespace="calico-apiserver" Pod="calico-apiserver-7758dc7779-ngz5t" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--ngz5t-eth0" Oct 8 19:49:11.384064 containerd[1594]: 2024-10-08 
19:49:11.355 [INFO][4903] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e" Namespace="calico-apiserver" Pod="calico-apiserver-7758dc7779-ngz5t" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--ngz5t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--ngz5t-eth0", GenerateName:"calico-apiserver-7758dc7779-", Namespace:"calico-apiserver", SelfLink:"", UID:"80ef101a-4cba-4744-a9c8-69d2e6d0c7f9", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 49, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7758dc7779", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-d-c7549a9f5e", ContainerID:"d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e", Pod:"calico-apiserver-7758dc7779-ngz5t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae3a1af4d0d", MAC:"e2:fc:aa:60:dd:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:11.384064 containerd[1594]: 2024-10-08 19:49:11.369 [INFO][4903] k8s.go 
500: Wrote updated endpoint to datastore ContainerID="d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e" Namespace="calico-apiserver" Pod="calico-apiserver-7758dc7779-ngz5t" WorkloadEndpoint="ci--3975--2--2--d--c7549a9f5e-k8s-calico--apiserver--7758dc7779--ngz5t-eth0" Oct 8 19:49:11.413726 containerd[1594]: time="2024-10-08T19:49:11.412131007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:49:11.413726 containerd[1594]: time="2024-10-08T19:49:11.412190209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:49:11.413726 containerd[1594]: time="2024-10-08T19:49:11.412211690Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:49:11.413726 containerd[1594]: time="2024-10-08T19:49:11.412221210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:49:11.432432 containerd[1594]: time="2024-10-08T19:49:11.432390711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7758dc7779-trrsg,Uid:2f9ff4dc-105e-41a1-a644-db16e2e42690,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872\"" Oct 8 19:49:11.438035 containerd[1594]: time="2024-10-08T19:49:11.436993950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 19:49:11.470281 containerd[1594]: time="2024-10-08T19:49:11.470207304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7758dc7779-ngz5t,Uid:80ef101a-4cba-4744-a9c8-69d2e6d0c7f9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e\"" Oct 8 19:49:12.953561 systemd-networkd[1253]: cali873916d8a92: Gained IPv6LL Oct 8 19:49:13.147385 systemd-networkd[1253]: caliae3a1af4d0d: Gained IPv6LL Oct 8 19:49:13.686812 containerd[1594]: time="2024-10-08T19:49:13.686480047Z" level=info msg="StopPodSandbox for \"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\"" Oct 8 19:49:13.844703 containerd[1594]: 2024-10-08 19:49:13.733 [WARNING][5057] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0", GenerateName:"calico-kube-controllers-75d69567cd-", Namespace:"calico-system", SelfLink:"", UID:"bca11565-40aa-4871-8cd0-05721317a01c", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75d69567cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-d-c7549a9f5e", ContainerID:"468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823", Pod:"calico-kube-controllers-75d69567cd-r5pmq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali134de79f57f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:13.844703 containerd[1594]: 2024-10-08 19:49:13.733 [INFO][5057] k8s.go 608: Cleaning up netns ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Oct 8 19:49:13.844703 containerd[1594]: 2024-10-08 19:49:13.733 [INFO][5057] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" iface="eth0" netns="" Oct 8 19:49:13.844703 containerd[1594]: 2024-10-08 19:49:13.733 [INFO][5057] k8s.go 615: Releasing IP address(es) ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Oct 8 19:49:13.844703 containerd[1594]: 2024-10-08 19:49:13.733 [INFO][5057] utils.go 188: Calico CNI releasing IP address ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Oct 8 19:49:13.844703 containerd[1594]: 2024-10-08 19:49:13.778 [INFO][5063] ipam_plugin.go 417: Releasing address using handleID ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" HandleID="k8s-pod-network.f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0" Oct 8 19:49:13.844703 containerd[1594]: 2024-10-08 19:49:13.778 [INFO][5063] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:13.844703 containerd[1594]: 2024-10-08 19:49:13.778 [INFO][5063] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:49:13.844703 containerd[1594]: 2024-10-08 19:49:13.823 [WARNING][5063] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" HandleID="k8s-pod-network.f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0" Oct 8 19:49:13.844703 containerd[1594]: 2024-10-08 19:49:13.823 [INFO][5063] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" HandleID="k8s-pod-network.f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0" Oct 8 19:49:13.844703 containerd[1594]: 2024-10-08 19:49:13.836 [INFO][5063] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:13.844703 containerd[1594]: 2024-10-08 19:49:13.841 [INFO][5057] k8s.go 621: Teardown processing complete. ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Oct 8 19:49:13.846454 containerd[1594]: time="2024-10-08T19:49:13.845357526Z" level=info msg="TearDown network for sandbox \"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\" successfully" Oct 8 19:49:13.846454 containerd[1594]: time="2024-10-08T19:49:13.845388127Z" level=info msg="StopPodSandbox for \"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\" returns successfully" Oct 8 19:49:13.849582 containerd[1594]: time="2024-10-08T19:49:13.849535511Z" level=info msg="RemovePodSandbox for \"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\"" Oct 8 19:49:13.869225 containerd[1594]: time="2024-10-08T19:49:13.849583113Z" level=info msg="Forcibly stopping sandbox \"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\"" Oct 8 19:49:14.037788 containerd[1594]: 2024-10-08 19:49:13.951 [WARNING][5082] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0", GenerateName:"calico-kube-controllers-75d69567cd-", Namespace:"calico-system", SelfLink:"", UID:"bca11565-40aa-4871-8cd0-05721317a01c", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75d69567cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-d-c7549a9f5e", ContainerID:"468fb185302c68ebf69f8383b575c978381ee39cb649b1a9c6deccef35fdc823", Pod:"calico-kube-controllers-75d69567cd-r5pmq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali134de79f57f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:14.037788 containerd[1594]: 2024-10-08 19:49:13.951 [INFO][5082] k8s.go 608: Cleaning up netns ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Oct 8 19:49:14.037788 containerd[1594]: 2024-10-08 19:49:13.951 [INFO][5082] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" iface="eth0" netns="" Oct 8 19:49:14.037788 containerd[1594]: 2024-10-08 19:49:13.952 [INFO][5082] k8s.go 615: Releasing IP address(es) ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Oct 8 19:49:14.037788 containerd[1594]: 2024-10-08 19:49:13.952 [INFO][5082] utils.go 188: Calico CNI releasing IP address ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Oct 8 19:49:14.037788 containerd[1594]: 2024-10-08 19:49:14.014 [INFO][5092] ipam_plugin.go 417: Releasing address using handleID ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" HandleID="k8s-pod-network.f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0" Oct 8 19:49:14.037788 containerd[1594]: 2024-10-08 19:49:14.014 [INFO][5092] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:14.037788 containerd[1594]: 2024-10-08 19:49:14.014 [INFO][5092] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:49:14.037788 containerd[1594]: 2024-10-08 19:49:14.026 [WARNING][5092] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" HandleID="k8s-pod-network.f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0" Oct 8 19:49:14.037788 containerd[1594]: 2024-10-08 19:49:14.027 [INFO][5092] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" HandleID="k8s-pod-network.f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-calico--kube--controllers--75d69567cd--r5pmq-eth0" Oct 8 19:49:14.037788 containerd[1594]: 2024-10-08 19:49:14.030 [INFO][5092] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:14.037788 containerd[1594]: 2024-10-08 19:49:14.033 [INFO][5082] k8s.go 621: Teardown processing complete. ContainerID="f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440" Oct 8 19:49:14.038259 containerd[1594]: time="2024-10-08T19:49:14.037796131Z" level=info msg="TearDown network for sandbox \"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\" successfully" Oct 8 19:49:14.060999 containerd[1594]: time="2024-10-08T19:49:14.060833452Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:49:14.060999 containerd[1594]: time="2024-10-08T19:49:14.060903934Z" level=info msg="RemovePodSandbox \"f276ee4da6955644d83eb7edfc12bca9a45a744f708a6a4d3a8e221e0d86a440\" returns successfully" Oct 8 19:49:14.061433 containerd[1594]: time="2024-10-08T19:49:14.061405392Z" level=info msg="StopPodSandbox for \"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\"" Oct 8 19:49:14.169823 containerd[1594]: 2024-10-08 19:49:14.104 [WARNING][5111] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"21948a20-d7e6-467b-a59a-37617ee3726e", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-d-c7549a9f5e", ContainerID:"e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536", Pod:"coredns-76f75df574-s4sm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali52e0fef1956", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:14.169823 containerd[1594]: 2024-10-08 19:49:14.105 [INFO][5111] k8s.go 608: Cleaning up netns ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Oct 8 19:49:14.169823 containerd[1594]: 2024-10-08 19:49:14.105 [INFO][5111] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" iface="eth0" netns="" Oct 8 19:49:14.169823 containerd[1594]: 2024-10-08 19:49:14.105 [INFO][5111] k8s.go 615: Releasing IP address(es) ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Oct 8 19:49:14.169823 containerd[1594]: 2024-10-08 19:49:14.105 [INFO][5111] utils.go 188: Calico CNI releasing IP address ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Oct 8 19:49:14.169823 containerd[1594]: 2024-10-08 19:49:14.143 [INFO][5117] ipam_plugin.go 417: Releasing address using handleID ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" HandleID="k8s-pod-network.0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0" Oct 8 19:49:14.169823 containerd[1594]: 2024-10-08 19:49:14.143 [INFO][5117] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:14.169823 containerd[1594]: 2024-10-08 19:49:14.143 [INFO][5117] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:49:14.169823 containerd[1594]: 2024-10-08 19:49:14.159 [WARNING][5117] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" HandleID="k8s-pod-network.0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0" Oct 8 19:49:14.169823 containerd[1594]: 2024-10-08 19:49:14.159 [INFO][5117] ipam_plugin.go 445: Releasing address using workloadID ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" HandleID="k8s-pod-network.0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0" Oct 8 19:49:14.169823 containerd[1594]: 2024-10-08 19:49:14.163 [INFO][5117] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:14.169823 containerd[1594]: 2024-10-08 19:49:14.166 [INFO][5111] k8s.go 621: Teardown processing complete. 
ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Oct 8 19:49:14.169823 containerd[1594]: time="2024-10-08T19:49:14.169372983Z" level=info msg="TearDown network for sandbox \"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\" successfully" Oct 8 19:49:14.169823 containerd[1594]: time="2024-10-08T19:49:14.169497387Z" level=info msg="StopPodSandbox for \"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\" returns successfully" Oct 8 19:49:14.170789 containerd[1594]: time="2024-10-08T19:49:14.170744390Z" level=info msg="RemovePodSandbox for \"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\"" Oct 8 19:49:14.170916 containerd[1594]: time="2024-10-08T19:49:14.170798152Z" level=info msg="Forcibly stopping sandbox \"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\"" Oct 8 19:49:14.293737 containerd[1594]: 2024-10-08 19:49:14.227 [WARNING][5135] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"21948a20-d7e6-467b-a59a-37617ee3726e", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-d-c7549a9f5e", ContainerID:"e32d8539b8bf80239fa3a3957f13eb8c981b67202d25988c71b3920851538536", Pod:"coredns-76f75df574-s4sm5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali52e0fef1956", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:14.293737 containerd[1594]: 2024-10-08 19:49:14.227 [INFO][5135] k8s.go 
608: Cleaning up netns ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Oct 8 19:49:14.293737 containerd[1594]: 2024-10-08 19:49:14.227 [INFO][5135] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" iface="eth0" netns="" Oct 8 19:49:14.293737 containerd[1594]: 2024-10-08 19:49:14.227 [INFO][5135] k8s.go 615: Releasing IP address(es) ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Oct 8 19:49:14.293737 containerd[1594]: 2024-10-08 19:49:14.227 [INFO][5135] utils.go 188: Calico CNI releasing IP address ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Oct 8 19:49:14.293737 containerd[1594]: 2024-10-08 19:49:14.277 [INFO][5141] ipam_plugin.go 417: Releasing address using handleID ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" HandleID="k8s-pod-network.0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0" Oct 8 19:49:14.293737 containerd[1594]: 2024-10-08 19:49:14.277 [INFO][5141] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:14.293737 containerd[1594]: 2024-10-08 19:49:14.277 [INFO][5141] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:49:14.293737 containerd[1594]: 2024-10-08 19:49:14.288 [WARNING][5141] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" HandleID="k8s-pod-network.0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0" Oct 8 19:49:14.293737 containerd[1594]: 2024-10-08 19:49:14.288 [INFO][5141] ipam_plugin.go 445: Releasing address using workloadID ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" HandleID="k8s-pod-network.0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--s4sm5-eth0" Oct 8 19:49:14.293737 containerd[1594]: 2024-10-08 19:49:14.290 [INFO][5141] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:14.293737 containerd[1594]: 2024-10-08 19:49:14.292 [INFO][5135] k8s.go 621: Teardown processing complete. ContainerID="0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591" Oct 8 19:49:14.294123 containerd[1594]: time="2024-10-08T19:49:14.293715303Z" level=info msg="TearDown network for sandbox \"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\" successfully" Oct 8 19:49:14.299466 containerd[1594]: time="2024-10-08T19:49:14.299009687Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:49:14.299466 containerd[1594]: time="2024-10-08T19:49:14.299086929Z" level=info msg="RemovePodSandbox \"0c6a5972028fda18e59a41f62ace2582ff88caa7981b25e246f0c66d3bdcc591\" returns successfully" Oct 8 19:49:14.300309 containerd[1594]: time="2024-10-08T19:49:14.300058523Z" level=info msg="StopPodSandbox for \"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\"" Oct 8 19:49:14.412281 containerd[1594]: 2024-10-08 19:49:14.350 [WARNING][5159] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7e5da717-f897-4c9e-a583-c20fa5c37108", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-d-c7549a9f5e", ContainerID:"4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e", Pod:"csi-node-driver-2494q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali19e2135d437", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:14.412281 containerd[1594]: 2024-10-08 19:49:14.351 [INFO][5159] k8s.go 608: Cleaning up netns ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Oct 8 19:49:14.412281 containerd[1594]: 2024-10-08 19:49:14.351 [INFO][5159] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" iface="eth0" netns="" Oct 8 19:49:14.412281 containerd[1594]: 2024-10-08 19:49:14.351 [INFO][5159] k8s.go 615: Releasing IP address(es) ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Oct 8 19:49:14.412281 containerd[1594]: 2024-10-08 19:49:14.351 [INFO][5159] utils.go 188: Calico CNI releasing IP address ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Oct 8 19:49:14.412281 containerd[1594]: 2024-10-08 19:49:14.394 [INFO][5166] ipam_plugin.go 417: Releasing address using handleID ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" HandleID="k8s-pod-network.26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0" Oct 8 19:49:14.412281 containerd[1594]: 2024-10-08 19:49:14.395 [INFO][5166] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:14.412281 containerd[1594]: 2024-10-08 19:49:14.395 [INFO][5166] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:49:14.412281 containerd[1594]: 2024-10-08 19:49:14.406 [WARNING][5166] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" HandleID="k8s-pod-network.26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0" Oct 8 19:49:14.412281 containerd[1594]: 2024-10-08 19:49:14.406 [INFO][5166] ipam_plugin.go 445: Releasing address using workloadID ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" HandleID="k8s-pod-network.26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0" Oct 8 19:49:14.412281 containerd[1594]: 2024-10-08 19:49:14.408 [INFO][5166] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:14.412281 containerd[1594]: 2024-10-08 19:49:14.410 [INFO][5159] k8s.go 621: Teardown processing complete. ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Oct 8 19:49:14.412281 containerd[1594]: time="2024-10-08T19:49:14.412159658Z" level=info msg="TearDown network for sandbox \"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\" successfully" Oct 8 19:49:14.412281 containerd[1594]: time="2024-10-08T19:49:14.412194019Z" level=info msg="StopPodSandbox for \"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\" returns successfully" Oct 8 19:49:14.412869 containerd[1594]: time="2024-10-08T19:49:14.412640074Z" level=info msg="RemovePodSandbox for \"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\"" Oct 8 19:49:14.412869 containerd[1594]: time="2024-10-08T19:49:14.412796520Z" level=info msg="Forcibly stopping sandbox \"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\"" Oct 8 19:49:14.543364 containerd[1594]: 2024-10-08 19:49:14.483 [WARNING][5184] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7e5da717-f897-4c9e-a583-c20fa5c37108", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-d-c7549a9f5e", ContainerID:"4f5f8fa9cc4c743ca584427319ccf3e48c215465edd9529f6eae3bd875ff245e", Pod:"csi-node-driver-2494q", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali19e2135d437", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:14.543364 containerd[1594]: 2024-10-08 19:49:14.484 [INFO][5184] k8s.go 608: Cleaning up netns ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Oct 8 19:49:14.543364 containerd[1594]: 2024-10-08 19:49:14.484 [INFO][5184] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" iface="eth0" netns="" Oct 8 19:49:14.543364 containerd[1594]: 2024-10-08 19:49:14.484 [INFO][5184] k8s.go 615: Releasing IP address(es) ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Oct 8 19:49:14.543364 containerd[1594]: 2024-10-08 19:49:14.484 [INFO][5184] utils.go 188: Calico CNI releasing IP address ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Oct 8 19:49:14.543364 containerd[1594]: 2024-10-08 19:49:14.512 [INFO][5190] ipam_plugin.go 417: Releasing address using handleID ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" HandleID="k8s-pod-network.26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0" Oct 8 19:49:14.543364 containerd[1594]: 2024-10-08 19:49:14.512 [INFO][5190] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:14.543364 containerd[1594]: 2024-10-08 19:49:14.512 [INFO][5190] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:49:14.543364 containerd[1594]: 2024-10-08 19:49:14.526 [WARNING][5190] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" HandleID="k8s-pod-network.26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0" Oct 8 19:49:14.543364 containerd[1594]: 2024-10-08 19:49:14.526 [INFO][5190] ipam_plugin.go 445: Releasing address using workloadID ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" HandleID="k8s-pod-network.26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-csi--node--driver--2494q-eth0" Oct 8 19:49:14.543364 containerd[1594]: 2024-10-08 19:49:14.530 [INFO][5190] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:49:14.543364 containerd[1594]: 2024-10-08 19:49:14.537 [INFO][5184] k8s.go 621: Teardown processing complete. ContainerID="26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09" Oct 8 19:49:14.543364 containerd[1594]: time="2024-10-08T19:49:14.542156934Z" level=info msg="TearDown network for sandbox \"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\" successfully" Oct 8 19:49:14.554200 containerd[1594]: time="2024-10-08T19:49:14.553456847Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:49:14.554200 containerd[1594]: time="2024-10-08T19:49:14.553542290Z" level=info msg="RemovePodSandbox \"26d97de87d8392a075fbe18b0467e0901871d9fd6a12f7e514fbf32168e44b09\" returns successfully" Oct 8 19:49:14.554200 containerd[1594]: time="2024-10-08T19:49:14.554089029Z" level=info msg="StopPodSandbox for \"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\"" Oct 8 19:49:14.716548 containerd[1594]: 2024-10-08 19:49:14.622 [WARNING][5208] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e15cac1e-985b-429c-869f-52ca0b633720", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-d-c7549a9f5e", ContainerID:"c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59", Pod:"coredns-76f75df574-vwmgr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71f0d4257f5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:49:14.716548 containerd[1594]: 2024-10-08 19:49:14.622 [INFO][5208] k8s.go 608: Cleaning up netns ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" Oct 8 19:49:14.716548 containerd[1594]: 2024-10-08 19:49:14.622 [INFO][5208] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" iface="eth0" netns="" Oct 8 19:49:14.716548 containerd[1594]: 2024-10-08 19:49:14.622 [INFO][5208] k8s.go 615: Releasing IP address(es) ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" Oct 8 19:49:14.716548 containerd[1594]: 2024-10-08 19:49:14.622 [INFO][5208] utils.go 188: Calico CNI releasing IP address ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" Oct 8 19:49:14.716548 containerd[1594]: 2024-10-08 19:49:14.693 [INFO][5214] ipam_plugin.go 417: Releasing address using handleID ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" HandleID="k8s-pod-network.e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0" Oct 8 19:49:14.716548 containerd[1594]: 2024-10-08 19:49:14.693 [INFO][5214] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:49:14.716548 containerd[1594]: 2024-10-08 19:49:14.693 [INFO][5214] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:49:14.716548 containerd[1594]: 2024-10-08 19:49:14.706 [WARNING][5214] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" HandleID="k8s-pod-network.e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0"
Oct 8 19:49:14.716548 containerd[1594]: 2024-10-08 19:49:14.706 [INFO][5214] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" HandleID="k8s-pod-network.e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0"
Oct 8 19:49:14.716548 containerd[1594]: 2024-10-08 19:49:14.710 [INFO][5214] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:49:14.716548 containerd[1594]: 2024-10-08 19:49:14.712 [INFO][5208] k8s.go 621: Teardown processing complete. ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0"
Oct 8 19:49:14.718638 containerd[1594]: time="2024-10-08T19:49:14.716613835Z" level=info msg="TearDown network for sandbox \"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\" successfully"
Oct 8 19:49:14.718638 containerd[1594]: time="2024-10-08T19:49:14.716640836Z" level=info msg="StopPodSandbox for \"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\" returns successfully"
Oct 8 19:49:14.718638 containerd[1594]: time="2024-10-08T19:49:14.717665552Z" level=info msg="RemovePodSandbox for \"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\""
Oct 8 19:49:14.718638 containerd[1594]: time="2024-10-08T19:49:14.717700153Z" level=info msg="Forcibly stopping sandbox \"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\""
Oct 8 19:49:14.829702 containerd[1594]: 2024-10-08 19:49:14.780 [WARNING][5232] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e15cac1e-985b-429c-869f-52ca0b633720", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 48, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-d-c7549a9f5e", ContainerID:"c25921a3b28d8e5042f4e960c033900c0b77218ee9a87eeb6fa4aba601d39d59", Pod:"coredns-76f75df574-vwmgr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali71f0d4257f5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:49:14.829702 containerd[1594]: 2024-10-08 19:49:14.783 [INFO][5232] k8s.go 608: Cleaning up netns ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0"
Oct 8 19:49:14.829702 containerd[1594]: 2024-10-08 19:49:14.783 [INFO][5232] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" iface="eth0" netns=""
Oct 8 19:49:14.829702 containerd[1594]: 2024-10-08 19:49:14.783 [INFO][5232] k8s.go 615: Releasing IP address(es) ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0"
Oct 8 19:49:14.829702 containerd[1594]: 2024-10-08 19:49:14.783 [INFO][5232] utils.go 188: Calico CNI releasing IP address ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0"
Oct 8 19:49:14.829702 containerd[1594]: 2024-10-08 19:49:14.813 [INFO][5238] ipam_plugin.go 417: Releasing address using handleID ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" HandleID="k8s-pod-network.e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0"
Oct 8 19:49:14.829702 containerd[1594]: 2024-10-08 19:49:14.813 [INFO][5238] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:49:14.829702 containerd[1594]: 2024-10-08 19:49:14.813 [INFO][5238] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:49:14.829702 containerd[1594]: 2024-10-08 19:49:14.824 [WARNING][5238] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" HandleID="k8s-pod-network.e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0"
Oct 8 19:49:14.829702 containerd[1594]: 2024-10-08 19:49:14.824 [INFO][5238] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" HandleID="k8s-pod-network.e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0" Workload="ci--3975--2--2--d--c7549a9f5e-k8s-coredns--76f75df574--vwmgr-eth0"
Oct 8 19:49:14.829702 containerd[1594]: 2024-10-08 19:49:14.826 [INFO][5238] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:49:14.829702 containerd[1594]: 2024-10-08 19:49:14.828 [INFO][5232] k8s.go 621: Teardown processing complete. ContainerID="e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0"
Oct 8 19:49:14.829702 containerd[1594]: time="2024-10-08T19:49:14.829661443Z" level=info msg="TearDown network for sandbox \"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\" successfully"
Oct 8 19:49:14.835642 containerd[1594]: time="2024-10-08T19:49:14.835531967Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 8 19:49:14.835788 containerd[1594]: time="2024-10-08T19:49:14.835602609Z" level=info msg="RemovePodSandbox \"e127597bb433cc81fe3b4998865177297bb268dbb85caf894ab6807662f463f0\" returns successfully"
Oct 8 19:49:14.880698 containerd[1594]: time="2024-10-08T19:49:14.880635334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:14.881996 containerd[1594]: time="2024-10-08T19:49:14.881906138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=37849884"
Oct 8 19:49:14.883220 containerd[1594]: time="2024-10-08T19:49:14.883130901Z" level=info msg="ImageCreate event name:\"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:14.885797 containerd[1594]: time="2024-10-08T19:49:14.885459101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:14.886735 containerd[1594]: time="2024-10-08T19:49:14.886690504Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 3.449426825s"
Oct 8 19:49:14.886735 containerd[1594]: time="2024-10-08T19:49:14.886732786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\""
Oct 8 19:49:14.892772 containerd[1594]: time="2024-10-08T19:49:14.890391713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Oct 8 19:49:14.893342 containerd[1594]: time="2024-10-08T19:49:14.893306094Z" level=info msg="CreateContainer within sandbox \"b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Oct 8 19:49:14.916163 containerd[1594]: time="2024-10-08T19:49:14.916103726Z" level=info msg="CreateContainer within sandbox \"b58c8645c23e4c94f4702b00d80bb378a56591b398daa2c91cfcc34cb097f872\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9d369322e2938af0ed91c7b450bb70df0b7c2bd3f0b67eb1e19d70cc17e4382f\""
Oct 8 19:49:14.919003 containerd[1594]: time="2024-10-08T19:49:14.917565097Z" level=info msg="StartContainer for \"9d369322e2938af0ed91c7b450bb70df0b7c2bd3f0b67eb1e19d70cc17e4382f\""
Oct 8 19:49:15.062503 containerd[1594]: time="2024-10-08T19:49:15.062267844Z" level=info msg="StartContainer for \"9d369322e2938af0ed91c7b450bb70df0b7c2bd3f0b67eb1e19d70cc17e4382f\" returns successfully"
Oct 8 19:49:15.300959 containerd[1594]: time="2024-10-08T19:49:15.300525443Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:49:15.302585 containerd[1594]: time="2024-10-08T19:49:15.302498592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77"
Oct 8 19:49:15.304356 containerd[1594]: time="2024-10-08T19:49:15.304311775Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 413.87034ms"
Oct 8 19:49:15.304356 containerd[1594]: time="2024-10-08T19:49:15.304353896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\""
Oct 8 19:49:15.307485 containerd[1594]: time="2024-10-08T19:49:15.307433603Z" level=info msg="CreateContainer within sandbox \"d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Oct 8 19:49:15.333347 containerd[1594]: time="2024-10-08T19:49:15.333210179Z" level=info msg="CreateContainer within sandbox \"d1ecc3f32b350f8b69e5fae33ff904f35ee2ca165a1eb891b101648a520d701e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"714c1c4ac9119b1e47cad9dbeb30e334ffd31b22e33093fddc0bddc0c06fe2f0\""
Oct 8 19:49:15.339141 containerd[1594]: time="2024-10-08T19:49:15.338703010Z" level=info msg="StartContainer for \"714c1c4ac9119b1e47cad9dbeb30e334ffd31b22e33093fddc0bddc0c06fe2f0\""
Oct 8 19:49:15.434453 containerd[1594]: time="2024-10-08T19:49:15.434256930Z" level=info msg="StartContainer for \"714c1c4ac9119b1e47cad9dbeb30e334ffd31b22e33093fddc0bddc0c06fe2f0\" returns successfully"
Oct 8 19:49:16.104988 kubelet[2985]: I1008 19:49:16.102032 2985 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7758dc7779-ngz5t" podStartSLOduration=3.269325071 podStartE2EDuration="7.101598718s" podCreationTimestamp="2024-10-08 19:49:09 +0000 UTC" firstStartedPulling="2024-10-08 19:49:11.472556466 +0000 UTC m=+57.959117649" lastFinishedPulling="2024-10-08 19:49:15.304830033 +0000 UTC m=+61.791391296" observedRunningTime="2024-10-08 19:49:16.085008861 +0000 UTC m=+62.571570084" watchObservedRunningTime="2024-10-08 19:49:16.101598718 +0000 UTC m=+62.588159941"
Oct 8 19:49:16.104988 kubelet[2985]: I1008 19:49:16.102557 2985 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7758dc7779-trrsg" podStartSLOduration=3.649190589 podStartE2EDuration="7.10253403s" podCreationTimestamp="2024-10-08 19:49:09 +0000 UTC" firstStartedPulling="2024-10-08 19:49:11.433962085 +0000 UTC m=+57.920523268" lastFinishedPulling="2024-10-08 19:49:14.887305446 +0000 UTC m=+61.373866709" observedRunningTime="2024-10-08 19:49:16.101521315 +0000 UTC m=+62.588082538" watchObservedRunningTime="2024-10-08 19:49:16.10253403 +0000 UTC m=+62.589095213"
Oct 8 19:49:17.071292 kubelet[2985]: I1008 19:49:17.071239 2985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 8 19:49:17.071678 kubelet[2985]: I1008 19:49:17.071653 2985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 8 19:49:18.225782 systemd[1]: Started sshd@9-168.119.51.132:22-194.169.175.38:27986.service - OpenSSH per-connection server daemon (194.169.175.38:27986).
Oct 8 19:49:18.913767 sshd[5348]: Invalid user pi from 194.169.175.38 port 27986
Oct 8 19:49:19.374899 sshd[5348]: Connection closed by invalid user pi 194.169.175.38 port 27986 [preauth]
Oct 8 19:49:19.377081 systemd[1]: sshd@9-168.119.51.132:22-194.169.175.38:27986.service: Deactivated successfully.
Oct 8 19:49:41.502567 kubelet[2985]: I1008 19:49:41.502425 2985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 8 19:49:43.779666 kubelet[2985]: I1008 19:49:43.779313 2985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 8 19:50:53.860982 systemd[1]: Started sshd@10-168.119.51.132:22-191.35.128.135:38196.service - OpenSSH per-connection server daemon (191.35.128.135:38196).
Oct 8 19:50:55.303552 sshd[5590]: Received disconnect from 191.35.128.135 port 38196:11: Bye Bye [preauth]
Oct 8 19:50:55.303552 sshd[5590]: Disconnected from authenticating user root 191.35.128.135 port 38196 [preauth]
Oct 8 19:50:55.306544 systemd[1]: sshd@10-168.119.51.132:22-191.35.128.135:38196.service: Deactivated successfully.
Oct 8 19:51:15.509668 systemd[1]: Started sshd@11-168.119.51.132:22-60.50.226.100:52576.service - OpenSSH per-connection server daemon (60.50.226.100:52576).
Oct 8 19:51:16.997469 sshd[5671]: Received disconnect from 60.50.226.100 port 52576:11: Bye Bye [preauth]
Oct 8 19:51:16.997469 sshd[5671]: Disconnected from authenticating user root 60.50.226.100 port 52576 [preauth]
Oct 8 19:51:17.001958 systemd[1]: sshd@11-168.119.51.132:22-60.50.226.100:52576.service: Deactivated successfully.
Oct 8 19:52:29.592555 systemd[1]: Started sshd@12-168.119.51.132:22-183.249.84.29:36458.service - OpenSSH per-connection server daemon (183.249.84.29:36458).
Oct 8 19:52:56.126923 systemd[1]: run-containerd-runc-k8s.io-dcd8253e8fcdcc5c4271cdfd4bd08035678ab4055fd6241a20bc380109add37f-runc.tGPHDK.mount: Deactivated successfully.
Oct 8 19:53:04.231666 systemd[1]: Started sshd@13-168.119.51.132:22-139.178.89.65:47396.service - OpenSSH per-connection server daemon (139.178.89.65:47396).
Oct 8 19:53:05.205563 sshd[5931]: Accepted publickey for core from 139.178.89.65 port 47396 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:53:05.208613 sshd[5931]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:53:05.215138 systemd-logind[1566]: New session 8 of user core.
Oct 8 19:53:05.221658 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 8 19:53:05.969684 sshd[5931]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:05.976061 systemd[1]: sshd@13-168.119.51.132:22-139.178.89.65:47396.service: Deactivated successfully.
Oct 8 19:53:05.982032 systemd[1]: session-8.scope: Deactivated successfully.
Oct 8 19:53:05.983397 systemd-logind[1566]: Session 8 logged out. Waiting for processes to exit.
Oct 8 19:53:05.984501 systemd-logind[1566]: Removed session 8.
Oct 8 19:53:11.142837 systemd[1]: Started sshd@14-168.119.51.132:22-139.178.89.65:35872.service - OpenSSH per-connection server daemon (139.178.89.65:35872).
Oct 8 19:53:12.141035 sshd[5968]: Accepted publickey for core from 139.178.89.65 port 35872 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:53:12.143744 sshd[5968]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:53:12.150012 systemd-logind[1566]: New session 9 of user core.
Oct 8 19:53:12.154639 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 8 19:53:12.934695 sshd[5968]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:12.939524 systemd-logind[1566]: Session 9 logged out. Waiting for processes to exit.
Oct 8 19:53:12.939859 systemd[1]: sshd@14-168.119.51.132:22-139.178.89.65:35872.service: Deactivated successfully.
Oct 8 19:53:12.945083 systemd[1]: session-9.scope: Deactivated successfully.
Oct 8 19:53:12.946976 systemd-logind[1566]: Removed session 9.
Oct 8 19:53:13.278637 systemd[1]: Started sshd@15-168.119.51.132:22-202.157.186.116:54364.service - OpenSSH per-connection server daemon (202.157.186.116:54364).
Oct 8 19:53:14.505970 sshd[5983]: Received disconnect from 202.157.186.116 port 54364:11: Bye Bye [preauth]
Oct 8 19:53:14.505970 sshd[5983]: Disconnected from authenticating user root 202.157.186.116 port 54364 [preauth]
Oct 8 19:53:14.509815 systemd[1]: sshd@15-168.119.51.132:22-202.157.186.116:54364.service: Deactivated successfully.
Oct 8 19:53:18.105037 systemd[1]: Started sshd@16-168.119.51.132:22-139.178.89.65:53972.service - OpenSSH per-connection server daemon (139.178.89.65:53972).
Oct 8 19:53:18.857668 systemd[1]: Started sshd@17-168.119.51.132:22-191.35.128.135:50997.service - OpenSSH per-connection server daemon (191.35.128.135:50997).
Oct 8 19:53:19.126344 sshd[6014]: Accepted publickey for core from 139.178.89.65 port 53972 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:53:19.129547 sshd[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:53:19.135577 systemd-logind[1566]: New session 10 of user core.
Oct 8 19:53:19.146687 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 8 19:53:19.907131 sshd[6014]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:19.914521 systemd[1]: sshd@16-168.119.51.132:22-139.178.89.65:53972.service: Deactivated successfully.
Oct 8 19:53:19.922022 systemd[1]: session-10.scope: Deactivated successfully.
Oct 8 19:53:19.923714 systemd-logind[1566]: Session 10 logged out. Waiting for processes to exit.
Oct 8 19:53:19.924872 systemd-logind[1566]: Removed session 10.
Oct 8 19:53:20.065739 systemd[1]: Started sshd@18-168.119.51.132:22-139.178.89.65:53978.service - OpenSSH per-connection server daemon (139.178.89.65:53978).
Oct 8 19:53:20.268455 sshd[6016]: Received disconnect from 191.35.128.135 port 50997:11: Bye Bye [preauth]
Oct 8 19:53:20.268455 sshd[6016]: Disconnected from authenticating user root 191.35.128.135 port 50997 [preauth]
Oct 8 19:53:20.272620 systemd[1]: sshd@17-168.119.51.132:22-191.35.128.135:50997.service: Deactivated successfully.
Oct 8 19:53:21.053637 sshd[6031]: Accepted publickey for core from 139.178.89.65 port 53978 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:53:21.055795 sshd[6031]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:53:21.061982 systemd-logind[1566]: New session 11 of user core.
Oct 8 19:53:21.068724 systemd[1]: Started session-11.scope - Session 11 of User core.
Oct 8 19:53:21.834132 sshd[6031]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:21.839748 systemd-logind[1566]: Session 11 logged out. Waiting for processes to exit.
Oct 8 19:53:21.841167 systemd[1]: sshd@18-168.119.51.132:22-139.178.89.65:53978.service: Deactivated successfully.
Oct 8 19:53:21.849067 systemd[1]: session-11.scope: Deactivated successfully.
Oct 8 19:53:21.852151 systemd-logind[1566]: Removed session 11.
Oct 8 19:53:22.005715 systemd[1]: Started sshd@19-168.119.51.132:22-139.178.89.65:53982.service - OpenSSH per-connection server daemon (139.178.89.65:53982).
Oct 8 19:53:22.995435 sshd[6052]: Accepted publickey for core from 139.178.89.65 port 53982 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:53:22.997818 sshd[6052]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:53:23.003434 systemd-logind[1566]: New session 12 of user core.
Oct 8 19:53:23.008603 systemd[1]: Started session-12.scope - Session 12 of User core.
Oct 8 19:53:23.752632 sshd[6052]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:23.757209 systemd[1]: sshd@19-168.119.51.132:22-139.178.89.65:53982.service: Deactivated successfully.
Oct 8 19:53:23.764678 systemd[1]: session-12.scope: Deactivated successfully.
Oct 8 19:53:23.766339 systemd-logind[1566]: Session 12 logged out. Waiting for processes to exit.
Oct 8 19:53:23.768658 systemd-logind[1566]: Removed session 12.
Oct 8 19:53:27.282787 systemd[1]: Started sshd@20-168.119.51.132:22-60.50.226.100:48294.service - OpenSSH per-connection server daemon (60.50.226.100:48294).
Oct 8 19:53:28.917614 systemd[1]: Started sshd@21-168.119.51.132:22-139.178.89.65:43620.service - OpenSSH per-connection server daemon (139.178.89.65:43620).
Oct 8 19:53:29.427963 sshd[6071]: Received disconnect from 60.50.226.100 port 48294:11: Bye Bye [preauth]
Oct 8 19:53:29.427963 sshd[6071]: Disconnected from authenticating user root 60.50.226.100 port 48294 [preauth]
Oct 8 19:53:29.431867 systemd[1]: sshd@20-168.119.51.132:22-60.50.226.100:48294.service: Deactivated successfully.
Oct 8 19:53:29.914551 sshd[6075]: Accepted publickey for core from 139.178.89.65 port 43620 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:53:29.916391 sshd[6075]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:53:29.922549 systemd-logind[1566]: New session 13 of user core.
Oct 8 19:53:29.927728 systemd[1]: Started session-13.scope - Session 13 of User core.
Oct 8 19:53:30.669641 sshd[6075]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:30.674965 systemd[1]: sshd@21-168.119.51.132:22-139.178.89.65:43620.service: Deactivated successfully.
Oct 8 19:53:30.680809 systemd-logind[1566]: Session 13 logged out. Waiting for processes to exit.
Oct 8 19:53:30.681849 systemd[1]: session-13.scope: Deactivated successfully.
Oct 8 19:53:30.685166 systemd-logind[1566]: Removed session 13.
Oct 8 19:53:30.830803 systemd[1]: Started sshd@22-168.119.51.132:22-139.178.89.65:43626.service - OpenSSH per-connection server daemon (139.178.89.65:43626).
Oct 8 19:53:31.790386 sshd[6092]: Accepted publickey for core from 139.178.89.65 port 43626 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:53:31.794450 sshd[6092]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:53:31.802892 systemd-logind[1566]: New session 14 of user core.
Oct 8 19:53:31.808585 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 8 19:53:32.724818 sshd[6092]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:32.732656 systemd[1]: sshd@22-168.119.51.132:22-139.178.89.65:43626.service: Deactivated successfully.
Oct 8 19:53:32.737018 systemd[1]: session-14.scope: Deactivated successfully.
Oct 8 19:53:32.739179 systemd-logind[1566]: Session 14 logged out. Waiting for processes to exit.
Oct 8 19:53:32.740657 systemd-logind[1566]: Removed session 14.
Oct 8 19:53:32.894732 systemd[1]: Started sshd@23-168.119.51.132:22-139.178.89.65:43628.service - OpenSSH per-connection server daemon (139.178.89.65:43628).
Oct 8 19:53:33.892468 sshd[6104]: Accepted publickey for core from 139.178.89.65 port 43628 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:53:33.895256 sshd[6104]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:53:33.900410 systemd-logind[1566]: New session 15 of user core.
Oct 8 19:53:33.905762 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 8 19:53:36.507418 sshd[6104]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:36.513649 systemd[1]: sshd@23-168.119.51.132:22-139.178.89.65:43628.service: Deactivated successfully.
Oct 8 19:53:36.518045 systemd-logind[1566]: Session 15 logged out. Waiting for processes to exit.
Oct 8 19:53:36.518442 systemd[1]: session-15.scope: Deactivated successfully.
Oct 8 19:53:36.521849 systemd-logind[1566]: Removed session 15.
Oct 8 19:53:36.671658 systemd[1]: Started sshd@24-168.119.51.132:22-139.178.89.65:56408.service - OpenSSH per-connection server daemon (139.178.89.65:56408).
Oct 8 19:53:37.248418 systemd[1]: run-containerd-runc-k8s.io-e0bd6da19a595f685aa8224cca0d0b61b0519b3d5c2e46e5ff74844b8f3bd1d7-runc.WbNHVY.mount: Deactivated successfully.
Oct 8 19:53:37.663778 sshd[6140]: Accepted publickey for core from 139.178.89.65 port 56408 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:53:37.666088 sshd[6140]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:53:37.674010 systemd-logind[1566]: New session 16 of user core.
Oct 8 19:53:37.681833 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 8 19:53:38.593591 sshd[6140]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:38.602316 systemd[1]: sshd@24-168.119.51.132:22-139.178.89.65:56408.service: Deactivated successfully.
Oct 8 19:53:38.610124 systemd[1]: session-16.scope: Deactivated successfully.
Oct 8 19:53:38.611481 systemd-logind[1566]: Session 16 logged out. Waiting for processes to exit.
Oct 8 19:53:38.613073 systemd-logind[1566]: Removed session 16.
Oct 8 19:53:38.755611 systemd[1]: Started sshd@25-168.119.51.132:22-139.178.89.65:56412.service - OpenSSH per-connection server daemon (139.178.89.65:56412).
Oct 8 19:53:39.715964 sshd[6174]: Accepted publickey for core from 139.178.89.65 port 56412 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:53:39.717976 sshd[6174]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:53:39.725485 systemd-logind[1566]: New session 17 of user core.
Oct 8 19:53:39.733720 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 8 19:53:40.450018 sshd[6174]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:40.455572 systemd[1]: sshd@25-168.119.51.132:22-139.178.89.65:56412.service: Deactivated successfully.
Oct 8 19:53:40.457391 systemd-logind[1566]: Session 17 logged out. Waiting for processes to exit.
Oct 8 19:53:40.464253 systemd[1]: session-17.scope: Deactivated successfully.
Oct 8 19:53:40.466949 systemd-logind[1566]: Removed session 17.
Oct 8 19:53:45.622878 systemd[1]: Started sshd@26-168.119.51.132:22-139.178.89.65:41740.service - OpenSSH per-connection server daemon (139.178.89.65:41740).
Oct 8 19:53:46.607784 sshd[6215]: Accepted publickey for core from 139.178.89.65 port 41740 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:53:46.610055 sshd[6215]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:53:46.616855 systemd-logind[1566]: New session 18 of user core.
Oct 8 19:53:46.626792 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 8 19:53:47.368874 sshd[6215]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:47.375139 systemd-logind[1566]: Session 18 logged out. Waiting for processes to exit.
Oct 8 19:53:47.377676 systemd[1]: sshd@26-168.119.51.132:22-139.178.89.65:41740.service: Deactivated successfully.
Oct 8 19:53:47.383590 systemd[1]: session-18.scope: Deactivated successfully.
Oct 8 19:53:47.384886 systemd-logind[1566]: Removed session 18.
Oct 8 19:53:52.536012 systemd[1]: Started sshd@27-168.119.51.132:22-139.178.89.65:41754.service - OpenSSH per-connection server daemon (139.178.89.65:41754).
Oct 8 19:53:53.546436 sshd[6229]: Accepted publickey for core from 139.178.89.65 port 41754 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:53:53.554145 sshd[6229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:53:53.563625 systemd-logind[1566]: New session 19 of user core.
Oct 8 19:53:53.573088 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 8 19:53:54.312605 sshd[6229]: pam_unix(sshd:session): session closed for user core
Oct 8 19:53:54.319181 systemd[1]: sshd@27-168.119.51.132:22-139.178.89.65:41754.service: Deactivated successfully.
Oct 8 19:53:54.323619 systemd-logind[1566]: Session 19 logged out. Waiting for processes to exit.
Oct 8 19:53:54.324536 systemd[1]: session-19.scope: Deactivated successfully.
Oct 8 19:53:54.326145 systemd-logind[1566]: Removed session 19.
Oct 8 19:53:59.973691 systemd[1]: Started sshd@28-168.119.51.132:22-202.157.186.116:38408.service - OpenSSH per-connection server daemon (202.157.186.116:38408).
Oct 8 19:54:01.171215 sshd[6269]: Received disconnect from 202.157.186.116 port 38408:11: Bye Bye [preauth]
Oct 8 19:54:01.171215 sshd[6269]: Disconnected from authenticating user root 202.157.186.116 port 38408 [preauth]
Oct 8 19:54:01.174237 systemd[1]: sshd@28-168.119.51.132:22-202.157.186.116:38408.service: Deactivated successfully.
Oct 8 19:54:07.366771 systemd[1]: Started sshd@29-168.119.51.132:22-191.35.128.135:57172.service - OpenSSH per-connection server daemon (191.35.128.135:57172).
Oct 8 19:54:08.846449 sshd[6303]: Received disconnect from 191.35.128.135 port 57172:11: Bye Bye [preauth]
Oct 8 19:54:08.846449 sshd[6303]: Disconnected from authenticating user root 191.35.128.135 port 57172 [preauth]
Oct 8 19:54:08.849156 systemd[1]: sshd@29-168.119.51.132:22-191.35.128.135:57172.service: Deactivated successfully.
Oct 8 19:54:14.585256 systemd[1]: Started sshd@30-168.119.51.132:22-60.50.226.100:60508.service - OpenSSH per-connection server daemon (60.50.226.100:60508).
Oct 8 19:54:17.343778 sshd[6317]: Received disconnect from 60.50.226.100 port 60508:11: Bye Bye [preauth]
Oct 8 19:54:17.343778 sshd[6317]: Disconnected from authenticating user root 60.50.226.100 port 60508 [preauth]
Oct 8 19:54:17.347486 systemd[1]: sshd@30-168.119.51.132:22-60.50.226.100:60508.service: Deactivated successfully.