Oct 9 01:07:49.913742 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 9 01:07:49.913763 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Oct 8 23:34:40 -00 2024 Oct 9 01:07:49.913773 kernel: KASLR enabled Oct 9 01:07:49.913778 kernel: efi: EFI v2.7 by EDK II Oct 9 01:07:49.913784 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Oct 9 01:07:49.913789 kernel: random: crng init done Oct 9 01:07:49.913796 kernel: secureboot: Secure boot disabled Oct 9 01:07:49.913802 kernel: ACPI: Early table checksum verification disabled Oct 9 01:07:49.913808 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Oct 9 01:07:49.913816 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Oct 9 01:07:49.913821 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:07:49.913827 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:07:49.913833 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:07:49.913839 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:07:49.913847 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:07:49.913854 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:07:49.913861 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:07:49.913867 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:07:49.913873 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 9 01:07:49.913879 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Oct 9 01:07:49.913885 kernel: NUMA: Failed to initialise from firmware Oct 9 01:07:49.913891 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Oct 9 01:07:49.913898 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Oct 9 01:07:49.913904 kernel: Zone ranges: Oct 9 01:07:49.913910 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Oct 9 01:07:49.913917 kernel: DMA32 empty Oct 9 01:07:49.913923 kernel: Normal empty Oct 9 01:07:49.913929 kernel: Movable zone start for each node Oct 9 01:07:49.913935 kernel: Early memory node ranges Oct 9 01:07:49.913941 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Oct 9 01:07:49.913947 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Oct 9 01:07:49.913953 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Oct 9 01:07:49.913960 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Oct 9 01:07:49.913966 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Oct 9 01:07:49.913972 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Oct 9 01:07:49.913978 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Oct 9 01:07:49.913984 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Oct 9 01:07:49.913991 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Oct 9 01:07:49.913998 kernel: psci: probing for conduit method from ACPI. Oct 9 01:07:49.914004 kernel: psci: PSCIv1.1 detected in firmware. 
Oct 9 01:07:49.914013 kernel: psci: Using standard PSCI v0.2 function IDs Oct 9 01:07:49.914020 kernel: psci: Trusted OS migration not required Oct 9 01:07:49.914026 kernel: psci: SMC Calling Convention v1.1 Oct 9 01:07:49.914034 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 9 01:07:49.914041 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Oct 9 01:07:49.914048 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Oct 9 01:07:49.914109 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Oct 9 01:07:49.914118 kernel: Detected PIPT I-cache on CPU0 Oct 9 01:07:49.914124 kernel: CPU features: detected: GIC system register CPU interface Oct 9 01:07:49.914131 kernel: CPU features: detected: Hardware dirty bit management Oct 9 01:07:49.914137 kernel: CPU features: detected: Spectre-v4 Oct 9 01:07:49.914144 kernel: CPU features: detected: Spectre-BHB Oct 9 01:07:49.914151 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 9 01:07:49.914160 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 9 01:07:49.914166 kernel: CPU features: detected: ARM erratum 1418040 Oct 9 01:07:49.914173 kernel: CPU features: detected: SSBS not fully self-synchronizing Oct 9 01:07:49.914179 kernel: alternatives: applying boot alternatives Oct 9 01:07:49.914187 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d2d67b5440410ae2d0aa86eba97891969be0a7a421fa55f13442706ef7ed2a5e Oct 9 01:07:49.914194 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 9 01:07:49.914201 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 9 01:07:49.914207 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 9 01:07:49.914214 kernel: Fallback order for Node 0: 0 Oct 9 01:07:49.914221 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Oct 9 01:07:49.914227 kernel: Policy zone: DMA Oct 9 01:07:49.914235 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 9 01:07:49.914241 kernel: software IO TLB: area num 4. Oct 9 01:07:49.914248 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Oct 9 01:07:49.914255 kernel: Memory: 2386400K/2572288K available (10240K kernel code, 2184K rwdata, 8092K rodata, 39552K init, 897K bss, 185888K reserved, 0K cma-reserved) Oct 9 01:07:49.914262 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 9 01:07:49.914268 kernel: trace event string verifier disabled Oct 9 01:07:49.914275 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 9 01:07:49.914282 kernel: rcu: RCU event tracing is enabled. Oct 9 01:07:49.914289 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 9 01:07:49.914296 kernel: Trampoline variant of Tasks RCU enabled. Oct 9 01:07:49.914308 kernel: Tracing variant of Tasks RCU enabled. Oct 9 01:07:49.914317 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 9 01:07:49.914326 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 9 01:07:49.914332 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 9 01:07:49.914339 kernel: GICv3: 256 SPIs implemented Oct 9 01:07:49.914345 kernel: GICv3: 0 Extended SPIs implemented Oct 9 01:07:49.914352 kernel: Root IRQ handler: gic_handle_irq Oct 9 01:07:49.914359 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Oct 9 01:07:49.914365 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 9 01:07:49.914372 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 9 01:07:49.914378 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1) Oct 9 01:07:49.914385 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1) Oct 9 01:07:49.914392 kernel: GICv3: using LPI property table @0x00000000400f0000 Oct 9 01:07:49.914400 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Oct 9 01:07:49.914406 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 9 01:07:49.914413 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 9 01:07:49.914420 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 9 01:07:49.914427 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 9 01:07:49.914434 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 9 01:07:49.914440 kernel: arm-pv: using stolen time PV Oct 9 01:07:49.914448 kernel: Console: colour dummy device 80x25 Oct 9 01:07:49.914454 kernel: ACPI: Core revision 20230628 Oct 9 01:07:49.914461 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 9 01:07:49.914468 kernel: pid_max: default: 32768 minimum: 301 Oct 9 01:07:49.914477 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 9 01:07:49.914484 kernel: landlock: Up and running. Oct 9 01:07:49.914490 kernel: SELinux: Initializing. Oct 9 01:07:49.914497 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 9 01:07:49.914504 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 9 01:07:49.914511 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 9 01:07:49.914518 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Oct 9 01:07:49.914525 kernel: rcu: Hierarchical SRCU implementation. Oct 9 01:07:49.914532 kernel: rcu: Max phase no-delay instances is 400. Oct 9 01:07:49.914540 kernel: Platform MSI: ITS@0x8080000 domain created Oct 9 01:07:49.914547 kernel: PCI/MSI: ITS@0x8080000 domain created Oct 9 01:07:49.914554 kernel: Remapping and enabling EFI services. Oct 9 01:07:49.914561 kernel: smp: Bringing up secondary CPUs ... 
Oct 9 01:07:49.914568 kernel: Detected PIPT I-cache on CPU1 Oct 9 01:07:49.914575 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 9 01:07:49.914582 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Oct 9 01:07:49.914589 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 9 01:07:49.914595 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 9 01:07:49.914603 kernel: Detected PIPT I-cache on CPU2 Oct 9 01:07:49.914610 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Oct 9 01:07:49.914621 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Oct 9 01:07:49.914630 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 9 01:07:49.914637 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Oct 9 01:07:49.914644 kernel: Detected PIPT I-cache on CPU3 Oct 9 01:07:49.914651 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Oct 9 01:07:49.914658 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Oct 9 01:07:49.914665 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 9 01:07:49.914674 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Oct 9 01:07:49.914681 kernel: smp: Brought up 1 node, 4 CPUs Oct 9 01:07:49.914688 kernel: SMP: Total of 4 processors activated. Oct 9 01:07:49.914695 kernel: CPU features: detected: 32-bit EL0 Support Oct 9 01:07:49.914703 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 9 01:07:49.914710 kernel: CPU features: detected: Common not Private translations Oct 9 01:07:49.914717 kernel: CPU features: detected: CRC32 instructions Oct 9 01:07:49.914724 kernel: CPU features: detected: Enhanced Virtualization Traps Oct 9 01:07:49.914732 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 9 01:07:49.914739 kernel: CPU features: detected: LSE atomic instructions Oct 9 01:07:49.914746 kernel: CPU features: detected: Privileged Access Never Oct 9 01:07:49.914753 kernel: CPU features: detected: RAS Extension Support Oct 9 01:07:49.914761 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 9 01:07:49.914768 kernel: CPU: All CPU(s) started at EL1 Oct 9 01:07:49.914775 kernel: alternatives: applying system-wide alternatives Oct 9 01:07:49.914782 kernel: devtmpfs: initialized Oct 9 01:07:49.914789 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 9 01:07:49.914798 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 9 01:07:49.914805 kernel: pinctrl core: initialized pinctrl subsystem Oct 9 01:07:49.914812 kernel: SMBIOS 3.0.0 present. 
Oct 9 01:07:49.914819 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Oct 9 01:07:49.914826 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 9 01:07:49.914833 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 9 01:07:49.914841 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 9 01:07:49.914848 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 9 01:07:49.914855 kernel: audit: initializing netlink subsys (disabled) Oct 9 01:07:49.914864 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1 Oct 9 01:07:49.914871 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 9 01:07:49.914878 kernel: cpuidle: using governor menu Oct 9 01:07:49.914885 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 9 01:07:49.914892 kernel: ASID allocator initialised with 32768 entries Oct 9 01:07:49.914900 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 9 01:07:49.914907 kernel: Serial: AMBA PL011 UART driver Oct 9 01:07:49.914914 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Oct 9 01:07:49.914921 kernel: Modules: 0 pages in range for non-PLT usage Oct 9 01:07:49.914929 kernel: Modules: 508992 pages in range for PLT usage Oct 9 01:07:49.914937 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 9 01:07:49.914944 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Oct 9 01:07:49.914951 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Oct 9 01:07:49.914958 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Oct 9 01:07:49.914965 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 9 01:07:49.914972 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Oct 9 01:07:49.914979 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Oct 9 01:07:49.914987 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Oct 9 01:07:49.914995 kernel: ACPI: Added _OSI(Module Device) Oct 9 01:07:49.915002 kernel: ACPI: Added _OSI(Processor Device) Oct 9 01:07:49.915009 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 9 01:07:49.915017 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 9 01:07:49.915024 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 9 01:07:49.915031 kernel: ACPI: Interpreter enabled Oct 9 01:07:49.915037 kernel: ACPI: Using GIC for interrupt routing Oct 9 01:07:49.915044 kernel: ACPI: MCFG table detected, 1 entries Oct 9 01:07:49.915052 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 9 01:07:49.915065 kernel: printk: console [ttyAMA0] enabled Oct 9 01:07:49.915074 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 9 01:07:49.915208 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 9 01:07:49.915282 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 9 01:07:49.915358 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 9 01:07:49.915423 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 9 01:07:49.915487 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 9 01:07:49.915497 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 9 01:07:49.915507 kernel: PCI host bridge to bus 
0000:00 Oct 9 01:07:49.915580 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 9 01:07:49.915639 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 9 01:07:49.915695 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 9 01:07:49.915753 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 9 01:07:49.915832 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Oct 9 01:07:49.915926 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Oct 9 01:07:49.915995 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Oct 9 01:07:49.916074 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Oct 9 01:07:49.916143 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Oct 9 01:07:49.916208 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Oct 9 01:07:49.916274 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Oct 9 01:07:49.916354 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Oct 9 01:07:49.916425 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 9 01:07:49.916485 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 9 01:07:49.916546 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 9 01:07:49.916555 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 9 01:07:49.916563 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 9 01:07:49.916570 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 9 01:07:49.916577 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 9 01:07:49.916584 kernel: iommu: Default domain type: Translated Oct 9 01:07:49.916594 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 9 01:07:49.916601 kernel: efivars: Registered efivars operations Oct 9 01:07:49.916608 kernel: vgaarb: loaded Oct 9 01:07:49.916615 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 9 01:07:49.916622 kernel: VFS: Disk quotas dquot_6.6.0 Oct 9 01:07:49.916630 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 9 01:07:49.916637 kernel: pnp: PnP ACPI init Oct 9 01:07:49.916709 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 9 01:07:49.916722 kernel: pnp: PnP ACPI: found 1 devices Oct 9 01:07:49.916729 kernel: NET: Registered PF_INET protocol family Oct 9 01:07:49.916737 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 9 01:07:49.916745 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 9 01:07:49.916752 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 9 01:07:49.916759 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 9 01:07:49.916766 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 9 01:07:49.916773 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 9 01:07:49.916781 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 9 01:07:49.916790 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 9 01:07:49.916797 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 9 01:07:49.916804 kernel: PCI: CLS 0 bytes, default 64 Oct 9 01:07:49.916812 kernel: kvm [1]: HYP mode not available Oct 9 01:07:49.916819 kernel: Initialise system trusted keyrings Oct 9 
01:07:49.916826 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 9 01:07:49.916834 kernel: Key type asymmetric registered Oct 9 01:07:49.916841 kernel: Asymmetric key parser 'x509' registered Oct 9 01:07:49.916848 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 9 01:07:49.916857 kernel: io scheduler mq-deadline registered Oct 9 01:07:49.916864 kernel: io scheduler kyber registered Oct 9 01:07:49.916871 kernel: io scheduler bfq registered Oct 9 01:07:49.916879 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 9 01:07:49.916886 kernel: ACPI: button: Power Button [PWRB] Oct 9 01:07:49.916894 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 9 01:07:49.916958 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Oct 9 01:07:49.916968 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 9 01:07:49.916975 kernel: thunder_xcv, ver 1.0 Oct 9 01:07:49.916982 kernel: thunder_bgx, ver 1.0 Oct 9 01:07:49.916992 kernel: nicpf, ver 1.0 Oct 9 01:07:49.916999 kernel: nicvf, ver 1.0 Oct 9 01:07:49.917104 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 9 01:07:49.917174 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-09T01:07:49 UTC (1728436069) Oct 9 01:07:49.917184 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 9 01:07:49.917191 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Oct 9 01:07:49.917199 kernel: watchdog: Delayed init of the lockup detector failed: -19 Oct 9 01:07:49.917220 kernel: watchdog: Hard watchdog permanently disabled Oct 9 01:07:49.917228 kernel: NET: Registered PF_INET6 protocol family Oct 9 01:07:49.917236 kernel: Segment Routing with IPv6 Oct 9 01:07:49.917243 kernel: In-situ OAM (IOAM) with IPv6 Oct 9 01:07:49.917250 kernel: NET: Registered PF_PACKET protocol family Oct 9 01:07:49.917259 kernel: Key type dns_resolver registered Oct 9 01:07:49.917267 kernel: registered taskstats version 1 Oct 9 01:07:49.917274 kernel: Loading compiled-in X.509 certificates Oct 9 01:07:49.917281 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 80611b0a9480eaf6d787b908c6349fdb5d07fa81' Oct 9 01:07:49.917288 kernel: Key type .fscrypt registered Oct 9 01:07:49.917297 kernel: Key type fscrypt-provisioning registered Oct 9 01:07:49.917310 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 9 01:07:49.917318 kernel: ima: Allocated hash algorithm: sha1 Oct 9 01:07:49.917325 kernel: ima: No architecture policies found Oct 9 01:07:49.917333 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 9 01:07:49.917340 kernel: clk: Disabling unused clocks Oct 9 01:07:49.917347 kernel: Freeing unused kernel memory: 39552K Oct 9 01:07:49.917354 kernel: Run /init as init process Oct 9 01:07:49.917363 kernel: with arguments: Oct 9 01:07:49.917370 kernel: /init Oct 9 01:07:49.917377 kernel: with environment: Oct 9 01:07:49.917384 kernel: HOME=/ Oct 9 01:07:49.917391 kernel: TERM=linux Oct 9 01:07:49.917398 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 9 01:07:49.917407 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 01:07:49.917416 systemd[1]: Detected virtualization kvm. 
Oct 9 01:07:49.917426 systemd[1]: Detected architecture arm64. Oct 9 01:07:49.917433 systemd[1]: Running in initrd. Oct 9 01:07:49.917441 systemd[1]: No hostname configured, using default hostname. Oct 9 01:07:49.917448 systemd[1]: Hostname set to . Oct 9 01:07:49.917456 systemd[1]: Initializing machine ID from VM UUID. Oct 9 01:07:49.917463 systemd[1]: Queued start job for default target initrd.target. Oct 9 01:07:49.917471 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 01:07:49.917479 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 01:07:49.917488 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 9 01:07:49.917497 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 9 01:07:49.917504 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 9 01:07:49.917513 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 9 01:07:49.917522 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 9 01:07:49.917531 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 9 01:07:49.917539 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 01:07:49.917548 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 01:07:49.917555 systemd[1]: Reached target paths.target - Path Units. Oct 9 01:07:49.917563 systemd[1]: Reached target slices.target - Slice Units. Oct 9 01:07:49.917583 systemd[1]: Reached target swap.target - Swaps. Oct 9 01:07:49.917591 systemd[1]: Reached target timers.target - Timer Units. Oct 9 01:07:49.917599 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 01:07:49.917607 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 01:07:49.917615 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 9 01:07:49.917624 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 9 01:07:49.917632 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 01:07:49.917640 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 01:07:49.917647 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 01:07:49.917655 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 01:07:49.917663 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 9 01:07:49.917671 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 01:07:49.917679 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 9 01:07:49.917687 systemd[1]: Starting systemd-fsck-usr.service... Oct 9 01:07:49.917696 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 01:07:49.917704 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 01:07:49.917712 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:07:49.917720 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 9 01:07:49.917727 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Oct 9 01:07:49.917736 systemd[1]: Finished systemd-fsck-usr.service. Oct 9 01:07:49.917763 systemd-journald[238]: Collecting audit messages is disabled. Oct 9 01:07:49.917783 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 01:07:49.917793 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:07:49.917801 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 01:07:49.917810 systemd-journald[238]: Journal started Oct 9 01:07:49.917828 systemd-journald[238]: Runtime Journal (/run/log/journal/36bb054e04a84d52ae4cc1fc7cf8b907) is 5.9M, max 47.3M, 41.4M free. Oct 9 01:07:49.901320 systemd-modules-load[239]: Inserted module 'overlay' Oct 9 01:07:49.921070 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 9 01:07:49.922651 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 01:07:49.923144 systemd-modules-load[239]: Inserted module 'br_netfilter' Oct 9 01:07:49.923611 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 01:07:49.925716 kernel: Bridge firewalling registered Oct 9 01:07:49.924931 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 01:07:49.929704 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 01:07:49.931085 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 01:07:49.932513 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 9 01:07:49.941506 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 01:07:49.942824 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 01:07:49.945980 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 01:07:49.955280 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 01:07:49.956237 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 01:07:49.958616 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 9 01:07:49.972752 dracut-cmdline[282]: dracut-dracut-053 Oct 9 01:07:49.975383 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d2d67b5440410ae2d0aa86eba97891969be0a7a421fa55f13442706ef7ed2a5e Oct 9 01:07:49.981747 systemd-resolved[278]: Positive Trust Anchors: Oct 9 01:07:49.981818 systemd-resolved[278]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 01:07:49.981847 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 01:07:49.986430 systemd-resolved[278]: Defaulting to hostname 'linux'. Oct 9 01:07:49.987371 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 01:07:49.990674 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 01:07:50.050092 kernel: SCSI subsystem initialized Oct 9 01:07:50.055077 kernel: Loading iSCSI transport class v2.0-870. Oct 9 01:07:50.064107 kernel: iscsi: registered transport (tcp) Oct 9 01:07:50.075349 kernel: iscsi: registered transport (qla4xxx) Oct 9 01:07:50.075366 kernel: QLogic iSCSI HBA Driver Oct 9 01:07:50.116670 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 9 01:07:50.127219 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 9 01:07:50.142834 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 9 01:07:50.142913 kernel: device-mapper: uevent: version 1.0.3 Oct 9 01:07:50.142926 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 9 01:07:50.190080 kernel: raid6: neonx8 gen() 15666 MB/s Oct 9 01:07:50.207068 kernel: raid6: neonx4 gen() 15543 MB/s Oct 9 01:07:50.224067 kernel: raid6: neonx2 gen() 13167 MB/s Oct 9 01:07:50.241067 kernel: raid6: neonx1 gen() 10467 MB/s Oct 9 01:07:50.258072 kernel: raid6: int64x8 gen() 6134 MB/s Oct 9 01:07:50.275069 kernel: raid6: int64x4 gen() 7330 MB/s Oct 9 01:07:50.292070 kernel: raid6: int64x2 gen() 6099 MB/s Oct 9 01:07:50.309071 kernel: raid6: int64x1 gen() 5039 MB/s Oct 9 01:07:50.309095 kernel: raid6: using algorithm neonx8 gen() 15666 MB/s Oct 9 01:07:50.326078 kernel: raid6: .... xor() 11874 MB/s, rmw enabled Oct 9 01:07:50.326091 kernel: raid6: using neon recovery algorithm Oct 9 01:07:50.331073 kernel: xor: measuring software checksum speed Oct 9 01:07:50.331087 kernel: 8regs : 19707 MB/sec Oct 9 01:07:50.332502 kernel: 32regs : 18357 MB/sec Oct 9 01:07:50.332515 kernel: arm64_neon : 26910 MB/sec Oct 9 01:07:50.332529 kernel: xor: using function: arm64_neon (26910 MB/sec) Oct 9 01:07:50.384088 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 9 01:07:50.394179 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 9 01:07:50.406267 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 01:07:50.416739 systemd-udevd[464]: Using default interface naming scheme 'v255'. Oct 9 01:07:50.419842 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 01:07:50.426182 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 9 01:07:50.437203 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation Oct 9 01:07:50.461608 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Oct 9 01:07:50.476201 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 01:07:50.513635 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 01:07:50.521204 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 9 01:07:50.532514 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 9 01:07:50.534488 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 01:07:50.536974 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 01:07:50.538091 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 01:07:50.550229 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 9 01:07:50.559147 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Oct 9 01:07:50.560390 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 9 01:07:50.563430 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 9 01:07:50.563447 kernel: GPT:9289727 != 19775487 Oct 9 01:07:50.563457 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 9 01:07:50.563466 kernel: GPT:9289727 != 19775487 Oct 9 01:07:50.563481 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 9 01:07:50.563491 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 01:07:50.562249 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 9 01:07:50.572319 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 01:07:50.572439 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 01:07:50.576134 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (523) Oct 9 01:07:50.578073 kernel: BTRFS: device fsid c25b3a2f-539f-42a7-8842-97b35e474647 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (510) Oct 9 01:07:50.578109 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 01:07:50.578883 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 01:07:50.579083 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:07:50.580560 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:07:50.593313 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:07:50.600805 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 9 01:07:50.602465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:07:50.609693 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 9 01:07:50.616526 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 01:07:50.619985 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 9 01:07:50.620940 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 9 01:07:50.636240 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 9 01:07:50.637683 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 9 01:07:50.642322 disk-uuid[552]: Primary Header is updated. 
Oct 9 01:07:50.642322 disk-uuid[552]: Secondary Entries is updated. Oct 9 01:07:50.642322 disk-uuid[552]: Secondary Header is updated. Oct 9 01:07:50.647099 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 01:07:50.659231 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 01:07:51.654082 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 9 01:07:51.654507 disk-uuid[553]: The operation has completed successfully. Oct 9 01:07:51.677423 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 9 01:07:51.677524 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 9 01:07:51.696275 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 9 01:07:51.699911 sh[577]: Success Oct 9 01:07:51.712102 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 9 01:07:51.736761 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 9 01:07:51.744251 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 9 01:07:51.748093 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 9 01:07:51.756648 kernel: BTRFS info (device dm-0): first mount of filesystem c25b3a2f-539f-42a7-8842-97b35e474647 Oct 9 01:07:51.756683 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 9 01:07:51.756694 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 9 01:07:51.758504 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 9 01:07:51.758518 kernel: BTRFS info (device dm-0): using free space tree Oct 9 01:07:51.761920 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 9 01:07:51.762998 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 9 01:07:51.763696 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 9 01:07:51.765655 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 9 01:07:51.775670 kernel: BTRFS info (device vda6): first mount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b Oct 9 01:07:51.775720 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 9 01:07:51.775733 kernel: BTRFS info (device vda6): using free space tree Oct 9 01:07:51.778096 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 01:07:51.784067 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 9 01:07:51.785484 kernel: BTRFS info (device vda6): last unmount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b Oct 9 01:07:51.790765 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 9 01:07:51.797201 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 9 01:07:51.857481 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 01:07:51.865185 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 01:07:51.893841 systemd-networkd[768]: lo: Link UP Oct 9 01:07:51.893853 systemd-networkd[768]: lo: Gained carrier Oct 9 01:07:51.894645 systemd-networkd[768]: Enumeration completed Oct 9 01:07:51.894722 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Oct 9 01:07:51.895778 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:07:51.895782 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 01:07:51.896045 systemd[1]: Reached target network.target - Network. Oct 9 01:07:51.896777 systemd-networkd[768]: eth0: Link UP Oct 9 01:07:51.896780 systemd-networkd[768]: eth0: Gained carrier Oct 9 01:07:51.896787 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:07:51.903548 ignition[672]: Ignition 2.19.0 Oct 9 01:07:51.903555 ignition[672]: Stage: fetch-offline Oct 9 01:07:51.903597 ignition[672]: no configs at "/usr/lib/ignition/base.d" Oct 9 01:07:51.903605 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 01:07:51.907102 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 9 01:07:51.903843 ignition[672]: parsed url from cmdline: "" Oct 9 01:07:51.903846 ignition[672]: no config URL provided Oct 9 01:07:51.903852 ignition[672]: reading system config file "/usr/lib/ignition/user.ign" Oct 9 01:07:51.903859 ignition[672]: no config at "/usr/lib/ignition/user.ign" Oct 9 01:07:51.903886 ignition[672]: op(1): [started] loading QEMU firmware config module Oct 9 01:07:51.903891 ignition[672]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 9 01:07:51.909002 ignition[672]: op(1): [finished] loading QEMU firmware config module Oct 9 01:07:51.951118 ignition[672]: parsing config with SHA512: 994d75c32800e835fb5dcb91d04237456c67c32be7ff97e75477404335efdba8ab147a7461ff31b2ba90d27ee828ad072c9bc49258281c38347b7982decedc19 Oct 9 01:07:51.955204 unknown[672]: fetched base config from "system" Oct 9 01:07:51.955216 unknown[672]: fetched user config from "qemu" Oct 9 01:07:51.957601 ignition[672]: fetch-offline: fetch-offline passed Oct 9 01:07:51.957685 ignition[672]: Ignition finished successfully Oct 9 01:07:51.959637 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 01:07:51.960715 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 9 01:07:51.967222 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 9 01:07:51.978944 ignition[775]: Ignition 2.19.0 Oct 9 01:07:51.978955 ignition[775]: Stage: kargs Oct 9 01:07:51.979139 ignition[775]: no configs at "/usr/lib/ignition/base.d" Oct 9 01:07:51.979149 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 01:07:51.980019 ignition[775]: kargs: kargs passed Oct 9 01:07:51.980142 ignition[775]: Ignition finished successfully Oct 9 01:07:51.982832 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 9 01:07:51.995239 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 9 01:07:52.004961 ignition[783]: Ignition 2.19.0 Oct 9 01:07:52.004971 ignition[783]: Stage: disks Oct 9 01:07:52.005286 ignition[783]: no configs at "/usr/lib/ignition/base.d" Oct 9 01:07:52.005305 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 01:07:52.006251 ignition[783]: disks: disks passed Oct 9 01:07:52.006312 ignition[783]: Ignition finished successfully Oct 9 01:07:52.009098 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Oct 9 01:07:52.011732 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 9 01:07:52.013014 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 01:07:52.014605 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 01:07:52.015340 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 01:07:52.016038 systemd[1]: Reached target basic.target - Basic System. Oct 9 01:07:52.026224 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 9 01:07:52.037019 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 9 01:07:52.040482 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 9 01:07:52.042807 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 9 01:07:52.087906 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 9 01:07:52.089082 kernel: EXT4-fs (vda9): mounted filesystem 3a4adf89-ce2b-46a9-8e1a-433a27a27d16 r/w with ordered data mode. Quota mode: none. Oct 9 01:07:52.088984 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 9 01:07:52.099140 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 01:07:52.100676 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 9 01:07:52.101693 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 9 01:07:52.101730 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 9 01:07:52.101751 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 01:07:52.107646 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 9 01:07:52.109509 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (802) Oct 9 01:07:52.109664 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 9 01:07:52.114510 kernel: BTRFS info (device vda6): first mount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b Oct 9 01:07:52.114535 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 9 01:07:52.114545 kernel: BTRFS info (device vda6): using free space tree Oct 9 01:07:52.117091 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 01:07:52.117875 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 9 01:07:52.154950 initrd-setup-root[826]: cut: /sysroot/etc/passwd: No such file or directory Oct 9 01:07:52.159005 initrd-setup-root[833]: cut: /sysroot/etc/group: No such file or directory Oct 9 01:07:52.162819 initrd-setup-root[840]: cut: /sysroot/etc/shadow: No such file or directory Oct 9 01:07:52.166788 initrd-setup-root[847]: cut: /sysroot/etc/gshadow: No such file or directory Oct 9 01:07:52.236572 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 9 01:07:52.249220 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 9 01:07:52.250621 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 9 01:07:52.255074 kernel: BTRFS info (device vda6): last unmount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b Oct 9 01:07:52.271193 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Oct 9 01:07:52.272698 ignition[915]: INFO : Ignition 2.19.0 Oct 9 01:07:52.272698 ignition[915]: INFO : Stage: mount Oct 9 01:07:52.273888 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 01:07:52.273888 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 01:07:52.273888 ignition[915]: INFO : mount: mount passed Oct 9 01:07:52.273888 ignition[915]: INFO : Ignition finished successfully Oct 9 01:07:52.275225 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 9 01:07:52.289215 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 9 01:07:52.756279 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 9 01:07:52.769228 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 9 01:07:52.774153 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (929) Oct 9 01:07:52.774181 kernel: BTRFS info (device vda6): first mount of filesystem 6fd98f99-a3f6-49b2-9c3b-44aa7ae4e99b Oct 9 01:07:52.775607 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 9 01:07:52.775629 kernel: BTRFS info (device vda6): using free space tree Oct 9 01:07:52.778066 kernel: BTRFS info (device vda6): auto enabling async discard Oct 9 01:07:52.779097 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 9 01:07:52.793984 ignition[946]: INFO : Ignition 2.19.0 Oct 9 01:07:52.793984 ignition[946]: INFO : Stage: files Oct 9 01:07:52.795295 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 01:07:52.795295 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 01:07:52.795295 ignition[946]: DEBUG : files: compiled without relabeling support, skipping Oct 9 01:07:52.798215 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 9 01:07:52.798215 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 9 01:07:52.798215 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 9 01:07:52.798215 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 9 01:07:52.798215 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 9 01:07:52.797985 unknown[946]: wrote ssh authorized keys file for user: core Oct 9 01:07:52.803950 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Oct 9 01:07:52.803950 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Oct 9 01:07:52.847619 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 9 01:07:53.045571 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Oct 9 01:07:53.045571 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 9 01:07:53.048308 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 9 01:07:53.048308 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 9 01:07:53.048308 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): 
[finished] writing file "/sysroot/home/core/nginx.yaml" Oct 9 01:07:53.048308 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 01:07:53.048308 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 9 01:07:53.048308 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 01:07:53.048308 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 9 01:07:53.048308 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 01:07:53.048308 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 9 01:07:53.048308 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 9 01:07:53.048308 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 9 01:07:53.048308 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 9 01:07:53.048308 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Oct 9 01:07:53.252265 systemd-networkd[768]: eth0: Gained IPv6LL Oct 9 01:07:53.378834 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 9 01:07:53.673825 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 9 01:07:53.673825 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 9 01:07:53.677088 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 01:07:53.677088 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 9 01:07:53.677088 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 9 01:07:53.677088 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 9 01:07:53.677088 ignition[946]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 9 01:07:53.677088 ignition[946]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 9 01:07:53.677088 ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 9 01:07:53.677088 ignition[946]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 9 01:07:53.695537 ignition[946]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 9 
01:07:53.699064 ignition[946]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 9 01:07:53.701226 ignition[946]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 9 01:07:53.701226 ignition[946]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 9 01:07:53.701226 ignition[946]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 9 01:07:53.701226 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 9 01:07:53.701226 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 9 01:07:53.701226 ignition[946]: INFO : files: files passed Oct 9 01:07:53.701226 ignition[946]: INFO : Ignition finished successfully Oct 9 01:07:53.701588 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 9 01:07:53.720278 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 9 01:07:53.722784 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 9 01:07:53.725363 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 9 01:07:53.725465 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 9 01:07:53.730730 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory Oct 9 01:07:53.734202 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 01:07:53.734202 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 9 01:07:53.736683 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 9 01:07:53.737422 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 01:07:53.738963 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 9 01:07:53.748352 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 9 01:07:53.767611 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 9 01:07:53.768597 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 9 01:07:53.769706 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 9 01:07:53.770981 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 9 01:07:53.772331 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 9 01:07:53.773085 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 9 01:07:53.787804 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 01:07:53.796233 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 9 01:07:53.803744 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 9 01:07:53.804710 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 01:07:53.806201 systemd[1]: Stopped target timers.target - Timer Units. Oct 9 01:07:53.807520 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Oct 9 01:07:53.807636 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 9 01:07:53.809515 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 9 01:07:53.810943 systemd[1]: Stopped target basic.target - Basic System. Oct 9 01:07:53.812125 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 9 01:07:53.813668 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 9 01:07:53.815041 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 9 01:07:53.816492 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 9 01:07:53.817896 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 9 01:07:53.819369 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 9 01:07:53.820815 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 9 01:07:53.822131 systemd[1]: Stopped target swap.target - Swaps. Oct 9 01:07:53.823293 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 9 01:07:53.823413 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 9 01:07:53.825380 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 9 01:07:53.826775 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 01:07:53.828165 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 9 01:07:53.829159 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 01:07:53.830419 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 9 01:07:53.830539 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 9 01:07:53.832566 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 9 01:07:53.832682 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 9 01:07:53.834434 systemd[1]: Stopped target paths.target - Path Units. Oct 9 01:07:53.835534 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 9 01:07:53.840122 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 01:07:53.841127 systemd[1]: Stopped target slices.target - Slice Units. Oct 9 01:07:53.842942 systemd[1]: Stopped target sockets.target - Socket Units. Oct 9 01:07:53.844085 systemd[1]: iscsid.socket: Deactivated successfully. Oct 9 01:07:53.844175 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 9 01:07:53.845588 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 9 01:07:53.845667 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 9 01:07:53.846813 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 9 01:07:53.846916 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 9 01:07:53.848190 systemd[1]: ignition-files.service: Deactivated successfully. Oct 9 01:07:53.848292 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 9 01:07:53.862298 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 9 01:07:53.862968 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 9 01:07:53.863122 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 01:07:53.868290 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Oct 9 01:07:53.868941 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 9 01:07:53.869079 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 01:07:53.870395 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 9 01:07:53.870491 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 9 01:07:53.874870 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 9 01:07:53.876247 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 9 01:07:53.880926 ignition[1001]: INFO : Ignition 2.19.0 Oct 9 01:07:53.880926 ignition[1001]: INFO : Stage: umount Oct 9 01:07:53.880926 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 9 01:07:53.880926 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 9 01:07:53.880926 ignition[1001]: INFO : umount: umount passed Oct 9 01:07:53.880926 ignition[1001]: INFO : Ignition finished successfully Oct 9 01:07:53.879830 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 9 01:07:53.883352 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 9 01:07:53.883446 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 9 01:07:53.884457 systemd[1]: Stopped target network.target - Network. Oct 9 01:07:53.885623 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 9 01:07:53.885679 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 9 01:07:53.887043 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 9 01:07:53.887151 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 9 01:07:53.888334 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 9 01:07:53.888372 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 9 01:07:53.890551 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 9 01:07:53.890597 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 9 01:07:53.892622 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 9 01:07:53.893991 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 9 01:07:53.903104 systemd-networkd[768]: eth0: DHCPv6 lease lost Oct 9 01:07:53.904452 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 9 01:07:53.905289 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 9 01:07:53.906921 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 9 01:07:53.907724 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 9 01:07:53.909215 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 9 01:07:53.909258 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 9 01:07:53.915142 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 9 01:07:53.915785 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 9 01:07:53.915832 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 9 01:07:53.917316 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 9 01:07:53.917354 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 9 01:07:53.918716 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 9 01:07:53.918754 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Oct 9 01:07:53.920216 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 9 01:07:53.920252 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 01:07:53.921768 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 01:07:53.931389 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 9 01:07:53.931511 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 9 01:07:53.937017 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 9 01:07:53.937181 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 01:07:53.938976 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 9 01:07:53.939050 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 9 01:07:53.940628 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 9 01:07:53.940681 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 9 01:07:53.941524 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 9 01:07:53.941554 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 01:07:53.942782 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 9 01:07:53.942823 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 9 01:07:53.944762 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 9 01:07:53.944801 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 9 01:07:53.946812 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 9 01:07:53.946856 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 9 01:07:53.948887 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 9 01:07:53.948927 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 9 01:07:53.960269 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 9 01:07:53.961199 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 9 01:07:53.961257 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 01:07:53.962847 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 01:07:53.962890 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:07:53.964678 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 9 01:07:53.964751 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 9 01:07:53.967559 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 9 01:07:53.969256 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 9 01:07:53.977765 systemd[1]: Switching root. Oct 9 01:07:54.001145 systemd-journald[238]: Journal stopped Oct 9 01:07:54.683560 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
Oct 9 01:07:54.683618 kernel: SELinux: policy capability network_peer_controls=1 Oct 9 01:07:54.683635 kernel: SELinux: policy capability open_perms=1 Oct 9 01:07:54.683647 kernel: SELinux: policy capability extended_socket_class=1 Oct 9 01:07:54.683657 kernel: SELinux: policy capability always_check_network=0 Oct 9 01:07:54.683668 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 9 01:07:54.683678 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 9 01:07:54.683687 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 9 01:07:54.683698 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 9 01:07:54.683708 kernel: audit: type=1403 audit(1728436074.145:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 9 01:07:54.683719 systemd[1]: Successfully loaded SELinux policy in 33.788ms. Oct 9 01:07:54.683735 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.684ms. Oct 9 01:07:54.683747 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 9 01:07:54.683758 systemd[1]: Detected virtualization kvm. Oct 9 01:07:54.683768 systemd[1]: Detected architecture arm64. Oct 9 01:07:54.683778 systemd[1]: Detected first boot. Oct 9 01:07:54.683789 systemd[1]: Initializing machine ID from VM UUID. Oct 9 01:07:54.683799 zram_generator::config[1047]: No configuration found. Oct 9 01:07:54.683810 systemd[1]: Populated /etc with preset unit settings. Oct 9 01:07:54.683820 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 9 01:07:54.683832 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 9 01:07:54.683843 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 9 01:07:54.683854 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 9 01:07:54.683864 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 9 01:07:54.683874 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 9 01:07:54.683884 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 9 01:07:54.683896 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 9 01:07:54.683906 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 9 01:07:54.683918 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 9 01:07:54.683928 systemd[1]: Created slice user.slice - User and Session Slice. Oct 9 01:07:54.683941 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 9 01:07:54.683952 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 9 01:07:54.683963 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 9 01:07:54.683973 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 9 01:07:54.683984 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 9 01:07:54.683994 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
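The "systemd 255 running in system mode (+PAM +AUDIT ... -SYSVINIT ...)" line above packs the compile-time feature set into +/- flags. As a small, self-contained illustration (not part of the log), the flag string can be split into enabled and disabled sets:

# Illustrative helper: split a systemd feature string (as logged above) into
# enabled (+) and disabled (-) compile-time options.
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
            "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
            "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
            "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT")

enabled = {tok[1:] for tok in features.split() if tok.startswith("+")}
disabled = {tok[1:] for tok in features.split() if tok.startswith("-")}

print("enabled: ", sorted(enabled))
print("disabled:", sorted(disabled))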
Oct 9 01:07:54.684005 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 9 01:07:54.684017 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 9 01:07:54.684028 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 9 01:07:54.684038 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 9 01:07:54.684048 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 9 01:07:54.684088 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 9 01:07:54.684101 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 9 01:07:54.684112 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 9 01:07:54.684122 systemd[1]: Reached target slices.target - Slice Units. Oct 9 01:07:54.684135 systemd[1]: Reached target swap.target - Swaps. Oct 9 01:07:54.684145 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 9 01:07:54.684156 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 9 01:07:54.684167 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 9 01:07:54.684177 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 9 01:07:54.684187 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 9 01:07:54.684198 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 9 01:07:54.684213 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 9 01:07:54.684223 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 9 01:07:54.684235 systemd[1]: Mounting media.mount - External Media Directory... Oct 9 01:07:54.684245 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 9 01:07:54.684255 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 9 01:07:54.684266 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 9 01:07:54.684283 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 9 01:07:54.684299 systemd[1]: Reached target machines.target - Containers. Oct 9 01:07:54.684310 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 9 01:07:54.684321 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 01:07:54.684334 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 01:07:54.684359 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 9 01:07:54.684370 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 01:07:54.684382 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 01:07:54.684393 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 01:07:54.684403 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 9 01:07:54.684415 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 01:07:54.684425 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Oct 9 01:07:54.684435 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 9 01:07:54.684448 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 9 01:07:54.684459 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 9 01:07:54.684470 systemd[1]: Stopped systemd-fsck-usr.service. Oct 9 01:07:54.684480 kernel: fuse: init (API version 7.39) Oct 9 01:07:54.684490 kernel: loop: module loaded Oct 9 01:07:54.684500 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 01:07:54.684510 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 01:07:54.684521 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 9 01:07:54.684531 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 9 01:07:54.684545 kernel: ACPI: bus type drm_connector registered Oct 9 01:07:54.684555 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 01:07:54.684566 systemd[1]: verity-setup.service: Deactivated successfully. Oct 9 01:07:54.684576 systemd[1]: Stopped verity-setup.service. Oct 9 01:07:54.684586 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 9 01:07:54.684596 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 9 01:07:54.684607 systemd[1]: Mounted media.mount - External Media Directory. Oct 9 01:07:54.684633 systemd-journald[1111]: Collecting audit messages is disabled. Oct 9 01:07:54.684662 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 9 01:07:54.684673 systemd-journald[1111]: Journal started Oct 9 01:07:54.684693 systemd-journald[1111]: Runtime Journal (/run/log/journal/36bb054e04a84d52ae4cc1fc7cf8b907) is 5.9M, max 47.3M, 41.4M free. Oct 9 01:07:54.510178 systemd[1]: Queued start job for default target multi-user.target. Oct 9 01:07:54.528572 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 9 01:07:54.528938 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 9 01:07:54.686196 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 01:07:54.686709 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 9 01:07:54.687628 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 9 01:07:54.688607 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 01:07:54.689776 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 9 01:07:54.689906 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 9 01:07:54.691250 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 01:07:54.692146 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 01:07:54.693373 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 01:07:54.693504 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 01:07:54.694593 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 9 01:07:54.695686 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 01:07:54.695818 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 01:07:54.697287 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 9 01:07:54.697425 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
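Once systemd-journald.service is started (above), the same messages become queryable from the running system. A hedged sketch follows; the flags (-b, -u, -o json) are standard journalctl options, and the unit filter is only an example:

# Illustrative sketch: read this boot's journal entries for one unit as JSON
# (journalctl -o json emits one JSON object per line).
import json
import subprocess

out = subprocess.run(
    ["journalctl", "-b", "-u", "systemd-journald.service", "-o", "json"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    entry = json.loads(line)
    print(entry.get("__REALTIME_TIMESTAMP"), entry.get("MESSAGE"))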
Oct 9 01:07:54.698475 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 01:07:54.698605 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 01:07:54.699902 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 01:07:54.700992 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 9 01:07:54.702305 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 9 01:07:54.713953 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 9 01:07:54.729161 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 9 01:07:54.730946 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 9 01:07:54.731802 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 9 01:07:54.731838 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 01:07:54.733492 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 9 01:07:54.735302 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 9 01:07:54.737051 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 9 01:07:54.737927 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 01:07:54.739241 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 9 01:07:54.740853 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 9 01:07:54.741869 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 01:07:54.745240 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 9 01:07:54.746184 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 01:07:54.749030 systemd-journald[1111]: Time spent on flushing to /var/log/journal/36bb054e04a84d52ae4cc1fc7cf8b907 is 21.912ms for 853 entries. Oct 9 01:07:54.749030 systemd-journald[1111]: System Journal (/var/log/journal/36bb054e04a84d52ae4cc1fc7cf8b907) is 8.0M, max 195.6M, 187.6M free. Oct 9 01:07:54.782612 systemd-journald[1111]: Received client request to flush runtime journal. Oct 9 01:07:54.782665 kernel: loop0: detected capacity change from 0 to 194512 Oct 9 01:07:54.750432 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 01:07:54.753296 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 9 01:07:54.756440 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 9 01:07:54.758765 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 01:07:54.759953 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 9 01:07:54.763250 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 9 01:07:54.764492 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 9 01:07:54.765698 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Oct 9 01:07:54.771125 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 9 01:07:54.783320 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 9 01:07:54.789235 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 9 01:07:54.795874 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 9 01:07:54.797272 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 01:07:54.798145 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 9 01:07:54.806136 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 9 01:07:54.806764 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 9 01:07:54.810387 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 9 01:07:54.817295 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 01:07:54.818702 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 9 01:07:54.830094 kernel: loop1: detected capacity change from 0 to 116808 Oct 9 01:07:54.835007 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Oct 9 01:07:54.835026 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Oct 9 01:07:54.838731 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 01:07:54.865092 kernel: loop2: detected capacity change from 0 to 113456 Oct 9 01:07:54.916154 kernel: loop3: detected capacity change from 0 to 194512 Oct 9 01:07:54.924080 kernel: loop4: detected capacity change from 0 to 116808 Oct 9 01:07:54.929077 kernel: loop5: detected capacity change from 0 to 113456 Oct 9 01:07:54.932976 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 9 01:07:54.933394 (sd-merge)[1184]: Merged extensions into '/usr'. Oct 9 01:07:54.938510 systemd[1]: Reloading requested from client PID 1158 ('systemd-sysext') (unit systemd-sysext.service)... Oct 9 01:07:54.938528 systemd[1]: Reloading... Oct 9 01:07:55.005098 zram_generator::config[1211]: No configuration found. Oct 9 01:07:55.027157 ldconfig[1153]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 9 01:07:55.100374 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:07:55.135239 systemd[1]: Reloading finished in 196 ms. Oct 9 01:07:55.161789 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 9 01:07:55.163442 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 9 01:07:55.176356 systemd[1]: Starting ensure-sysext.service... Oct 9 01:07:55.178186 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 9 01:07:55.186493 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)... Oct 9 01:07:55.186509 systemd[1]: Reloading... Oct 9 01:07:55.195826 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
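The (sd-merge) messages above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr. The sketch below lists candidate images from the directories systemd-sysext is documented to scan; the directory set is an assumption taken from the systemd-sysext documentation, not from this log:

# Illustrative sketch: list sysext images/symlinks from directories systemd-sysext
# is documented to scan. The kubernetes.raw link written by Ignition earlier ends
# up under /etc/extensions and would be picked up by a scan like this.
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]  # assumed scan set

for d in SEARCH_DIRS:
    p = Path(d)
    if not p.is_dir():
        continue
    for entry in sorted(p.iterdir()):
        kind = "symlink" if entry.is_symlink() else "image"
        print(f"{d}: {entry.name} ({kind})")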
Oct 9 01:07:55.196105 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 9 01:07:55.196741 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 9 01:07:55.196972 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Oct 9 01:07:55.197014 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Oct 9 01:07:55.199943 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 01:07:55.200046 systemd-tmpfiles[1247]: Skipping /boot Oct 9 01:07:55.207348 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 01:07:55.207454 systemd-tmpfiles[1247]: Skipping /boot Oct 9 01:07:55.233088 zram_generator::config[1272]: No configuration found. Oct 9 01:07:55.310512 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:07:55.345144 systemd[1]: Reloading finished in 158 ms. Oct 9 01:07:55.359814 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 9 01:07:55.361009 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 01:07:55.378566 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 9 01:07:55.380737 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 9 01:07:55.382582 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 9 01:07:55.385208 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 9 01:07:55.389220 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 01:07:55.394002 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 9 01:07:55.399316 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 01:07:55.400686 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 01:07:55.403291 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 01:07:55.406043 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 01:07:55.407005 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 01:07:55.410800 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 9 01:07:55.412978 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 01:07:55.413183 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 01:07:55.419919 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 9 01:07:55.422045 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 9 01:07:55.424475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 01:07:55.424602 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 01:07:55.427031 systemd-udevd[1315]: Using default interface naming scheme 'v255'. 
Oct 9 01:07:55.429334 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 9 01:07:55.433635 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 01:07:55.434144 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 01:07:55.440082 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 9 01:07:55.441378 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 01:07:55.441495 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 01:07:55.443317 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 01:07:55.445227 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 9 01:07:55.449509 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 01:07:55.457247 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 01:07:55.459075 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 01:07:55.461943 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 01:07:55.463328 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 01:07:55.466771 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 01:07:55.469421 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 01:07:55.469476 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 9 01:07:55.469699 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 9 01:07:55.473308 systemd[1]: Finished ensure-sysext.service. Oct 9 01:07:55.475514 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 01:07:55.475667 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 01:07:55.477452 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 01:07:55.477573 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 01:07:55.486125 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1348) Oct 9 01:07:55.489122 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Oct 9 01:07:55.492078 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1348) Oct 9 01:07:55.498265 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 9 01:07:55.507208 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 01:07:55.507822 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 01:07:55.509281 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 01:07:55.527259 augenrules[1385]: No rules Oct 9 01:07:55.528202 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 01:07:55.528381 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 01:07:55.565649 systemd-resolved[1313]: Positive Trust Anchors: Oct 9 01:07:55.565826 systemd-resolved[1313]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 01:07:55.565859 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 01:07:55.583375 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1333) Oct 9 01:07:55.573980 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 9 01:07:55.576449 systemd-resolved[1313]: Defaulting to hostname 'linux'. Oct 9 01:07:55.582410 systemd[1]: Reached target time-set.target - System Time Set. Oct 9 01:07:55.585010 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 01:07:55.585408 systemd-networkd[1364]: lo: Link UP Oct 9 01:07:55.585411 systemd-networkd[1364]: lo: Gained carrier Oct 9 01:07:55.587103 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 01:07:55.589653 systemd-networkd[1364]: Enumeration completed Oct 9 01:07:55.589710 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 9 01:07:55.591153 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:07:55.591164 systemd-networkd[1364]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 01:07:55.592020 systemd-networkd[1364]: eth0: Link UP Oct 9 01:07:55.592028 systemd-networkd[1364]: eth0: Gained carrier Oct 9 01:07:55.592040 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:07:55.592382 systemd[1]: Reached target network.target - Network. Oct 9 01:07:55.601234 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 9 01:07:55.609132 systemd-networkd[1364]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 9 01:07:55.609958 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection. Oct 9 01:07:55.610264 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 9 01:07:55.610954 systemd-timesyncd[1377]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 9 01:07:55.610999 systemd-timesyncd[1377]: Initial clock synchronization to Wed 2024-10-09 01:07:55.214065 UTC. Oct 9 01:07:55.617328 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 9 01:07:55.627016 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:07:55.632098 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 9 01:07:55.642111 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 9 01:07:55.652316 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
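The DHCPv4 lease reported above (10.0.0.151/16, gateway 10.0.0.1) can be sanity-checked with Python's ipaddress module; a short worked example using exactly the values from the log:

# Worked example with the address from the log: derive network, netmask and
# broadcast for the DHCPv4 lease 10.0.0.151/16 and confirm the gateway is on-link.
import ipaddress

iface = ipaddress.ip_interface("10.0.0.151/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)                    # 10.0.0.0/16
print(iface.netmask)                    # 255.255.0.0
print(iface.network.broadcast_address)  # 10.0.255.255
print(gateway in iface.network)         # True -> gateway is on-link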
Oct 9 01:07:55.666650 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:07:55.668785 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 01:07:55.703475 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 9 01:07:55.704601 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 01:07:55.707169 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 01:07:55.707985 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 9 01:07:55.708899 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 9 01:07:55.710010 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 9 01:07:55.710917 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 9 01:07:55.711857 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 9 01:07:55.712767 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 9 01:07:55.712802 systemd[1]: Reached target paths.target - Path Units. Oct 9 01:07:55.713557 systemd[1]: Reached target timers.target - Timer Units. Oct 9 01:07:55.715143 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 9 01:07:55.717376 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 9 01:07:55.732980 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 9 01:07:55.734921 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 9 01:07:55.736235 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 9 01:07:55.737115 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 01:07:55.737801 systemd[1]: Reached target basic.target - Basic System. Oct 9 01:07:55.738516 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 9 01:07:55.738547 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 9 01:07:55.739421 systemd[1]: Starting containerd.service - containerd container runtime... Oct 9 01:07:55.741114 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 9 01:07:55.742578 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 01:07:55.745200 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 9 01:07:55.746947 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 9 01:07:55.747739 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 9 01:07:55.749503 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 9 01:07:55.754100 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 9 01:07:55.757215 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 9 01:07:55.760367 jq[1414]: false Oct 9 01:07:55.760257 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 9 01:07:55.765263 systemd[1]: Starting systemd-logind.service - User Login Management... 
Oct 9 01:07:55.772014 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 9 01:07:55.772435 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 9 01:07:55.773227 systemd[1]: Starting update-engine.service - Update Engine... Oct 9 01:07:55.773852 dbus-daemon[1413]: [system] SELinux support is enabled Oct 9 01:07:55.774927 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 9 01:07:55.776311 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 9 01:07:55.779769 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 9 01:07:55.781608 extend-filesystems[1415]: Found loop3 Oct 9 01:07:55.781608 extend-filesystems[1415]: Found loop4 Oct 9 01:07:55.784161 extend-filesystems[1415]: Found loop5 Oct 9 01:07:55.784161 extend-filesystems[1415]: Found vda Oct 9 01:07:55.784161 extend-filesystems[1415]: Found vda1 Oct 9 01:07:55.784161 extend-filesystems[1415]: Found vda2 Oct 9 01:07:55.784161 extend-filesystems[1415]: Found vda3 Oct 9 01:07:55.784161 extend-filesystems[1415]: Found usr Oct 9 01:07:55.784161 extend-filesystems[1415]: Found vda4 Oct 9 01:07:55.784161 extend-filesystems[1415]: Found vda6 Oct 9 01:07:55.784161 extend-filesystems[1415]: Found vda7 Oct 9 01:07:55.784161 extend-filesystems[1415]: Found vda9 Oct 9 01:07:55.784161 extend-filesystems[1415]: Checking size of /dev/vda9 Oct 9 01:07:55.783805 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 9 01:07:55.783966 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 9 01:07:55.798187 jq[1426]: true Oct 9 01:07:55.786489 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 9 01:07:55.786625 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 9 01:07:55.794350 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 9 01:07:55.794396 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 9 01:07:55.797469 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 9 01:07:55.797492 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 9 01:07:55.807474 jq[1434]: true Oct 9 01:07:55.818101 (ntainerd)[1441]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 9 01:07:55.819816 systemd[1]: motdgen.service: Deactivated successfully. Oct 9 01:07:55.819990 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 9 01:07:55.822390 tar[1430]: linux-arm64/helm Oct 9 01:07:55.823490 systemd-logind[1421]: Watching system buttons on /dev/input/event0 (Power Button) Oct 9 01:07:55.826082 extend-filesystems[1415]: Resized partition /dev/vda9 Oct 9 01:07:55.828161 update_engine[1425]: I20241009 01:07:55.825868 1425 main.cc:92] Flatcar Update Engine starting Oct 9 01:07:55.827002 systemd-logind[1421]: New seat seat0. 
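The extend-filesystems entries above enumerate the block devices (loop3 through loop5, vda and its partitions) before checking the size of /dev/vda9. A rough, illustrative equivalent of that scan using lsblk's JSON output (standard lsblk flags), shown only as a sketch:

# Illustrative sketch: enumerate block devices and sizes, roughly what the
# "Found ..." / "Checking size of /dev/vda9" probing above is doing.
# Uses lsblk's JSON output (-J), byte sizes (-b) and an explicit column list (-o).
import json
import subprocess

out = subprocess.run(
    ["lsblk", "-J", "-b", "-o", "NAME,SIZE,TYPE,MOUNTPOINT"],
    capture_output=True, text=True, check=True,
).stdout

def walk(devices, depth=0):
    for dev in devices:
        print("  " * depth + f"{dev['name']}: {dev['size']} bytes ({dev['type']})")
        walk(dev.get("children", []), depth + 1)

walk(json.loads(out)["blockdevices"])

The resize2fs and EXT4 messages that follow report the root filesystem growing from 553472 to 1864699 blocks of 4 KiB, i.e. from roughly 2.1 GiB to about 7.1 GiB.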
Oct 9 01:07:55.828505 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024) Oct 9 01:07:55.829244 systemd[1]: Started systemd-logind.service - User Login Management. Oct 9 01:07:55.832485 systemd[1]: Started update-engine.service - Update Engine. Oct 9 01:07:55.833464 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 9 01:07:55.833515 update_engine[1425]: I20241009 01:07:55.833133 1425 update_check_scheduler.cc:74] Next update check in 10m22s Oct 9 01:07:55.838089 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1342) Oct 9 01:07:55.842326 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 9 01:07:55.875530 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 9 01:07:55.887521 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 9 01:07:55.887521 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 9 01:07:55.887521 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 9 01:07:55.894502 extend-filesystems[1415]: Resized filesystem in /dev/vda9 Oct 9 01:07:55.902224 bash[1466]: Updated "/home/core/.ssh/authorized_keys" Oct 9 01:07:55.888865 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 9 01:07:55.889033 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 9 01:07:55.895307 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 9 01:07:55.897462 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 9 01:07:55.911228 locksmithd[1465]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 9 01:07:56.039760 containerd[1441]: time="2024-10-09T01:07:56.039680835Z" level=info msg="starting containerd" revision=b2ce781edcbd6cb758f172ecab61c79d607cc41d version=v1.7.22 Oct 9 01:07:56.068998 containerd[1441]: time="2024-10-09T01:07:56.068921621Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 9 01:07:56.070446 containerd[1441]: time="2024-10-09T01:07:56.070402543Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 9 01:07:56.070528 containerd[1441]: time="2024-10-09T01:07:56.070514120Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 9 01:07:56.070584 containerd[1441]: time="2024-10-09T01:07:56.070572969Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 9 01:07:56.070836 containerd[1441]: time="2024-10-09T01:07:56.070814562Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 9 01:07:56.070964 containerd[1441]: time="2024-10-09T01:07:56.070948189Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 9 01:07:56.071176 containerd[1441]: time="2024-10-09T01:07:56.071153971Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 01:07:56.071295 containerd[1441]: time="2024-10-09T01:07:56.071279843Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 9 01:07:56.071586 containerd[1441]: time="2024-10-09T01:07:56.071558996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 01:07:56.071691 containerd[1441]: time="2024-10-09T01:07:56.071675364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 9 01:07:56.071800 containerd[1441]: time="2024-10-09T01:07:56.071784889Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 01:07:56.071875 containerd[1441]: time="2024-10-09T01:07:56.071846589Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 9 01:07:56.072108 containerd[1441]: time="2024-10-09T01:07:56.072033096Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 9 01:07:56.072499 containerd[1441]: time="2024-10-09T01:07:56.072470587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 9 01:07:56.072721 containerd[1441]: time="2024-10-09T01:07:56.072700851Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 9 01:07:56.072853 containerd[1441]: time="2024-10-09T01:07:56.072836645Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 9 01:07:56.073049 containerd[1441]: time="2024-10-09T01:07:56.073031174Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 9 01:07:56.073229 containerd[1441]: time="2024-10-09T01:07:56.073210801Z" level=info msg="metadata content store policy set" policy=shared Oct 9 01:07:56.076572 containerd[1441]: time="2024-10-09T01:07:56.076548777Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 9 01:07:56.076715 containerd[1441]: time="2024-10-09T01:07:56.076696623Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 9 01:07:56.076881 containerd[1441]: time="2024-10-09T01:07:56.076829451Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 9 01:07:56.076948 containerd[1441]: time="2024-10-09T01:07:56.076934490Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 9 01:07:56.076997 containerd[1441]: time="2024-10-09T01:07:56.076986306Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 9 01:07:56.077238 containerd[1441]: time="2024-10-09T01:07:56.077217559Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Oct 9 01:07:56.079105 containerd[1441]: time="2024-10-09T01:07:56.077636840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 9 01:07:56.079105 containerd[1441]: time="2024-10-09T01:07:56.077758226Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 9 01:07:56.079105 containerd[1441]: time="2024-10-09T01:07:56.077774268Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 9 01:07:56.079105 containerd[1441]: time="2024-10-09T01:07:56.077787498Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 9 01:07:56.079105 containerd[1441]: time="2024-10-09T01:07:56.077799549Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 9 01:07:56.079105 containerd[1441]: time="2024-10-09T01:07:56.077810498Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 9 01:07:56.079105 containerd[1441]: time="2024-10-09T01:07:56.077821903Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 9 01:07:56.079105 containerd[1441]: time="2024-10-09T01:07:56.077833764Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 9 01:07:56.079105 containerd[1441]: time="2024-10-09T01:07:56.077845663Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 9 01:07:56.079105 containerd[1441]: time="2024-10-09T01:07:56.077856764Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 9 01:07:56.079105 containerd[1441]: time="2024-10-09T01:07:56.077867028Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 9 01:07:56.079105 containerd[1441]: time="2024-10-09T01:07:56.077876418Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 9 01:07:56.079105 containerd[1441]: time="2024-10-09T01:07:56.077894362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 9 01:07:56.079105 containerd[1441]: time="2024-10-09T01:07:56.077906831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 9 01:07:56.079379 containerd[1441]: time="2024-10-09T01:07:56.077917057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 9 01:07:56.079379 containerd[1441]: time="2024-10-09T01:07:56.077929527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 9 01:07:56.079379 containerd[1441]: time="2024-10-09T01:07:56.077941160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 9 01:07:56.079379 containerd[1441]: time="2024-10-09T01:07:56.077951918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 9 01:07:56.079379 containerd[1441]: time="2024-10-09T01:07:56.077962031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Oct 9 01:07:56.079379 containerd[1441]: time="2024-10-09T01:07:56.077973816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 9 01:07:56.079379 containerd[1441]: time="2024-10-09T01:07:56.077985563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 9 01:07:56.079379 containerd[1441]: time="2024-10-09T01:07:56.078015938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 9 01:07:56.079379 containerd[1441]: time="2024-10-09T01:07:56.078028217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 9 01:07:56.079379 containerd[1441]: time="2024-10-09T01:07:56.078038519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 9 01:07:56.079379 containerd[1441]: time="2024-10-09T01:07:56.078048555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 9 01:07:56.079379 containerd[1441]: time="2024-10-09T01:07:56.078073532Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 9 01:07:56.079379 containerd[1441]: time="2024-10-09T01:07:56.078095011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 9 01:07:56.079379 containerd[1441]: time="2024-10-09T01:07:56.078107405Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 9 01:07:56.079379 containerd[1441]: time="2024-10-09T01:07:56.078118847Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 9 01:07:56.079609 containerd[1441]: time="2024-10-09T01:07:56.078233428Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 9 01:07:56.079609 containerd[1441]: time="2024-10-09T01:07:56.078249737Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 9 01:07:56.079609 containerd[1441]: time="2024-10-09T01:07:56.078258405Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 9 01:07:56.079609 containerd[1441]: time="2024-10-09T01:07:56.078270380Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 9 01:07:56.079609 containerd[1441]: time="2024-10-09T01:07:56.078278782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 9 01:07:56.079609 containerd[1441]: time="2024-10-09T01:07:56.078290453Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 9 01:07:56.079609 containerd[1441]: time="2024-10-09T01:07:56.078300109Z" level=info msg="NRI interface is disabled by configuration." Oct 9 01:07:56.079609 containerd[1441]: time="2024-10-09T01:07:56.078310069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 9 01:07:56.079734 containerd[1441]: time="2024-10-09T01:07:56.078543146Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 9 01:07:56.079734 containerd[1441]: time="2024-10-09T01:07:56.078584584Z" level=info msg="Connect containerd service" Oct 9 01:07:56.079734 containerd[1441]: time="2024-10-09T01:07:56.078608344Z" level=info msg="using legacy CRI server" Oct 9 01:07:56.079734 containerd[1441]: time="2024-10-09T01:07:56.078614503Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 9 01:07:56.079734 containerd[1441]: time="2024-10-09T01:07:56.078700914Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 9 01:07:56.079734 containerd[1441]: time="2024-10-09T01:07:56.079275453Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 01:07:56.080121 
containerd[1441]: time="2024-10-09T01:07:56.080048323Z" level=info msg="Start subscribing containerd event" Oct 9 01:07:56.080156 containerd[1441]: time="2024-10-09T01:07:56.080135190Z" level=info msg="Start recovering state" Oct 9 01:07:56.080316 containerd[1441]: time="2024-10-09T01:07:56.080284366Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 9 01:07:56.080421 containerd[1441]: time="2024-10-09T01:07:56.080404155Z" level=info msg="Start event monitor" Oct 9 01:07:56.080446 containerd[1441]: time="2024-10-09T01:07:56.080430006Z" level=info msg="Start snapshots syncer" Oct 9 01:07:56.080572 containerd[1441]: time="2024-10-09T01:07:56.080554965Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 9 01:07:56.081263 containerd[1441]: time="2024-10-09T01:07:56.080534702Z" level=info msg="Start cni network conf syncer for default" Oct 9 01:07:56.081304 containerd[1441]: time="2024-10-09T01:07:56.081263892Z" level=info msg="Start streaming server" Oct 9 01:07:56.082661 containerd[1441]: time="2024-10-09T01:07:56.081759281Z" level=info msg="containerd successfully booted in 0.046189s" Oct 9 01:07:56.082151 systemd[1]: Started containerd.service - containerd container runtime. Oct 9 01:07:56.192463 tar[1430]: linux-arm64/LICENSE Oct 9 01:07:56.192605 tar[1430]: linux-arm64/README.md Oct 9 01:07:56.205439 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 9 01:07:56.884241 sshd_keygen[1424]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 9 01:07:56.901567 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 9 01:07:56.911283 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 9 01:07:56.915997 systemd[1]: issuegen.service: Deactivated successfully. Oct 9 01:07:56.916203 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 9 01:07:56.918447 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 9 01:07:56.929528 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 9 01:07:56.932470 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 9 01:07:56.934370 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 9 01:07:56.935595 systemd[1]: Reached target getty.target - Login Prompts. Oct 9 01:07:57.092192 systemd-networkd[1364]: eth0: Gained IPv6LL Oct 9 01:07:57.094316 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 9 01:07:57.096031 systemd[1]: Reached target network-online.target - Network is Online. Oct 9 01:07:57.108371 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 9 01:07:57.110509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:07:57.112255 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 9 01:07:57.125330 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 9 01:07:57.125528 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 9 01:07:57.126733 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 9 01:07:57.131228 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 9 01:07:57.557927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:07:57.559385 systemd[1]: Reached target multi-user.target - Multi-User System. 
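The "serving..." entries above show containerd exposing its GRPC API at /run/containerd/containerd.sock and a companion TTRPC socket at /run/containerd/containerd.sock.ttrpc before systemd reports the unit as started. The following stdlib-only Go sketch is an illustration (not part of the boot flow) that simply checks both sockets accept connections:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket paths taken from the "serving..." entries in the journal above.
	paths := []string{
		"/run/containerd/containerd.sock",       // GRPC endpoint
		"/run/containerd/containerd.sock.ttrpc", // TTRPC endpoint
	}
	for _, p := range paths {
		conn, err := net.DialTimeout("unix", p, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: not reachable: %v\n", p, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: accepting connections\n", p)
	}
}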
Oct 9 01:07:57.561973 (kubelet)[1526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:07:57.562190 systemd[1]: Startup finished in 543ms (kernel) + 4.440s (initrd) + 3.452s (userspace) = 8.437s. Oct 9 01:07:57.996843 kubelet[1526]: E1009 01:07:57.996704 1526 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:07:57.999223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:07:57.999371 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:08:02.115702 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 9 01:08:02.116736 systemd[1]: Started sshd@0-10.0.0.151:22-10.0.0.1:55592.service - OpenSSH per-connection server daemon (10.0.0.1:55592). Oct 9 01:08:02.159696 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 55592 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:08:02.161252 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:08:02.169997 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 9 01:08:02.182310 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 9 01:08:02.183894 systemd-logind[1421]: New session 1 of user core. Oct 9 01:08:02.190842 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 9 01:08:02.193736 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 9 01:08:02.201606 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 9 01:08:02.272913 systemd[1544]: Queued start job for default target default.target. Oct 9 01:08:02.279908 systemd[1544]: Created slice app.slice - User Application Slice. Oct 9 01:08:02.279951 systemd[1544]: Reached target paths.target - Paths. Oct 9 01:08:02.279963 systemd[1544]: Reached target timers.target - Timers. Oct 9 01:08:02.281117 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 9 01:08:02.290362 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 9 01:08:02.290421 systemd[1544]: Reached target sockets.target - Sockets. Oct 9 01:08:02.290432 systemd[1544]: Reached target basic.target - Basic System. Oct 9 01:08:02.290465 systemd[1544]: Reached target default.target - Main User Target. Oct 9 01:08:02.290487 systemd[1544]: Startup finished in 84ms. Oct 9 01:08:02.290729 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 9 01:08:02.291938 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 9 01:08:02.376413 systemd[1]: Started sshd@1-10.0.0.151:22-10.0.0.1:55606.service - OpenSSH per-connection server daemon (10.0.0.1:55606). Oct 9 01:08:02.405499 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 55606 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:08:02.406984 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:08:02.410739 systemd-logind[1421]: New session 2 of user core. Oct 9 01:08:02.420256 systemd[1]: Started session-2.scope - Session 2 of User core. 
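The kubelet exits with status 1 here because /var/lib/kubelet/config.yaml does not exist yet; that file is typically written later by kubeadm init or kubeadm join, so these early failures and the systemd restart cycle that follows are expected on first boot. A minimal Go sketch, for illustration only, that reproduces the same failure mode by reading the path reported in the error:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken verbatim from the kubelet error above.
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.ReadFile(path); err != nil {
		// Before kubeadm writes the file this prints:
		// open /var/lib/kubelet/config.yaml: no such file or directory
		fmt.Println("kubelet would exit:", err)
		os.Exit(1)
	}
	fmt.Println("kubelet config present")
}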
Oct 9 01:08:02.471096 sshd[1555]: pam_unix(sshd:session): session closed for user core Oct 9 01:08:02.482308 systemd[1]: sshd@1-10.0.0.151:22-10.0.0.1:55606.service: Deactivated successfully. Oct 9 01:08:02.483554 systemd[1]: session-2.scope: Deactivated successfully. Oct 9 01:08:02.484725 systemd-logind[1421]: Session 2 logged out. Waiting for processes to exit. Oct 9 01:08:02.485768 systemd[1]: Started sshd@2-10.0.0.151:22-10.0.0.1:55622.service - OpenSSH per-connection server daemon (10.0.0.1:55622). Oct 9 01:08:02.486470 systemd-logind[1421]: Removed session 2. Oct 9 01:08:02.519704 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 55622 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:08:02.520837 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:08:02.524531 systemd-logind[1421]: New session 3 of user core. Oct 9 01:08:02.534194 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 9 01:08:02.581166 sshd[1562]: pam_unix(sshd:session): session closed for user core Oct 9 01:08:02.589324 systemd[1]: sshd@2-10.0.0.151:22-10.0.0.1:55622.service: Deactivated successfully. Oct 9 01:08:02.590637 systemd[1]: session-3.scope: Deactivated successfully. Oct 9 01:08:02.593137 systemd-logind[1421]: Session 3 logged out. Waiting for processes to exit. Oct 9 01:08:02.594204 systemd[1]: Started sshd@3-10.0.0.151:22-10.0.0.1:55636.service - OpenSSH per-connection server daemon (10.0.0.1:55636). Oct 9 01:08:02.596428 systemd-logind[1421]: Removed session 3. Oct 9 01:08:02.627146 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 55636 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:08:02.628406 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:08:02.633934 systemd-logind[1421]: New session 4 of user core. Oct 9 01:08:02.647132 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 9 01:08:02.698075 sshd[1569]: pam_unix(sshd:session): session closed for user core Oct 9 01:08:02.710114 systemd[1]: sshd@3-10.0.0.151:22-10.0.0.1:55636.service: Deactivated successfully. Oct 9 01:08:02.712366 systemd[1]: session-4.scope: Deactivated successfully. Oct 9 01:08:02.713440 systemd-logind[1421]: Session 4 logged out. Waiting for processes to exit. Oct 9 01:08:02.714433 systemd[1]: Started sshd@4-10.0.0.151:22-10.0.0.1:34538.service - OpenSSH per-connection server daemon (10.0.0.1:34538). Oct 9 01:08:02.715098 systemd-logind[1421]: Removed session 4. Oct 9 01:08:02.746325 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 34538 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:08:02.747424 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:08:02.751765 systemd-logind[1421]: New session 5 of user core. Oct 9 01:08:02.767191 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 9 01:08:02.824225 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 9 01:08:02.824706 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:08:02.840831 sudo[1579]: pam_unix(sudo:session): session closed for user root Oct 9 01:08:02.842596 sshd[1576]: pam_unix(sshd:session): session closed for user core Oct 9 01:08:02.852344 systemd[1]: sshd@4-10.0.0.151:22-10.0.0.1:34538.service: Deactivated successfully. Oct 9 01:08:02.854313 systemd[1]: session-5.scope: Deactivated successfully. 
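Each SSH connection above follows the same lifecycle: sshd accepts the core user's public key, pam_unix opens the session, systemd-logind assigns a session number, and a per-connection sshd@... service unit is created and later deactivated. A small, hypothetical Go helper (not something the host runs) that extracts the user, source address, and port from the "Accepted publickey" lines:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Sample lines copied from the journal above.
	lines := []string{
		`sshd[1562]: Accepted publickey for core from 10.0.0.1 port 55622 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs`,
		`sshd[1569]: Accepted publickey for core from 10.0.0.1 port 55636 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs`,
	}
	re := regexp.MustCompile(`Accepted publickey for (\S+) from (\S+) port (\d+)`)
	for _, l := range lines {
		if m := re.FindStringSubmatch(l); m != nil {
			fmt.Printf("user=%s addr=%s port=%s\n", m[1], m[2], m[3])
		}
	}
}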
Oct 9 01:08:02.855543 systemd-logind[1421]: Session 5 logged out. Waiting for processes to exit. Oct 9 01:08:02.856814 systemd[1]: Started sshd@5-10.0.0.151:22-10.0.0.1:34540.service - OpenSSH per-connection server daemon (10.0.0.1:34540). Oct 9 01:08:02.857501 systemd-logind[1421]: Removed session 5. Oct 9 01:08:02.889558 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 34540 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:08:02.890869 sshd[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:08:02.895129 systemd-logind[1421]: New session 6 of user core. Oct 9 01:08:02.900229 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 9 01:08:02.949816 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 9 01:08:02.950323 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:08:02.953260 sudo[1588]: pam_unix(sudo:session): session closed for user root Oct 9 01:08:02.957625 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 9 01:08:02.957893 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:08:02.976496 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 9 01:08:02.999433 augenrules[1610]: No rules Oct 9 01:08:03.000357 systemd[1]: audit-rules.service: Deactivated successfully. Oct 9 01:08:03.000501 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 01:08:03.002449 sudo[1587]: pam_unix(sudo:session): session closed for user root Oct 9 01:08:03.004877 sshd[1584]: pam_unix(sshd:session): session closed for user core Oct 9 01:08:03.013260 systemd[1]: sshd@5-10.0.0.151:22-10.0.0.1:34540.service: Deactivated successfully. Oct 9 01:08:03.016354 systemd[1]: session-6.scope: Deactivated successfully. Oct 9 01:08:03.017695 systemd-logind[1421]: Session 6 logged out. Waiting for processes to exit. Oct 9 01:08:03.031537 systemd[1]: Started sshd@6-10.0.0.151:22-10.0.0.1:34556.service - OpenSSH per-connection server daemon (10.0.0.1:34556). Oct 9 01:08:03.035119 systemd-logind[1421]: Removed session 6. Oct 9 01:08:03.062751 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 34556 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:08:03.063907 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:08:03.068145 systemd-logind[1421]: New session 7 of user core. Oct 9 01:08:03.076448 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 9 01:08:03.127278 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 9 01:08:03.127553 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 9 01:08:03.435365 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 9 01:08:03.435467 (dockerd)[1642]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 9 01:08:03.669874 dockerd[1642]: time="2024-10-09T01:08:03.669812806Z" level=info msg="Starting up" Oct 9 01:08:03.808550 dockerd[1642]: time="2024-10-09T01:08:03.808418569Z" level=info msg="Loading containers: start." 
Oct 9 01:08:03.948276 kernel: Initializing XFRM netlink socket Oct 9 01:08:04.025746 systemd-networkd[1364]: docker0: Link UP Oct 9 01:08:04.057407 dockerd[1642]: time="2024-10-09T01:08:04.057349904Z" level=info msg="Loading containers: done." Oct 9 01:08:04.068653 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1755734826-merged.mount: Deactivated successfully. Oct 9 01:08:04.070435 dockerd[1642]: time="2024-10-09T01:08:04.070384136Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 9 01:08:04.070511 dockerd[1642]: time="2024-10-09T01:08:04.070477085Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Oct 9 01:08:04.070613 dockerd[1642]: time="2024-10-09T01:08:04.070585721Z" level=info msg="Daemon has completed initialization" Oct 9 01:08:04.099624 dockerd[1642]: time="2024-10-09T01:08:04.099576600Z" level=info msg="API listen on /run/docker.sock" Oct 9 01:08:04.099773 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 9 01:08:04.728303 containerd[1441]: time="2024-10-09T01:08:04.728266897Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\"" Oct 9 01:08:05.372753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4199479294.mount: Deactivated successfully. Oct 9 01:08:06.850683 containerd[1441]: time="2024-10-09T01:08:06.850635267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:06.851110 containerd[1441]: time="2024-10-09T01:08:06.851070672Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=32286060" Oct 9 01:08:06.851971 containerd[1441]: time="2024-10-09T01:08:06.851938914Z" level=info msg="ImageCreate event name:\"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:06.854786 containerd[1441]: time="2024-10-09T01:08:06.854759862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:06.856027 containerd[1441]: time="2024-10-09T01:08:06.855992487Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"32282858\" in 2.127687419s" Oct 9 01:08:06.856027 containerd[1441]: time="2024-10-09T01:08:06.856025767Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\"" Oct 9 01:08:06.874779 containerd[1441]: time="2024-10-09T01:08:06.874708589Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\"" Oct 9 01:08:08.249705 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 9 01:08:08.262269 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
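dockerd reports "API listen on /run/docker.sock" once initialization completes. A stdlib-only Go sketch, assuming that default socket path from the log, which pings the Engine API over the unix socket (GET /_ping returns "OK" from a healthy daemon):

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"time"
)

func main() {
	// Talk HTTP to the unix socket announced by dockerd above.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{Timeout: 2 * time.Second}).DialContext(ctx, "unix", "/run/docker.sock")
			},
		},
	}
	resp, err := client.Get("http://docker/_ping") // host part is ignored over the unix socket
	if err != nil {
		fmt.Println("docker API not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
}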
Oct 9 01:08:08.370521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:08:08.374100 (kubelet)[1915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:08:08.416952 kubelet[1915]: E1009 01:08:08.416889 1915 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:08:08.420571 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:08:08.420708 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:08:09.907791 containerd[1441]: time="2024-10-09T01:08:09.907725311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:09.908197 containerd[1441]: time="2024-10-09T01:08:09.908156786Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=29374206" Oct 9 01:08:09.909100 containerd[1441]: time="2024-10-09T01:08:09.909072433Z" level=info msg="ImageCreate event name:\"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:09.912102 containerd[1441]: time="2024-10-09T01:08:09.912063895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:09.913353 containerd[1441]: time="2024-10-09T01:08:09.913237982Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"30862018\" in 3.038490863s" Oct 9 01:08:09.913353 containerd[1441]: time="2024-10-09T01:08:09.913269385Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\"" Oct 9 01:08:09.931254 containerd[1441]: time="2024-10-09T01:08:09.931211777Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\"" Oct 9 01:08:10.936385 containerd[1441]: time="2024-10-09T01:08:10.936338777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:10.937291 containerd[1441]: time="2024-10-09T01:08:10.937068831Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=15751219" Oct 9 01:08:10.938084 containerd[1441]: time="2024-10-09T01:08:10.938013510Z" level=info msg="ImageCreate event name:\"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:10.941081 containerd[1441]: time="2024-10-09T01:08:10.941036889Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:10.942261 containerd[1441]: time="2024-10-09T01:08:10.942233269Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"17239049\" in 1.010982705s" Oct 9 01:08:10.942310 containerd[1441]: time="2024-10-09T01:08:10.942264587Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\"" Oct 9 01:08:10.960274 containerd[1441]: time="2024-10-09T01:08:10.960240052Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\"" Oct 9 01:08:11.916562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1562810119.mount: Deactivated successfully. Oct 9 01:08:12.286227 containerd[1441]: time="2024-10-09T01:08:12.286092210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:12.286533 containerd[1441]: time="2024-10-09T01:08:12.286498059Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=25254040" Oct 9 01:08:12.287425 containerd[1441]: time="2024-10-09T01:08:12.287392074Z" level=info msg="ImageCreate event name:\"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:12.289284 containerd[1441]: time="2024-10-09T01:08:12.289252039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:12.290099 containerd[1441]: time="2024-10-09T01:08:12.289976889Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"25253057\" in 1.329696493s" Oct 9 01:08:12.290099 containerd[1441]: time="2024-10-09T01:08:12.290010571Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\"" Oct 9 01:08:12.308118 containerd[1441]: time="2024-10-09T01:08:12.308085391Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 9 01:08:12.892748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount883050231.mount: Deactivated successfully. 
Oct 9 01:08:13.462230 containerd[1441]: time="2024-10-09T01:08:13.462177562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:13.463236 containerd[1441]: time="2024-10-09T01:08:13.462949984Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Oct 9 01:08:13.463838 containerd[1441]: time="2024-10-09T01:08:13.463807409Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:13.467041 containerd[1441]: time="2024-10-09T01:08:13.467002912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:13.470675 containerd[1441]: time="2024-10-09T01:08:13.470634768Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.162509205s" Oct 9 01:08:13.470675 containerd[1441]: time="2024-10-09T01:08:13.470674961Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Oct 9 01:08:13.489395 containerd[1441]: time="2024-10-09T01:08:13.489349787Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 9 01:08:13.966274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3796739484.mount: Deactivated successfully. 
Oct 9 01:08:13.969885 containerd[1441]: time="2024-10-09T01:08:13.969834727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:13.970976 containerd[1441]: time="2024-10-09T01:08:13.970774249Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Oct 9 01:08:13.971674 containerd[1441]: time="2024-10-09T01:08:13.971647910Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:13.974838 containerd[1441]: time="2024-10-09T01:08:13.974803658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:13.975588 containerd[1441]: time="2024-10-09T01:08:13.975473170Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 486.086373ms" Oct 9 01:08:13.975588 containerd[1441]: time="2024-10-09T01:08:13.975503374Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Oct 9 01:08:13.994512 containerd[1441]: time="2024-10-09T01:08:13.994441126Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Oct 9 01:08:14.528143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2240520811.mount: Deactivated successfully. Oct 9 01:08:16.449653 containerd[1441]: time="2024-10-09T01:08:16.449594694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:16.450669 containerd[1441]: time="2024-10-09T01:08:16.450620799Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Oct 9 01:08:16.451592 containerd[1441]: time="2024-10-09T01:08:16.451565466Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:16.455090 containerd[1441]: time="2024-10-09T01:08:16.454755319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:16.455989 containerd[1441]: time="2024-10-09T01:08:16.455961206Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.461485964s" Oct 9 01:08:16.455989 containerd[1441]: time="2024-10-09T01:08:16.455986997Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Oct 9 01:08:18.595165 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Oct 9 01:08:18.604221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:08:18.685769 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:08:18.689771 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 9 01:08:18.727261 kubelet[2145]: E1009 01:08:18.727212 2145 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 9 01:08:18.730243 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 9 01:08:18.730394 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 9 01:08:21.366450 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:08:21.375447 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:08:21.391308 systemd[1]: Reloading requested from client PID 2161 ('systemctl') (unit session-7.scope)... Oct 9 01:08:21.391323 systemd[1]: Reloading... Oct 9 01:08:21.444296 zram_generator::config[2200]: No configuration found. Oct 9 01:08:21.552530 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:08:21.603881 systemd[1]: Reloading finished in 212 ms. Oct 9 01:08:21.650645 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 9 01:08:21.650705 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 9 01:08:21.650904 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:08:21.653984 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:08:21.742178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:08:21.746020 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 01:08:21.787223 kubelet[2246]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:08:21.787223 kubelet[2246]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 01:08:21.787223 kubelet[2246]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 01:08:21.787542 kubelet[2246]: I1009 01:08:21.787267 2246 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 01:08:22.833234 kubelet[2246]: I1009 01:08:22.833195 2246 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 01:08:22.833234 kubelet[2246]: I1009 01:08:22.833223 2246 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 01:08:22.833569 kubelet[2246]: I1009 01:08:22.833429 2246 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 01:08:22.873248 kubelet[2246]: E1009 01:08:22.873220 2246 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.151:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.151:6443: connect: connection refused Oct 9 01:08:22.874271 kubelet[2246]: I1009 01:08:22.874199 2246 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:08:22.881722 kubelet[2246]: I1009 01:08:22.881696 2246 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 9 01:08:22.882686 kubelet[2246]: I1009 01:08:22.882651 2246 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 01:08:22.883046 kubelet[2246]: I1009 01:08:22.883022 2246 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 01:08:22.883141 kubelet[2246]: I1009 01:08:22.883050 2246 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 01:08:22.884215 kubelet[2246]: I1009 01:08:22.883076 2246 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 01:08:22.884378 kubelet[2246]: I1009 01:08:22.884349 2246 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:08:22.886784 kubelet[2246]: I1009 01:08:22.886755 2246 kubelet.go:396] "Attempting to sync node with API server" Oct 9 01:08:22.886822 kubelet[2246]: I1009 
01:08:22.886792 2246 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 01:08:22.887163 kubelet[2246]: I1009 01:08:22.887141 2246 kubelet.go:312] "Adding apiserver pod source" Oct 9 01:08:22.887187 kubelet[2246]: I1009 01:08:22.887171 2246 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 01:08:22.887417 kubelet[2246]: W1009 01:08:22.887333 2246 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 9 01:08:22.887417 kubelet[2246]: E1009 01:08:22.887391 2246 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 9 01:08:22.887781 kubelet[2246]: W1009 01:08:22.887749 2246 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.151:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 9 01:08:22.887861 kubelet[2246]: E1009 01:08:22.887850 2246 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.151:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 9 01:08:22.889074 kubelet[2246]: I1009 01:08:22.889042 2246 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 01:08:22.891753 kubelet[2246]: I1009 01:08:22.891732 2246 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 01:08:22.892319 kubelet[2246]: W1009 01:08:22.892290 2246 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
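All of the reflector warnings above reduce to the same condition: nothing is listening on 10.0.0.151:6443 yet, because the kube-apiserver static pod has not been started. A minimal Go probe, illustrative only and using the address from the log, that reports the same "connection refused" until the apiserver comes up:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the failing reflector calls above.
	const apiServer = "10.0.0.151:6443"
	conn, err := net.DialTimeout("tcp", apiServer, 2*time.Second)
	if err != nil {
		// Before the kube-apiserver static pod starts, this yields
		// "connect: connection refused", matching the kubelet warnings.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port open:", apiServer)
}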
Oct 9 01:08:22.893395 kubelet[2246]: I1009 01:08:22.893255 2246 server.go:1256] "Started kubelet" Oct 9 01:08:22.893800 kubelet[2246]: I1009 01:08:22.893780 2246 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 01:08:22.894839 kubelet[2246]: I1009 01:08:22.894274 2246 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 01:08:22.894839 kubelet[2246]: I1009 01:08:22.894355 2246 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 01:08:22.895384 kubelet[2246]: I1009 01:08:22.895360 2246 server.go:461] "Adding debug handlers to kubelet server" Oct 9 01:08:22.897455 kubelet[2246]: I1009 01:08:22.895730 2246 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 01:08:22.897455 kubelet[2246]: I1009 01:08:22.897173 2246 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 01:08:22.897455 kubelet[2246]: I1009 01:08:22.897254 2246 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 01:08:22.897455 kubelet[2246]: I1009 01:08:22.897328 2246 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 01:08:22.897671 kubelet[2246]: W1009 01:08:22.897628 2246 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 9 01:08:22.897708 kubelet[2246]: E1009 01:08:22.897678 2246 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 9 01:08:22.898228 kubelet[2246]: E1009 01:08:22.898200 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="200ms" Oct 9 01:08:22.898887 kubelet[2246]: I1009 01:08:22.898866 2246 factory.go:221] Registration of the systemd container factory successfully Oct 9 01:08:22.899610 kubelet[2246]: I1009 01:08:22.899012 2246 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 01:08:22.899610 kubelet[2246]: E1009 01:08:22.899037 2246 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 01:08:22.899610 kubelet[2246]: E1009 01:08:22.899579 2246 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.151:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.151:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17fca37a59e94be6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-10-09 01:08:22.893226982 +0000 UTC m=+1.143821263,LastTimestamp:2024-10-09 01:08:22.893226982 +0000 UTC m=+1.143821263,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 9 01:08:22.900786 kubelet[2246]: I1009 01:08:22.900353 2246 factory.go:221] Registration of the containerd container factory successfully Oct 9 01:08:22.910828 kubelet[2246]: I1009 01:08:22.910661 2246 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 01:08:22.912099 kubelet[2246]: I1009 01:08:22.912016 2246 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 01:08:22.912099 kubelet[2246]: I1009 01:08:22.912044 2246 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 01:08:22.912099 kubelet[2246]: I1009 01:08:22.912075 2246 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 01:08:22.912220 kubelet[2246]: E1009 01:08:22.912152 2246 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 01:08:22.914963 kubelet[2246]: I1009 01:08:22.914677 2246 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 01:08:22.914963 kubelet[2246]: I1009 01:08:22.914696 2246 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 01:08:22.914963 kubelet[2246]: I1009 01:08:22.914714 2246 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:08:22.915179 kubelet[2246]: W1009 01:08:22.915114 2246 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 9 01:08:22.915179 kubelet[2246]: E1009 01:08:22.915169 2246 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 9 01:08:22.999541 kubelet[2246]: I1009 01:08:22.999506 2246 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:08:23.002042 kubelet[2246]: E1009 01:08:23.002013 2246 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Oct 9 01:08:23.012292 kubelet[2246]: E1009 01:08:23.012264 2246 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 9 01:08:23.027756 kubelet[2246]: I1009 01:08:23.027707 2246 policy_none.go:49] "None 
policy: Start" Oct 9 01:08:23.028500 kubelet[2246]: I1009 01:08:23.028484 2246 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 01:08:23.028567 kubelet[2246]: I1009 01:08:23.028520 2246 state_mem.go:35] "Initializing new in-memory state store" Oct 9 01:08:23.033166 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 9 01:08:23.047765 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 9 01:08:23.050295 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 9 01:08:23.058688 kubelet[2246]: I1009 01:08:23.058663 2246 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 01:08:23.059029 kubelet[2246]: I1009 01:08:23.058933 2246 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 01:08:23.059979 kubelet[2246]: E1009 01:08:23.059960 2246 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 9 01:08:23.099185 kubelet[2246]: E1009 01:08:23.099160 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="400ms" Oct 9 01:08:23.203356 kubelet[2246]: I1009 01:08:23.203316 2246 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:08:23.203630 kubelet[2246]: E1009 01:08:23.203606 2246 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Oct 9 01:08:23.212799 kubelet[2246]: I1009 01:08:23.212770 2246 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 9 01:08:23.213500 kubelet[2246]: I1009 01:08:23.213476 2246 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 9 01:08:23.214194 kubelet[2246]: I1009 01:08:23.214089 2246 topology_manager.go:215] "Topology Admit Handler" podUID="9c2a75196a0427827978d3f774846c88" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 9 01:08:23.219048 systemd[1]: Created slice kubepods-burstable-podb21621a72929ad4d87bc59a877761c7f.slice - libcontainer container kubepods-burstable-podb21621a72929ad4d87bc59a877761c7f.slice. Oct 9 01:08:23.239752 systemd[1]: Created slice kubepods-burstable-podf13040d390753ac4a1fef67bb9676230.slice - libcontainer container kubepods-burstable-podf13040d390753ac4a1fef67bb9676230.slice. Oct 9 01:08:23.252547 systemd[1]: Created slice kubepods-burstable-pod9c2a75196a0427827978d3f774846c88.slice - libcontainer container kubepods-burstable-pod9c2a75196a0427827978d3f774846c88.slice. 
Oct 9 01:08:23.300333 kubelet[2246]: I1009 01:08:23.300296 2246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:08:23.300399 kubelet[2246]: I1009 01:08:23.300358 2246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:08:23.300439 kubelet[2246]: I1009 01:08:23.300399 2246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:08:23.300439 kubelet[2246]: I1009 01:08:23.300426 2246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:08:23.300481 kubelet[2246]: I1009 01:08:23.300450 2246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " pod="kube-system/kube-scheduler-localhost" Oct 9 01:08:23.300481 kubelet[2246]: I1009 01:08:23.300471 2246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c2a75196a0427827978d3f774846c88-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9c2a75196a0427827978d3f774846c88\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:08:23.300520 kubelet[2246]: I1009 01:08:23.300506 2246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c2a75196a0427827978d3f774846c88-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9c2a75196a0427827978d3f774846c88\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:08:23.300572 kubelet[2246]: I1009 01:08:23.300547 2246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c2a75196a0427827978d3f774846c88-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9c2a75196a0427827978d3f774846c88\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:08:23.300608 kubelet[2246]: I1009 01:08:23.300586 2246 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " 
pod="kube-system/kube-controller-manager-localhost" Oct 9 01:08:23.500006 kubelet[2246]: E1009 01:08:23.499917 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="800ms" Oct 9 01:08:23.541092 kubelet[2246]: E1009 01:08:23.541042 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:23.541632 containerd[1441]: time="2024-10-09T01:08:23.541587389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,}" Oct 9 01:08:23.551190 kubelet[2246]: E1009 01:08:23.551161 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:23.552150 containerd[1441]: time="2024-10-09T01:08:23.552112703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,}" Oct 9 01:08:23.554351 kubelet[2246]: E1009 01:08:23.554312 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:23.554644 containerd[1441]: time="2024-10-09T01:08:23.554605298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9c2a75196a0427827978d3f774846c88,Namespace:kube-system,Attempt:0,}" Oct 9 01:08:23.605288 kubelet[2246]: I1009 01:08:23.605261 2246 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:08:23.605557 kubelet[2246]: E1009 01:08:23.605532 2246 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Oct 9 01:08:23.835870 kubelet[2246]: W1009 01:08:23.835747 2246 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 9 01:08:23.835870 kubelet[2246]: E1009 01:08:23.835813 2246 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 9 01:08:24.043720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2162469538.mount: Deactivated successfully. 
Oct 9 01:08:24.048526 containerd[1441]: time="2024-10-09T01:08:24.048112068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:08:24.048697 containerd[1441]: time="2024-10-09T01:08:24.048660500Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 01:08:24.049231 containerd[1441]: time="2024-10-09T01:08:24.049201981Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:08:24.050189 containerd[1441]: time="2024-10-09T01:08:24.050158252Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:08:24.052323 containerd[1441]: time="2024-10-09T01:08:24.052268200Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:08:24.054095 containerd[1441]: time="2024-10-09T01:08:24.052931057Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 01:08:24.054095 containerd[1441]: time="2024-10-09T01:08:24.053673740Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Oct 9 01:08:24.055751 containerd[1441]: time="2024-10-09T01:08:24.055704822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:08:24.058112 containerd[1441]: time="2024-10-09T01:08:24.058073226Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 503.411522ms" Oct 9 01:08:24.059547 containerd[1441]: time="2024-10-09T01:08:24.059509010Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 517.840889ms" Oct 9 01:08:24.060239 containerd[1441]: time="2024-10-09T01:08:24.060199315Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 508.025693ms" Oct 9 01:08:24.210838 containerd[1441]: time="2024-10-09T01:08:24.210249618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:08:24.210838 containerd[1441]: time="2024-10-09T01:08:24.210423772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:08:24.210838 containerd[1441]: time="2024-10-09T01:08:24.210453178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:08:24.210838 containerd[1441]: time="2024-10-09T01:08:24.210543871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:08:24.213694 containerd[1441]: time="2024-10-09T01:08:24.213601340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:08:24.213766 containerd[1441]: time="2024-10-09T01:08:24.213295262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:08:24.213766 containerd[1441]: time="2024-10-09T01:08:24.213726912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:08:24.213766 containerd[1441]: time="2024-10-09T01:08:24.213743333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:08:24.213846 containerd[1441]: time="2024-10-09T01:08:24.213729349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:08:24.213874 containerd[1441]: time="2024-10-09T01:08:24.213829111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:08:24.213874 containerd[1441]: time="2024-10-09T01:08:24.213837102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:08:24.214414 containerd[1441]: time="2024-10-09T01:08:24.214334954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:08:24.229439 systemd[1]: Started cri-containerd-21f8f7455f720ccf343e491ec8d49cb770071a3e5b5ae40c59705d7c0dd0f3d5.scope - libcontainer container 21f8f7455f720ccf343e491ec8d49cb770071a3e5b5ae40c59705d7c0dd0f3d5. Oct 9 01:08:24.230527 kubelet[2246]: W1009 01:08:24.230463 2246 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.151:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 9 01:08:24.230527 kubelet[2246]: E1009 01:08:24.230532 2246 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.151:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 9 01:08:24.233952 systemd[1]: Started cri-containerd-85698d79571a0c085cbc2f3e5a8e567403fa0ef97629164204a0c8da2537881e.scope - libcontainer container 85698d79571a0c085cbc2f3e5a8e567403fa0ef97629164204a0c8da2537881e. 
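The recurring dns.go:153 "Nameserver limits exceeded" warnings interleaved above come from the kubelet trimming the host's resolv.conf to the number of nameservers it supports (three, matching the libc limit; the applied line keeps 1.1.1.1, 1.0.0.1 and 8.8.8.8). A rough sketch of that check, assuming a plain resolv.conf parser rather than the kubelet's real dns package:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the common libc/kubelet limit of three
// nameservers; anything beyond that is dropped with a warning.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: keeping %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}
```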
Oct 9 01:08:24.237524 systemd[1]: Started cri-containerd-52556e474892004dcf70fdfe53ca50cdcda1fbdf1c7170158095cb6fc2011747.scope - libcontainer container 52556e474892004dcf70fdfe53ca50cdcda1fbdf1c7170158095cb6fc2011747. Oct 9 01:08:24.264193 containerd[1441]: time="2024-10-09T01:08:24.263748048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9c2a75196a0427827978d3f774846c88,Namespace:kube-system,Attempt:0,} returns sandbox id \"85698d79571a0c085cbc2f3e5a8e567403fa0ef97629164204a0c8da2537881e\"" Oct 9 01:08:24.266295 kubelet[2246]: E1009 01:08:24.266258 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:24.267977 containerd[1441]: time="2024-10-09T01:08:24.267948328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f13040d390753ac4a1fef67bb9676230,Namespace:kube-system,Attempt:0,} returns sandbox id \"21f8f7455f720ccf343e491ec8d49cb770071a3e5b5ae40c59705d7c0dd0f3d5\"" Oct 9 01:08:24.269879 kubelet[2246]: E1009 01:08:24.269203 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:24.270455 containerd[1441]: time="2024-10-09T01:08:24.270422846Z" level=info msg="CreateContainer within sandbox \"85698d79571a0c085cbc2f3e5a8e567403fa0ef97629164204a0c8da2537881e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 01:08:24.271404 containerd[1441]: time="2024-10-09T01:08:24.271371286Z" level=info msg="CreateContainer within sandbox \"21f8f7455f720ccf343e491ec8d49cb770071a3e5b5ae40c59705d7c0dd0f3d5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 01:08:24.275213 containerd[1441]: time="2024-10-09T01:08:24.275170241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b21621a72929ad4d87bc59a877761c7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"52556e474892004dcf70fdfe53ca50cdcda1fbdf1c7170158095cb6fc2011747\"" Oct 9 01:08:24.276135 kubelet[2246]: E1009 01:08:24.276109 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:24.279361 containerd[1441]: time="2024-10-09T01:08:24.279310991Z" level=info msg="CreateContainer within sandbox \"52556e474892004dcf70fdfe53ca50cdcda1fbdf1c7170158095cb6fc2011747\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 01:08:24.298000 containerd[1441]: time="2024-10-09T01:08:24.297716219Z" level=info msg="CreateContainer within sandbox \"85698d79571a0c085cbc2f3e5a8e567403fa0ef97629164204a0c8da2537881e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8fda4de66b04bac2c803f3023a977dd99eb31fa3e2349df7b1e5239006faa69a\"" Oct 9 01:08:24.298925 containerd[1441]: time="2024-10-09T01:08:24.298826468Z" level=info msg="StartContainer for \"8fda4de66b04bac2c803f3023a977dd99eb31fa3e2349df7b1e5239006faa69a\"" Oct 9 01:08:24.301335 kubelet[2246]: E1009 01:08:24.301260 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="1.6s" Oct 9 
01:08:24.302693 containerd[1441]: time="2024-10-09T01:08:24.302609561Z" level=info msg="CreateContainer within sandbox \"21f8f7455f720ccf343e491ec8d49cb770071a3e5b5ae40c59705d7c0dd0f3d5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f200700cf990ee68aaa732fd333528cfac4f3720d03ff972164708957822f4c6\"" Oct 9 01:08:24.303273 containerd[1441]: time="2024-10-09T01:08:24.303221558Z" level=info msg="StartContainer for \"f200700cf990ee68aaa732fd333528cfac4f3720d03ff972164708957822f4c6\"" Oct 9 01:08:24.308417 containerd[1441]: time="2024-10-09T01:08:24.308315623Z" level=info msg="CreateContainer within sandbox \"52556e474892004dcf70fdfe53ca50cdcda1fbdf1c7170158095cb6fc2011747\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9397b027a93015cca7e15fdda259ca24459f65185e319c5e921e99b28dc3e173\"" Oct 9 01:08:24.309415 containerd[1441]: time="2024-10-09T01:08:24.308701607Z" level=info msg="StartContainer for \"9397b027a93015cca7e15fdda259ca24459f65185e319c5e921e99b28dc3e173\"" Oct 9 01:08:24.326268 systemd[1]: Started cri-containerd-8fda4de66b04bac2c803f3023a977dd99eb31fa3e2349df7b1e5239006faa69a.scope - libcontainer container 8fda4de66b04bac2c803f3023a977dd99eb31fa3e2349df7b1e5239006faa69a. Oct 9 01:08:24.329841 systemd[1]: Started cri-containerd-f200700cf990ee68aaa732fd333528cfac4f3720d03ff972164708957822f4c6.scope - libcontainer container f200700cf990ee68aaa732fd333528cfac4f3720d03ff972164708957822f4c6. Oct 9 01:08:24.342261 systemd[1]: Started cri-containerd-9397b027a93015cca7e15fdda259ca24459f65185e319c5e921e99b28dc3e173.scope - libcontainer container 9397b027a93015cca7e15fdda259ca24459f65185e319c5e921e99b28dc3e173. Oct 9 01:08:24.381280 containerd[1441]: time="2024-10-09T01:08:24.381223974Z" level=info msg="StartContainer for \"9397b027a93015cca7e15fdda259ca24459f65185e319c5e921e99b28dc3e173\" returns successfully" Oct 9 01:08:24.381494 containerd[1441]: time="2024-10-09T01:08:24.381294890Z" level=info msg="StartContainer for \"f200700cf990ee68aaa732fd333528cfac4f3720d03ff972164708957822f4c6\" returns successfully" Oct 9 01:08:24.381494 containerd[1441]: time="2024-10-09T01:08:24.381299884Z" level=info msg="StartContainer for \"8fda4de66b04bac2c803f3023a977dd99eb31fa3e2349df7b1e5239006faa69a\" returns successfully" Oct 9 01:08:24.415515 kubelet[2246]: I1009 01:08:24.414587 2246 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:08:24.415515 kubelet[2246]: E1009 01:08:24.414926 2246 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost" Oct 9 01:08:24.436637 kubelet[2246]: W1009 01:08:24.429970 2246 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 9 01:08:24.436637 kubelet[2246]: E1009 01:08:24.430034 2246 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 9 01:08:24.477021 kubelet[2246]: W1009 01:08:24.473538 2246 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 9 01:08:24.477021 kubelet[2246]: E1009 01:08:24.473597 2246 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused Oct 9 01:08:24.920000 kubelet[2246]: E1009 01:08:24.919907 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:24.921506 kubelet[2246]: E1009 01:08:24.921484 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:24.923710 kubelet[2246]: E1009 01:08:24.923690 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:25.924870 kubelet[2246]: E1009 01:08:25.924792 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:26.017021 kubelet[2246]: I1009 01:08:26.016983 2246 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:08:26.137612 kubelet[2246]: E1009 01:08:26.137577 2246 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 9 01:08:26.221616 kubelet[2246]: I1009 01:08:26.221509 2246 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 01:08:26.889229 kubelet[2246]: I1009 01:08:26.889171 2246 apiserver.go:52] "Watching apiserver" Oct 9 01:08:26.897726 kubelet[2246]: I1009 01:08:26.897685 2246 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 01:08:28.411134 kubelet[2246]: E1009 01:08:28.411105 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:28.873137 systemd[1]: Reloading requested from client PID 2527 ('systemctl') (unit session-7.scope)... Oct 9 01:08:28.873410 systemd[1]: Reloading... Oct 9 01:08:28.932451 kubelet[2246]: E1009 01:08:28.932424 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:28.940100 zram_generator::config[2567]: No configuration found. Oct 9 01:08:29.078086 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:08:29.139828 systemd[1]: Reloading finished in 266 ms. Oct 9 01:08:29.176639 kubelet[2246]: I1009 01:08:29.176595 2246 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:08:29.176680 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:08:29.194485 systemd[1]: kubelet.service: Deactivated successfully. 
Oct 9 01:08:29.194695 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:08:29.194735 systemd[1]: kubelet.service: Consumed 1.527s CPU time, 114.3M memory peak, 0B memory swap peak. Oct 9 01:08:29.205356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:08:29.292966 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:08:29.297291 (kubelet)[2608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 01:08:29.340565 kubelet[2608]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:08:29.340565 kubelet[2608]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 01:08:29.340565 kubelet[2608]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:08:29.340565 kubelet[2608]: I1009 01:08:29.340611 2608 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 01:08:29.344599 kubelet[2608]: I1009 01:08:29.344554 2608 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 01:08:29.344599 kubelet[2608]: I1009 01:08:29.344580 2608 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 01:08:29.345035 kubelet[2608]: I1009 01:08:29.344729 2608 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 01:08:29.346177 kubelet[2608]: I1009 01:08:29.346112 2608 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 01:08:29.347963 kubelet[2608]: I1009 01:08:29.347923 2608 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:08:29.354899 kubelet[2608]: I1009 01:08:29.354874 2608 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 01:08:29.355133 kubelet[2608]: I1009 01:08:29.355119 2608 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 01:08:29.355290 kubelet[2608]: I1009 01:08:29.355277 2608 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 01:08:29.355370 kubelet[2608]: I1009 01:08:29.355299 2608 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 01:08:29.355370 kubelet[2608]: I1009 01:08:29.355308 2608 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 01:08:29.355370 kubelet[2608]: I1009 01:08:29.355349 2608 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:08:29.355493 kubelet[2608]: I1009 01:08:29.355435 2608 kubelet.go:396] "Attempting to sync node with API server" Oct 9 01:08:29.355493 kubelet[2608]: I1009 01:08:29.355449 2608 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 01:08:29.355493 kubelet[2608]: I1009 01:08:29.355468 2608 kubelet.go:312] "Adding apiserver pod source" Oct 9 01:08:29.355493 kubelet[2608]: I1009 01:08:29.355491 2608 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 01:08:29.358569 kubelet[2608]: I1009 01:08:29.358552 2608 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 01:08:29.364073 kubelet[2608]: I1009 01:08:29.361993 2608 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 01:08:29.364073 kubelet[2608]: I1009 01:08:29.362405 2608 server.go:1256] "Started kubelet" Oct 9 01:08:29.364073 kubelet[2608]: I1009 01:08:29.363462 2608 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 01:08:29.364073 kubelet[2608]: I1009 01:08:29.363756 2608 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 01:08:29.364073 kubelet[2608]: I1009 01:08:29.363834 2608 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 01:08:29.364722 kubelet[2608]: I1009 
01:08:29.364701 2608 server.go:461] "Adding debug handlers to kubelet server" Oct 9 01:08:29.366783 kubelet[2608]: I1009 01:08:29.366757 2608 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 01:08:29.367712 kubelet[2608]: I1009 01:08:29.367257 2608 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 01:08:29.367712 kubelet[2608]: I1009 01:08:29.367366 2608 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 9 01:08:29.367712 kubelet[2608]: I1009 01:08:29.367488 2608 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 01:08:29.372066 kubelet[2608]: E1009 01:08:29.368735 2608 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 01:08:29.372066 kubelet[2608]: I1009 01:08:29.368787 2608 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 01:08:29.372242 kubelet[2608]: I1009 01:08:29.372216 2608 factory.go:221] Registration of the containerd container factory successfully Oct 9 01:08:29.372242 kubelet[2608]: I1009 01:08:29.372239 2608 factory.go:221] Registration of the systemd container factory successfully Oct 9 01:08:29.392977 kubelet[2608]: I1009 01:08:29.392899 2608 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 01:08:29.394903 kubelet[2608]: I1009 01:08:29.394728 2608 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 01:08:29.395822 kubelet[2608]: I1009 01:08:29.395594 2608 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 01:08:29.395822 kubelet[2608]: I1009 01:08:29.395628 2608 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 01:08:29.396515 kubelet[2608]: E1009 01:08:29.396481 2608 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 01:08:29.422038 kubelet[2608]: I1009 01:08:29.422015 2608 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 01:08:29.422038 kubelet[2608]: I1009 01:08:29.422034 2608 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 01:08:29.422160 kubelet[2608]: I1009 01:08:29.422051 2608 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:08:29.422216 kubelet[2608]: I1009 01:08:29.422204 2608 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 01:08:29.422251 kubelet[2608]: I1009 01:08:29.422228 2608 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 9 01:08:29.422251 kubelet[2608]: I1009 01:08:29.422235 2608 policy_none.go:49] "None policy: Start" Oct 9 01:08:29.422840 kubelet[2608]: I1009 01:08:29.422826 2608 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 01:08:29.422903 kubelet[2608]: I1009 01:08:29.422846 2608 state_mem.go:35] "Initializing new in-memory state store" Oct 9 01:08:29.422989 kubelet[2608]: I1009 01:08:29.422978 2608 state_mem.go:75] "Updated machine memory state" Oct 9 01:08:29.426322 kubelet[2608]: I1009 01:08:29.426298 2608 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 01:08:29.426595 kubelet[2608]: I1009 01:08:29.426516 2608 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 01:08:29.470201 kubelet[2608]: I1009 01:08:29.470178 2608 
kubelet_node_status.go:73] "Attempting to register node" node="localhost" Oct 9 01:08:29.478142 kubelet[2608]: I1009 01:08:29.478105 2608 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Oct 9 01:08:29.478223 kubelet[2608]: I1009 01:08:29.478181 2608 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Oct 9 01:08:29.497663 kubelet[2608]: I1009 01:08:29.497638 2608 topology_manager.go:215] "Topology Admit Handler" podUID="9c2a75196a0427827978d3f774846c88" podNamespace="kube-system" podName="kube-apiserver-localhost" Oct 9 01:08:29.497753 kubelet[2608]: I1009 01:08:29.497715 2608 topology_manager.go:215] "Topology Admit Handler" podUID="b21621a72929ad4d87bc59a877761c7f" podNamespace="kube-system" podName="kube-controller-manager-localhost" Oct 9 01:08:29.497795 kubelet[2608]: I1009 01:08:29.497780 2608 topology_manager.go:215] "Topology Admit Handler" podUID="f13040d390753ac4a1fef67bb9676230" podNamespace="kube-system" podName="kube-scheduler-localhost" Oct 9 01:08:29.503284 kubelet[2608]: E1009 01:08:29.502917 2608 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 9 01:08:29.569035 kubelet[2608]: I1009 01:08:29.569005 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:08:29.569110 kubelet[2608]: I1009 01:08:29.569045 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:08:29.569110 kubelet[2608]: I1009 01:08:29.569080 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c2a75196a0427827978d3f774846c88-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9c2a75196a0427827978d3f774846c88\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:08:29.569110 kubelet[2608]: I1009 01:08:29.569102 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c2a75196a0427827978d3f774846c88-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9c2a75196a0427827978d3f774846c88\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:08:29.569195 kubelet[2608]: I1009 01:08:29.569120 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:08:29.569195 kubelet[2608]: I1009 01:08:29.569139 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13040d390753ac4a1fef67bb9676230-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f13040d390753ac4a1fef67bb9676230\") " 
pod="kube-system/kube-scheduler-localhost" Oct 9 01:08:29.569195 kubelet[2608]: I1009 01:08:29.569157 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c2a75196a0427827978d3f774846c88-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9c2a75196a0427827978d3f774846c88\") " pod="kube-system/kube-apiserver-localhost" Oct 9 01:08:29.569195 kubelet[2608]: I1009 01:08:29.569175 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:08:29.569195 kubelet[2608]: I1009 01:08:29.569196 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b21621a72929ad4d87bc59a877761c7f-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b21621a72929ad4d87bc59a877761c7f\") " pod="kube-system/kube-controller-manager-localhost" Oct 9 01:08:29.803887 kubelet[2608]: E1009 01:08:29.803789 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:29.803986 kubelet[2608]: E1009 01:08:29.803895 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:29.804271 kubelet[2608]: E1009 01:08:29.804252 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:30.357622 kubelet[2608]: I1009 01:08:30.357577 2608 apiserver.go:52] "Watching apiserver" Oct 9 01:08:30.368194 kubelet[2608]: I1009 01:08:30.368145 2608 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 01:08:30.407330 kubelet[2608]: E1009 01:08:30.407042 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:30.407330 kubelet[2608]: E1009 01:08:30.407162 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:30.414734 kubelet[2608]: E1009 01:08:30.414655 2608 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 9 01:08:30.415142 kubelet[2608]: E1009 01:08:30.415120 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:30.433466 kubelet[2608]: I1009 01:08:30.433425 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.433365547 podStartE2EDuration="1.433365547s" podCreationTimestamp="2024-10-09 01:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-10-09 01:08:30.430999762 +0000 UTC m=+1.130467207" watchObservedRunningTime="2024-10-09 01:08:30.433365547 +0000 UTC m=+1.132832952" Oct 9 01:08:30.452914 kubelet[2608]: I1009 01:08:30.452876 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.452840256 podStartE2EDuration="2.452840256s" podCreationTimestamp="2024-10-09 01:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:08:30.452706761 +0000 UTC m=+1.152174206" watchObservedRunningTime="2024-10-09 01:08:30.452840256 +0000 UTC m=+1.152307701" Oct 9 01:08:30.453052 kubelet[2608]: I1009 01:08:30.452979 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.452963067 podStartE2EDuration="1.452963067s" podCreationTimestamp="2024-10-09 01:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:08:30.441894739 +0000 UTC m=+1.141362184" watchObservedRunningTime="2024-10-09 01:08:30.452963067 +0000 UTC m=+1.152430512" Oct 9 01:08:31.408558 kubelet[2608]: E1009 01:08:31.408531 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:32.818662 kubelet[2608]: E1009 01:08:32.818585 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:33.453988 sudo[1621]: pam_unix(sudo:session): session closed for user root Oct 9 01:08:33.456022 sshd[1618]: pam_unix(sshd:session): session closed for user core Oct 9 01:08:33.458756 systemd[1]: sshd@6-10.0.0.151:22-10.0.0.1:34556.service: Deactivated successfully. Oct 9 01:08:33.460378 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 01:08:33.460580 systemd[1]: session-7.scope: Consumed 7.149s CPU time, 189.2M memory peak, 0B memory swap peak. Oct 9 01:08:33.461598 systemd-logind[1421]: Session 7 logged out. Waiting for processes to exit. Oct 9 01:08:33.462809 systemd-logind[1421]: Removed session 7. Oct 9 01:08:34.978017 kubelet[2608]: E1009 01:08:34.977954 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:35.415884 kubelet[2608]: E1009 01:08:35.415818 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:38.652129 kubelet[2608]: E1009 01:08:38.651954 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:39.421265 kubelet[2608]: E1009 01:08:39.421197 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:41.133644 update_engine[1425]: I20241009 01:08:41.133579 1425 update_attempter.cc:509] Updating boot flags... 
Oct 9 01:08:41.156099 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2704) Oct 9 01:08:41.195080 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2706) Oct 9 01:08:42.835441 kubelet[2608]: E1009 01:08:42.835400 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:44.130361 kubelet[2608]: I1009 01:08:44.130313 2608 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 01:08:44.135531 containerd[1441]: time="2024-10-09T01:08:44.135483990Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 9 01:08:44.136429 kubelet[2608]: I1009 01:08:44.135668 2608 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 01:08:45.114620 kubelet[2608]: I1009 01:08:45.114574 2608 topology_manager.go:215] "Topology Admit Handler" podUID="7dca2522-09eb-4316-84cf-20fcd34f32d2" podNamespace="kube-system" podName="kube-proxy-ktksj" Oct 9 01:08:45.121989 systemd[1]: Created slice kubepods-besteffort-pod7dca2522_09eb_4316_84cf_20fcd34f32d2.slice - libcontainer container kubepods-besteffort-pod7dca2522_09eb_4316_84cf_20fcd34f32d2.slice. Oct 9 01:08:45.175376 kubelet[2608]: I1009 01:08:45.175335 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7dca2522-09eb-4316-84cf-20fcd34f32d2-xtables-lock\") pod \"kube-proxy-ktksj\" (UID: \"7dca2522-09eb-4316-84cf-20fcd34f32d2\") " pod="kube-system/kube-proxy-ktksj" Oct 9 01:08:45.175773 kubelet[2608]: I1009 01:08:45.175389 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7dca2522-09eb-4316-84cf-20fcd34f32d2-kube-proxy\") pod \"kube-proxy-ktksj\" (UID: \"7dca2522-09eb-4316-84cf-20fcd34f32d2\") " pod="kube-system/kube-proxy-ktksj" Oct 9 01:08:45.175773 kubelet[2608]: I1009 01:08:45.175414 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m44j\" (UniqueName: \"kubernetes.io/projected/7dca2522-09eb-4316-84cf-20fcd34f32d2-kube-api-access-4m44j\") pod \"kube-proxy-ktksj\" (UID: \"7dca2522-09eb-4316-84cf-20fcd34f32d2\") " pod="kube-system/kube-proxy-ktksj" Oct 9 01:08:45.175773 kubelet[2608]: I1009 01:08:45.175452 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7dca2522-09eb-4316-84cf-20fcd34f32d2-lib-modules\") pod \"kube-proxy-ktksj\" (UID: \"7dca2522-09eb-4316-84cf-20fcd34f32d2\") " pod="kube-system/kube-proxy-ktksj" Oct 9 01:08:45.231303 kubelet[2608]: I1009 01:08:45.231264 2608 topology_manager.go:215] "Topology Admit Handler" podUID="95d2955c-6253-461e-8ee4-e48febe31565" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-vk5v6" Oct 9 01:08:45.237371 systemd[1]: Created slice kubepods-besteffort-pod95d2955c_6253_461e_8ee4_e48febe31565.slice - libcontainer container kubepods-besteffort-pod95d2955c_6253_461e_8ee4_e48febe31565.slice. 
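Once the node object exists, the kubelet pushes the node's pod CIDR (192.168.0.0/24 above) down to the runtime. A tiny sketch of what that CIDR implies for pod IP allocation; the sample pod IP is made up for the containment check:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// CIDR as logged by kubelet_network.go / kuberuntime_manager.go.
	_, podCIDR, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := podCIDR.Mask.Size()
	fmt.Printf("pod CIDR %s leaves %d host bits (%d addresses)\n",
		podCIDR, bits-ones, 1<<(bits-ones))

	// Hypothetical pod IP, only to demonstrate the containment check.
	fmt.Println("192.168.0.17 in range:", podCIDR.Contains(net.ParseIP("192.168.0.17")))
}
```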
Oct 9 01:08:45.275677 kubelet[2608]: I1009 01:08:45.275640 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/95d2955c-6253-461e-8ee4-e48febe31565-var-lib-calico\") pod \"tigera-operator-5d56685c77-vk5v6\" (UID: \"95d2955c-6253-461e-8ee4-e48febe31565\") " pod="tigera-operator/tigera-operator-5d56685c77-vk5v6" Oct 9 01:08:45.275782 kubelet[2608]: I1009 01:08:45.275714 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn9gt\" (UniqueName: \"kubernetes.io/projected/95d2955c-6253-461e-8ee4-e48febe31565-kube-api-access-nn9gt\") pod \"tigera-operator-5d56685c77-vk5v6\" (UID: \"95d2955c-6253-461e-8ee4-e48febe31565\") " pod="tigera-operator/tigera-operator-5d56685c77-vk5v6" Oct 9 01:08:45.428501 kubelet[2608]: E1009 01:08:45.428406 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:45.429594 containerd[1441]: time="2024-10-09T01:08:45.429546899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ktksj,Uid:7dca2522-09eb-4316-84cf-20fcd34f32d2,Namespace:kube-system,Attempt:0,}" Oct 9 01:08:45.451494 containerd[1441]: time="2024-10-09T01:08:45.451291878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:08:45.451494 containerd[1441]: time="2024-10-09T01:08:45.451342607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:08:45.451494 containerd[1441]: time="2024-10-09T01:08:45.451352929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:08:45.451494 containerd[1441]: time="2024-10-09T01:08:45.451422383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:08:45.471210 systemd[1]: Started cri-containerd-4b6adffb5eae9b4fe6a34857fd8123661479a928592245b21d2fefa5beea567a.scope - libcontainer container 4b6adffb5eae9b4fe6a34857fd8123661479a928592245b21d2fefa5beea567a. 
Oct 9 01:08:45.490552 containerd[1441]: time="2024-10-09T01:08:45.490516824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ktksj,Uid:7dca2522-09eb-4316-84cf-20fcd34f32d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b6adffb5eae9b4fe6a34857fd8123661479a928592245b21d2fefa5beea567a\"" Oct 9 01:08:45.493792 kubelet[2608]: E1009 01:08:45.493766 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:45.496502 containerd[1441]: time="2024-10-09T01:08:45.496472038Z" level=info msg="CreateContainer within sandbox \"4b6adffb5eae9b4fe6a34857fd8123661479a928592245b21d2fefa5beea567a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 01:08:45.537140 containerd[1441]: time="2024-10-09T01:08:45.537005993Z" level=info msg="CreateContainer within sandbox \"4b6adffb5eae9b4fe6a34857fd8123661479a928592245b21d2fefa5beea567a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"296ffd9e0d9963a701428090f45fd9952116796d2d57f2410fc53b685878f393\"" Oct 9 01:08:45.537729 containerd[1441]: time="2024-10-09T01:08:45.537698805Z" level=info msg="StartContainer for \"296ffd9e0d9963a701428090f45fd9952116796d2d57f2410fc53b685878f393\"" Oct 9 01:08:45.540653 containerd[1441]: time="2024-10-09T01:08:45.540398799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-vk5v6,Uid:95d2955c-6253-461e-8ee4-e48febe31565,Namespace:tigera-operator,Attempt:0,}" Oct 9 01:08:45.559820 containerd[1441]: time="2024-10-09T01:08:45.558284163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:08:45.559820 containerd[1441]: time="2024-10-09T01:08:45.558351496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:08:45.559820 containerd[1441]: time="2024-10-09T01:08:45.558375461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:08:45.559820 containerd[1441]: time="2024-10-09T01:08:45.558451435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:08:45.563210 systemd[1]: Started cri-containerd-296ffd9e0d9963a701428090f45fd9952116796d2d57f2410fc53b685878f393.scope - libcontainer container 296ffd9e0d9963a701428090f45fd9952116796d2d57f2410fc53b685878f393. Oct 9 01:08:45.584333 systemd[1]: Started cri-containerd-9f397517c69b7d43fac7070afef3e681605c2dd7f407dc9ace7965f5f3a54b9a.scope - libcontainer container 9f397517c69b7d43fac7070afef3e681605c2dd7f407dc9ace7965f5f3a54b9a. 
Oct 9 01:08:45.606494 containerd[1441]: time="2024-10-09T01:08:45.606440010Z" level=info msg="StartContainer for \"296ffd9e0d9963a701428090f45fd9952116796d2d57f2410fc53b685878f393\" returns successfully" Oct 9 01:08:45.619581 containerd[1441]: time="2024-10-09T01:08:45.619544304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-vk5v6,Uid:95d2955c-6253-461e-8ee4-e48febe31565,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9f397517c69b7d43fac7070afef3e681605c2dd7f407dc9ace7965f5f3a54b9a\"" Oct 9 01:08:45.626774 containerd[1441]: time="2024-10-09T01:08:45.626741074Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 9 01:08:46.431815 kubelet[2608]: E1009 01:08:46.431786 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:46.544892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1013435693.mount: Deactivated successfully. Oct 9 01:08:46.905122 containerd[1441]: time="2024-10-09T01:08:46.904870697Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:46.905773 containerd[1441]: time="2024-10-09T01:08:46.905548501Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485947" Oct 9 01:08:46.906681 containerd[1441]: time="2024-10-09T01:08:46.906470468Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:46.908630 containerd[1441]: time="2024-10-09T01:08:46.908602856Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:46.909542 containerd[1441]: time="2024-10-09T01:08:46.909509940Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 1.2827307s" Oct 9 01:08:46.909542 containerd[1441]: time="2024-10-09T01:08:46.909539866Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\"" Oct 9 01:08:46.924387 containerd[1441]: time="2024-10-09T01:08:46.924351677Z" level=info msg="CreateContainer within sandbox \"9f397517c69b7d43fac7070afef3e681605c2dd7f407dc9ace7965f5f3a54b9a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 9 01:08:46.942703 containerd[1441]: time="2024-10-09T01:08:46.942659923Z" level=info msg="CreateContainer within sandbox \"9f397517c69b7d43fac7070afef3e681605c2dd7f407dc9ace7965f5f3a54b9a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fbc77fee8a9b8726ffe0fcab9f6641358399ce82ec0122ad11683ed9975a49fc\"" Oct 9 01:08:46.944096 containerd[1441]: time="2024-10-09T01:08:46.943225186Z" level=info msg="StartContainer for \"fbc77fee8a9b8726ffe0fcab9f6641358399ce82ec0122ad11683ed9975a49fc\"" Oct 9 01:08:46.972217 systemd[1]: Started cri-containerd-fbc77fee8a9b8726ffe0fcab9f6641358399ce82ec0122ad11683ed9975a49fc.scope 
- libcontainer container fbc77fee8a9b8726ffe0fcab9f6641358399ce82ec0122ad11683ed9975a49fc. Oct 9 01:08:47.017008 containerd[1441]: time="2024-10-09T01:08:47.016966775Z" level=info msg="StartContainer for \"fbc77fee8a9b8726ffe0fcab9f6641358399ce82ec0122ad11683ed9975a49fc\" returns successfully" Oct 9 01:08:47.449976 kubelet[2608]: I1009 01:08:47.449924 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ktksj" podStartSLOduration=2.449886481 podStartE2EDuration="2.449886481s" podCreationTimestamp="2024-10-09 01:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:08:46.441235859 +0000 UTC m=+17.140703304" watchObservedRunningTime="2024-10-09 01:08:47.449886481 +0000 UTC m=+18.149353926" Oct 9 01:08:50.366917 kubelet[2608]: I1009 01:08:50.366875 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-vk5v6" podStartSLOduration=4.076875754 podStartE2EDuration="5.366833978s" podCreationTimestamp="2024-10-09 01:08:45 +0000 UTC" firstStartedPulling="2024-10-09 01:08:45.620767577 +0000 UTC m=+16.320234982" lastFinishedPulling="2024-10-09 01:08:46.910725761 +0000 UTC m=+17.610193206" observedRunningTime="2024-10-09 01:08:47.452141592 +0000 UTC m=+18.151609037" watchObservedRunningTime="2024-10-09 01:08:50.366833978 +0000 UTC m=+21.066301423" Oct 9 01:08:50.367602 kubelet[2608]: I1009 01:08:50.367495 2608 topology_manager.go:215] "Topology Admit Handler" podUID="a1816586-6df3-4eaf-8384-b30cb280e51a" podNamespace="calico-system" podName="calico-typha-59595ccb94-kw2fk" Oct 9 01:08:50.379551 systemd[1]: Created slice kubepods-besteffort-poda1816586_6df3_4eaf_8384_b30cb280e51a.slice - libcontainer container kubepods-besteffort-poda1816586_6df3_4eaf_8384_b30cb280e51a.slice. 
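The tigera-operator pull above reports a size of 19480102 bytes fetched in about 1.28s. A quick back-of-the-envelope throughput calculation from those logged numbers (ignoring registry round-trips and decompression):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Numbers taken directly from the containerd "Pulled image" log line.
	const sizeBytes = 19480102
	pullTime := 1282730700 * time.Nanosecond // 1.2827307s

	mib := float64(sizeBytes) / (1 << 20)
	fmt.Printf("pulled %.1f MiB in %v (~%.1f MiB/s)\n",
		mib, pullTime, mib/pullTime.Seconds())
}
```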
Oct 9 01:08:50.405750 kubelet[2608]: I1009 01:08:50.405702 2608 topology_manager.go:215] "Topology Admit Handler" podUID="c18aa682-ded7-4d92-88df-b42cc3f139da" podNamespace="calico-system" podName="calico-node-gr8dp" Oct 9 01:08:50.407256 kubelet[2608]: W1009 01:08:50.407220 2608 reflector.go:539] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Oct 9 01:08:50.407444 kubelet[2608]: W1009 01:08:50.407333 2608 reflector.go:539] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Oct 9 01:08:50.409523 kubelet[2608]: E1009 01:08:50.409497 2608 reflector.go:147] object-"calico-system"/"node-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Oct 9 01:08:50.410188 kubelet[2608]: E1009 01:08:50.410094 2608 reflector.go:147] object-"calico-system"/"cni-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Oct 9 01:08:50.413322 systemd[1]: Created slice kubepods-besteffort-podc18aa682_ded7_4d92_88df_b42cc3f139da.slice - libcontainer container kubepods-besteffort-podc18aa682_ded7_4d92_88df_b42cc3f139da.slice. 
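The driver-call errors a little further below come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers while the calico-node pod (with its flexvol-driver-host mount) is being admitted: the nodeagent~uds binary is missing, the call produces no output, and decoding empty output as JSON fails with "unexpected end of JSON input". A reduced sketch of that decode step, with a simplified status struct standing in for the real driver protocol:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is a simplified stand-in for a FlexVolume driver
// response; the real protocol has more fields.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

func main() {
	// Path taken from the log; on this node the binary does not exist,
	// so the call fails and produces no output at all.
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		fmt.Println("driver call failed:", err)
	}

	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With empty output this is exactly "unexpected end of JSON input".
		fmt.Println("failed to unmarshal driver output:", err)
		return
	}
	fmt.Printf("driver status: %+v\n", st)
}
```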
Oct 9 01:08:50.513876 kubelet[2608]: I1009 01:08:50.513826 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c18aa682-ded7-4d92-88df-b42cc3f139da-lib-modules\") pod \"calico-node-gr8dp\" (UID: \"c18aa682-ded7-4d92-88df-b42cc3f139da\") " pod="calico-system/calico-node-gr8dp" Oct 9 01:08:50.513876 kubelet[2608]: I1009 01:08:50.513879 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c18aa682-ded7-4d92-88df-b42cc3f139da-node-certs\") pod \"calico-node-gr8dp\" (UID: \"c18aa682-ded7-4d92-88df-b42cc3f139da\") " pod="calico-system/calico-node-gr8dp" Oct 9 01:08:50.514028 kubelet[2608]: I1009 01:08:50.513904 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c18aa682-ded7-4d92-88df-b42cc3f139da-var-lib-calico\") pod \"calico-node-gr8dp\" (UID: \"c18aa682-ded7-4d92-88df-b42cc3f139da\") " pod="calico-system/calico-node-gr8dp" Oct 9 01:08:50.514028 kubelet[2608]: I1009 01:08:50.513967 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c18aa682-ded7-4d92-88df-b42cc3f139da-cni-net-dir\") pod \"calico-node-gr8dp\" (UID: \"c18aa682-ded7-4d92-88df-b42cc3f139da\") " pod="calico-system/calico-node-gr8dp" Oct 9 01:08:50.514093 kubelet[2608]: I1009 01:08:50.514072 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctgck\" (UniqueName: \"kubernetes.io/projected/a1816586-6df3-4eaf-8384-b30cb280e51a-kube-api-access-ctgck\") pod \"calico-typha-59595ccb94-kw2fk\" (UID: \"a1816586-6df3-4eaf-8384-b30cb280e51a\") " pod="calico-system/calico-typha-59595ccb94-kw2fk" Oct 9 01:08:50.514118 kubelet[2608]: I1009 01:08:50.514110 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c18aa682-ded7-4d92-88df-b42cc3f139da-policysync\") pod \"calico-node-gr8dp\" (UID: \"c18aa682-ded7-4d92-88df-b42cc3f139da\") " pod="calico-system/calico-node-gr8dp" Oct 9 01:08:50.514144 kubelet[2608]: I1009 01:08:50.514133 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c18aa682-ded7-4d92-88df-b42cc3f139da-tigera-ca-bundle\") pod \"calico-node-gr8dp\" (UID: \"c18aa682-ded7-4d92-88df-b42cc3f139da\") " pod="calico-system/calico-node-gr8dp" Oct 9 01:08:50.514535 kubelet[2608]: I1009 01:08:50.514170 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a1816586-6df3-4eaf-8384-b30cb280e51a-typha-certs\") pod \"calico-typha-59595ccb94-kw2fk\" (UID: \"a1816586-6df3-4eaf-8384-b30cb280e51a\") " pod="calico-system/calico-typha-59595ccb94-kw2fk" Oct 9 01:08:50.514535 kubelet[2608]: I1009 01:08:50.514207 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c18aa682-ded7-4d92-88df-b42cc3f139da-flexvol-driver-host\") pod \"calico-node-gr8dp\" (UID: \"c18aa682-ded7-4d92-88df-b42cc3f139da\") " pod="calico-system/calico-node-gr8dp" Oct 9 01:08:50.514535 kubelet[2608]: I1009 
01:08:50.514249 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1816586-6df3-4eaf-8384-b30cb280e51a-tigera-ca-bundle\") pod \"calico-typha-59595ccb94-kw2fk\" (UID: \"a1816586-6df3-4eaf-8384-b30cb280e51a\") " pod="calico-system/calico-typha-59595ccb94-kw2fk" Oct 9 01:08:50.514535 kubelet[2608]: I1009 01:08:50.514291 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c18aa682-ded7-4d92-88df-b42cc3f139da-xtables-lock\") pod \"calico-node-gr8dp\" (UID: \"c18aa682-ded7-4d92-88df-b42cc3f139da\") " pod="calico-system/calico-node-gr8dp" Oct 9 01:08:50.514535 kubelet[2608]: I1009 01:08:50.514326 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx8ct\" (UniqueName: \"kubernetes.io/projected/c18aa682-ded7-4d92-88df-b42cc3f139da-kube-api-access-kx8ct\") pod \"calico-node-gr8dp\" (UID: \"c18aa682-ded7-4d92-88df-b42cc3f139da\") " pod="calico-system/calico-node-gr8dp" Oct 9 01:08:50.514679 kubelet[2608]: I1009 01:08:50.514378 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c18aa682-ded7-4d92-88df-b42cc3f139da-var-run-calico\") pod \"calico-node-gr8dp\" (UID: \"c18aa682-ded7-4d92-88df-b42cc3f139da\") " pod="calico-system/calico-node-gr8dp" Oct 9 01:08:50.514679 kubelet[2608]: I1009 01:08:50.514447 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c18aa682-ded7-4d92-88df-b42cc3f139da-cni-bin-dir\") pod \"calico-node-gr8dp\" (UID: \"c18aa682-ded7-4d92-88df-b42cc3f139da\") " pod="calico-system/calico-node-gr8dp" Oct 9 01:08:50.514679 kubelet[2608]: I1009 01:08:50.514520 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c18aa682-ded7-4d92-88df-b42cc3f139da-cni-log-dir\") pod \"calico-node-gr8dp\" (UID: \"c18aa682-ded7-4d92-88df-b42cc3f139da\") " pod="calico-system/calico-node-gr8dp" Oct 9 01:08:50.520400 kubelet[2608]: I1009 01:08:50.520161 2608 topology_manager.go:215] "Topology Admit Handler" podUID="171922d5-d611-4e67-8c10-daef097d9ad9" podNamespace="calico-system" podName="csi-node-driver-w4q2j" Oct 9 01:08:50.521130 kubelet[2608]: E1009 01:08:50.521105 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4q2j" podUID="171922d5-d611-4e67-8c10-daef097d9ad9" Oct 9 01:08:50.619383 kubelet[2608]: E1009 01:08:50.618081 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.619383 kubelet[2608]: W1009 01:08:50.618116 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.619383 kubelet[2608]: E1009 01:08:50.619228 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.619749 kubelet[2608]: E1009 01:08:50.619581 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.619749 kubelet[2608]: W1009 01:08:50.619599 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.619749 kubelet[2608]: E1009 01:08:50.619645 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.620441 kubelet[2608]: E1009 01:08:50.620423 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.620939 kubelet[2608]: W1009 01:08:50.620591 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.620939 kubelet[2608]: E1009 01:08:50.620653 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.621964 kubelet[2608]: E1009 01:08:50.621432 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.621964 kubelet[2608]: W1009 01:08:50.621446 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.622215 kubelet[2608]: E1009 01:08:50.622123 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.622323 kubelet[2608]: E1009 01:08:50.622308 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.622381 kubelet[2608]: W1009 01:08:50.622370 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.622508 kubelet[2608]: E1009 01:08:50.622456 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.622700 kubelet[2608]: E1009 01:08:50.622617 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.622700 kubelet[2608]: W1009 01:08:50.622629 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.622700 kubelet[2608]: E1009 01:08:50.622683 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.622844 kubelet[2608]: E1009 01:08:50.622832 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.622910 kubelet[2608]: W1009 01:08:50.622898 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.623043 kubelet[2608]: E1009 01:08:50.623020 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.623255 kubelet[2608]: E1009 01:08:50.623166 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.623255 kubelet[2608]: W1009 01:08:50.623178 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.623341 kubelet[2608]: E1009 01:08:50.623286 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.623421 kubelet[2608]: E1009 01:08:50.623409 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.623478 kubelet[2608]: W1009 01:08:50.623467 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.623549 kubelet[2608]: E1009 01:08:50.623538 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.623787 kubelet[2608]: E1009 01:08:50.623773 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.623971 kubelet[2608]: W1009 01:08:50.623856 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.623971 kubelet[2608]: E1009 01:08:50.623880 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.624122 kubelet[2608]: E1009 01:08:50.624109 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.624173 kubelet[2608]: W1009 01:08:50.624163 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.624321 kubelet[2608]: E1009 01:08:50.624299 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.624502 kubelet[2608]: E1009 01:08:50.624488 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.624634 kubelet[2608]: W1009 01:08:50.624558 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.624828 kubelet[2608]: E1009 01:08:50.624747 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.624828 kubelet[2608]: W1009 01:08:50.624759 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.624905 kubelet[2608]: E1009 01:08:50.624874 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.624940 kubelet[2608]: E1009 01:08:50.624908 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.625021 kubelet[2608]: E1009 01:08:50.625008 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.625092 kubelet[2608]: W1009 01:08:50.625079 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.625288 kubelet[2608]: E1009 01:08:50.625275 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.625425 kubelet[2608]: W1009 01:08:50.625349 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.625518 kubelet[2608]: E1009 01:08:50.625506 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.625576 kubelet[2608]: W1009 01:08:50.625565 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.625749 kubelet[2608]: E1009 01:08:50.625724 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.625792 kubelet[2608]: E1009 01:08:50.625757 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.625792 kubelet[2608]: E1009 01:08:50.625773 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.626021 kubelet[2608]: E1009 01:08:50.625872 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.626021 kubelet[2608]: W1009 01:08:50.625884 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.626021 kubelet[2608]: E1009 01:08:50.625901 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.628824 kubelet[2608]: E1009 01:08:50.628805 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.628930 kubelet[2608]: W1009 01:08:50.628915 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.629049 kubelet[2608]: E1009 01:08:50.629014 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.629261 kubelet[2608]: E1009 01:08:50.629166 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.629261 kubelet[2608]: W1009 01:08:50.629177 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.629261 kubelet[2608]: E1009 01:08:50.629210 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.629506 kubelet[2608]: E1009 01:08:50.629493 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.629577 kubelet[2608]: W1009 01:08:50.629564 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.629908 kubelet[2608]: E1009 01:08:50.629784 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.629908 kubelet[2608]: E1009 01:08:50.629828 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.629908 kubelet[2608]: W1009 01:08:50.629834 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.630047 kubelet[2608]: E1009 01:08:50.630020 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.630195 kubelet[2608]: E1009 01:08:50.630180 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.630268 kubelet[2608]: W1009 01:08:50.630255 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.630394 kubelet[2608]: E1009 01:08:50.630383 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.630858 kubelet[2608]: E1009 01:08:50.630756 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.630858 kubelet[2608]: W1009 01:08:50.630769 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.630858 kubelet[2608]: E1009 01:08:50.630829 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.631405 kubelet[2608]: E1009 01:08:50.631388 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.631489 kubelet[2608]: W1009 01:08:50.631475 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.631965 kubelet[2608]: E1009 01:08:50.631942 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.633333 kubelet[2608]: E1009 01:08:50.633311 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.633333 kubelet[2608]: W1009 01:08:50.633328 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.633663 kubelet[2608]: E1009 01:08:50.633412 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.633707 kubelet[2608]: E1009 01:08:50.633534 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.633897 kubelet[2608]: W1009 01:08:50.633874 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.633961 kubelet[2608]: E1009 01:08:50.633935 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.634820 kubelet[2608]: E1009 01:08:50.634796 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.634820 kubelet[2608]: W1009 01:08:50.634812 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.634909 kubelet[2608]: E1009 01:08:50.634879 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.639412 kubelet[2608]: E1009 01:08:50.639381 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.639412 kubelet[2608]: W1009 01:08:50.639404 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.639506 kubelet[2608]: E1009 01:08:50.639459 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.640938 kubelet[2608]: E1009 01:08:50.640903 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.640938 kubelet[2608]: W1009 01:08:50.640924 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.641017 kubelet[2608]: E1009 01:08:50.640961 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.641227 kubelet[2608]: E1009 01:08:50.641200 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.641227 kubelet[2608]: W1009 01:08:50.641220 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.641421 kubelet[2608]: E1009 01:08:50.641394 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.642008 kubelet[2608]: E1009 01:08:50.641977 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.642049 kubelet[2608]: W1009 01:08:50.642016 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.642112 kubelet[2608]: E1009 01:08:50.642099 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.642288 kubelet[2608]: E1009 01:08:50.642274 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.642288 kubelet[2608]: W1009 01:08:50.642286 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.642381 kubelet[2608]: E1009 01:08:50.642364 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.642540 kubelet[2608]: E1009 01:08:50.642529 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.642561 kubelet[2608]: W1009 01:08:50.642541 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.642590 kubelet[2608]: E1009 01:08:50.642579 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.642743 kubelet[2608]: E1009 01:08:50.642731 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.642768 kubelet[2608]: W1009 01:08:50.642743 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.642788 kubelet[2608]: E1009 01:08:50.642779 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.642924 kubelet[2608]: E1009 01:08:50.642912 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.642924 kubelet[2608]: W1009 01:08:50.642923 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.643019 kubelet[2608]: E1009 01:08:50.642946 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.643130 kubelet[2608]: E1009 01:08:50.643118 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.643159 kubelet[2608]: W1009 01:08:50.643130 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.643159 kubelet[2608]: E1009 01:08:50.643154 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.643297 kubelet[2608]: E1009 01:08:50.643287 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.643317 kubelet[2608]: W1009 01:08:50.643298 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.643351 kubelet[2608]: E1009 01:08:50.643320 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.643456 kubelet[2608]: E1009 01:08:50.643446 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.643482 kubelet[2608]: W1009 01:08:50.643457 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.643508 kubelet[2608]: E1009 01:08:50.643480 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.643604 kubelet[2608]: E1009 01:08:50.643594 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.643625 kubelet[2608]: W1009 01:08:50.643604 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.643648 kubelet[2608]: E1009 01:08:50.643626 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.643781 kubelet[2608]: E1009 01:08:50.643769 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.643807 kubelet[2608]: W1009 01:08:50.643782 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.643827 kubelet[2608]: E1009 01:08:50.643807 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.643939 kubelet[2608]: E1009 01:08:50.643929 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.643964 kubelet[2608]: W1009 01:08:50.643939 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.644017 kubelet[2608]: E1009 01:08:50.644007 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.644127 kubelet[2608]: E1009 01:08:50.644113 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.644166 kubelet[2608]: W1009 01:08:50.644125 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.644224 kubelet[2608]: E1009 01:08:50.644213 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.644401 kubelet[2608]: E1009 01:08:50.644387 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.644401 kubelet[2608]: W1009 01:08:50.644398 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.644458 kubelet[2608]: E1009 01:08:50.644415 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.644589 kubelet[2608]: E1009 01:08:50.644577 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.644614 kubelet[2608]: W1009 01:08:50.644589 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.644640 kubelet[2608]: E1009 01:08:50.644630 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.644762 kubelet[2608]: E1009 01:08:50.644752 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.644790 kubelet[2608]: W1009 01:08:50.644762 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.644844 kubelet[2608]: E1009 01:08:50.644834 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.644970 kubelet[2608]: E1009 01:08:50.644961 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.644997 kubelet[2608]: W1009 01:08:50.644971 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.645020 kubelet[2608]: E1009 01:08:50.645015 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.645444 kubelet[2608]: E1009 01:08:50.645426 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.645478 kubelet[2608]: W1009 01:08:50.645443 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.645621 kubelet[2608]: E1009 01:08:50.645603 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.645684 kubelet[2608]: E1009 01:08:50.645673 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.645712 kubelet[2608]: W1009 01:08:50.645683 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.645789 kubelet[2608]: E1009 01:08:50.645776 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.646316 kubelet[2608]: E1009 01:08:50.646293 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.646359 kubelet[2608]: W1009 01:08:50.646317 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.646741 kubelet[2608]: E1009 01:08:50.646718 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.647114 kubelet[2608]: E1009 01:08:50.647097 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.647143 kubelet[2608]: W1009 01:08:50.647114 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.647781 kubelet[2608]: E1009 01:08:50.647702 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.647861 kubelet[2608]: E1009 01:08:50.647834 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.647904 kubelet[2608]: W1009 01:08:50.647863 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.647929 kubelet[2608]: E1009 01:08:50.647904 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.648105 kubelet[2608]: E1009 01:08:50.648089 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.648105 kubelet[2608]: W1009 01:08:50.648104 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.648171 kubelet[2608]: E1009 01:08:50.648146 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.648353 kubelet[2608]: E1009 01:08:50.648336 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.648353 kubelet[2608]: W1009 01:08:50.648352 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.648409 kubelet[2608]: E1009 01:08:50.648388 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.648638 kubelet[2608]: E1009 01:08:50.648600 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.648638 kubelet[2608]: W1009 01:08:50.648637 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.648843 kubelet[2608]: E1009 01:08:50.648827 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.649195 kubelet[2608]: E1009 01:08:50.649178 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.649195 kubelet[2608]: W1009 01:08:50.649194 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.649263 kubelet[2608]: E1009 01:08:50.649212 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.649492 kubelet[2608]: E1009 01:08:50.649475 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.649492 kubelet[2608]: W1009 01:08:50.649489 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.649596 kubelet[2608]: E1009 01:08:50.649574 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.649770 kubelet[2608]: E1009 01:08:50.649740 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.649770 kubelet[2608]: W1009 01:08:50.649753 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.649836 kubelet[2608]: E1009 01:08:50.649818 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.650003 kubelet[2608]: E1009 01:08:50.649988 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.650003 kubelet[2608]: W1009 01:08:50.650001 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.650112 kubelet[2608]: E1009 01:08:50.650043 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.650505 kubelet[2608]: E1009 01:08:50.650475 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.650571 kubelet[2608]: W1009 01:08:50.650496 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.650833 kubelet[2608]: E1009 01:08:50.650764 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.651628 kubelet[2608]: E1009 01:08:50.651088 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.651628 kubelet[2608]: W1009 01:08:50.651104 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.651628 kubelet[2608]: E1009 01:08:50.651170 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.651628 kubelet[2608]: E1009 01:08:50.651441 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.651628 kubelet[2608]: W1009 01:08:50.651453 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.651628 kubelet[2608]: E1009 01:08:50.651493 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.651801 kubelet[2608]: E1009 01:08:50.651658 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.651801 kubelet[2608]: W1009 01:08:50.651669 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.651801 kubelet[2608]: E1009 01:08:50.651682 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.653412 kubelet[2608]: E1009 01:08:50.653392 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.653474 kubelet[2608]: W1009 01:08:50.653423 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.653583 kubelet[2608]: E1009 01:08:50.653545 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.653731 kubelet[2608]: E1009 01:08:50.653715 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.653731 kubelet[2608]: W1009 01:08:50.653730 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.653885 kubelet[2608]: E1009 01:08:50.653830 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.654044 kubelet[2608]: E1009 01:08:50.654031 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.654120 kubelet[2608]: W1009 01:08:50.654103 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.654143 kubelet[2608]: E1009 01:08:50.654129 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.654336 kubelet[2608]: E1009 01:08:50.654320 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.654336 kubelet[2608]: W1009 01:08:50.654335 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.654403 kubelet[2608]: E1009 01:08:50.654347 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.654563 kubelet[2608]: E1009 01:08:50.654552 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.654595 kubelet[2608]: W1009 01:08:50.654567 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.654595 kubelet[2608]: E1009 01:08:50.654578 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.654750 kubelet[2608]: E1009 01:08:50.654739 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.654750 kubelet[2608]: W1009 01:08:50.654749 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.654809 kubelet[2608]: E1009 01:08:50.654759 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.654956 kubelet[2608]: E1009 01:08:50.654946 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.654956 kubelet[2608]: W1009 01:08:50.654956 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.655037 kubelet[2608]: E1009 01:08:50.654974 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.660884 kubelet[2608]: E1009 01:08:50.660811 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.660884 kubelet[2608]: W1009 01:08:50.660829 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.660884 kubelet[2608]: E1009 01:08:50.660846 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.685586 kubelet[2608]: E1009 01:08:50.685538 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:50.687108 containerd[1441]: time="2024-10-09T01:08:50.687069544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59595ccb94-kw2fk,Uid:a1816586-6df3-4eaf-8384-b30cb280e51a,Namespace:calico-system,Attempt:0,}" Oct 9 01:08:50.743872 containerd[1441]: time="2024-10-09T01:08:50.743771811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:08:50.743872 containerd[1441]: time="2024-10-09T01:08:50.743821539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:08:50.743872 containerd[1441]: time="2024-10-09T01:08:50.743832301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:08:50.744093 containerd[1441]: time="2024-10-09T01:08:50.743912153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:08:50.756848 kubelet[2608]: E1009 01:08:50.756769 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.756848 kubelet[2608]: W1009 01:08:50.756790 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.756848 kubelet[2608]: E1009 01:08:50.756811 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.756848 kubelet[2608]: I1009 01:08:50.756841 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/171922d5-d611-4e67-8c10-daef097d9ad9-registration-dir\") pod \"csi-node-driver-w4q2j\" (UID: \"171922d5-d611-4e67-8c10-daef097d9ad9\") " pod="calico-system/csi-node-driver-w4q2j" Oct 9 01:08:50.757364 kubelet[2608]: E1009 01:08:50.757029 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.757364 kubelet[2608]: W1009 01:08:50.757039 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.757364 kubelet[2608]: E1009 01:08:50.757070 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.757364 kubelet[2608]: I1009 01:08:50.757095 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lb8r\" (UniqueName: \"kubernetes.io/projected/171922d5-d611-4e67-8c10-daef097d9ad9-kube-api-access-9lb8r\") pod \"csi-node-driver-w4q2j\" (UID: \"171922d5-d611-4e67-8c10-daef097d9ad9\") " pod="calico-system/csi-node-driver-w4q2j" Oct 9 01:08:50.757364 kubelet[2608]: E1009 01:08:50.757324 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.757364 kubelet[2608]: W1009 01:08:50.757338 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.757364 kubelet[2608]: E1009 01:08:50.757364 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.757927 kubelet[2608]: E1009 01:08:50.757771 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.757927 kubelet[2608]: W1009 01:08:50.757794 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.757927 kubelet[2608]: E1009 01:08:50.757815 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.758129 kubelet[2608]: E1009 01:08:50.758113 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.758267 kubelet[2608]: W1009 01:08:50.758184 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.758267 kubelet[2608]: E1009 01:08:50.758204 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.758267 kubelet[2608]: I1009 01:08:50.758227 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/171922d5-d611-4e67-8c10-daef097d9ad9-varrun\") pod \"csi-node-driver-w4q2j\" (UID: \"171922d5-d611-4e67-8c10-daef097d9ad9\") " pod="calico-system/csi-node-driver-w4q2j" Oct 9 01:08:50.758626 kubelet[2608]: E1009 01:08:50.758609 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.758626 kubelet[2608]: W1009 01:08:50.758625 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.758725 kubelet[2608]: E1009 01:08:50.758658 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.758877 kubelet[2608]: E1009 01:08:50.758856 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.758877 kubelet[2608]: W1009 01:08:50.758875 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.758945 kubelet[2608]: E1009 01:08:50.758894 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.759232 kubelet[2608]: E1009 01:08:50.759129 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.759232 kubelet[2608]: W1009 01:08:50.759144 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.759232 kubelet[2608]: E1009 01:08:50.759161 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.759232 kubelet[2608]: I1009 01:08:50.759182 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/171922d5-d611-4e67-8c10-daef097d9ad9-kubelet-dir\") pod \"csi-node-driver-w4q2j\" (UID: \"171922d5-d611-4e67-8c10-daef097d9ad9\") " pod="calico-system/csi-node-driver-w4q2j" Oct 9 01:08:50.759397 kubelet[2608]: E1009 01:08:50.759377 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.759397 kubelet[2608]: W1009 01:08:50.759391 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.759397 kubelet[2608]: E1009 01:08:50.759408 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.759604 kubelet[2608]: E1009 01:08:50.759593 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.759604 kubelet[2608]: W1009 01:08:50.759604 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.759658 kubelet[2608]: E1009 01:08:50.759633 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.759684 kubelet[2608]: I1009 01:08:50.759668 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/171922d5-d611-4e67-8c10-daef097d9ad9-socket-dir\") pod \"csi-node-driver-w4q2j\" (UID: \"171922d5-d611-4e67-8c10-daef097d9ad9\") " pod="calico-system/csi-node-driver-w4q2j" Oct 9 01:08:50.759744 kubelet[2608]: E1009 01:08:50.759734 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.759744 kubelet[2608]: W1009 01:08:50.759744 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.759797 kubelet[2608]: E1009 01:08:50.759777 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.759911 kubelet[2608]: E1009 01:08:50.759900 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.759911 kubelet[2608]: W1009 01:08:50.759911 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.759972 kubelet[2608]: E1009 01:08:50.759927 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.760103 kubelet[2608]: E1009 01:08:50.760093 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.760103 kubelet[2608]: W1009 01:08:50.760102 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.760166 kubelet[2608]: E1009 01:08:50.760117 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.760249 kubelet[2608]: E1009 01:08:50.760240 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.760249 kubelet[2608]: W1009 01:08:50.760249 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.760307 kubelet[2608]: E1009 01:08:50.760258 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.760417 kubelet[2608]: E1009 01:08:50.760405 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.760417 kubelet[2608]: W1009 01:08:50.760416 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.760474 kubelet[2608]: E1009 01:08:50.760428 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.760576 kubelet[2608]: E1009 01:08:50.760566 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.760576 kubelet[2608]: W1009 01:08:50.760576 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.760644 kubelet[2608]: E1009 01:08:50.760586 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.763272 systemd[1]: Started cri-containerd-033930cb049b32ff0cc728cc77213ed82bcc53f734475cbf58636fcdd6403090.scope - libcontainer container 033930cb049b32ff0cc728cc77213ed82bcc53f734475cbf58636fcdd6403090. Oct 9 01:08:50.807085 containerd[1441]: time="2024-10-09T01:08:50.807004273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-59595ccb94-kw2fk,Uid:a1816586-6df3-4eaf-8384-b30cb280e51a,Namespace:calico-system,Attempt:0,} returns sandbox id \"033930cb049b32ff0cc728cc77213ed82bcc53f734475cbf58636fcdd6403090\"" Oct 9 01:08:50.807843 kubelet[2608]: E1009 01:08:50.807808 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:50.809376 containerd[1441]: time="2024-10-09T01:08:50.809343669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 01:08:50.861094 kubelet[2608]: E1009 01:08:50.861048 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.861094 kubelet[2608]: W1009 01:08:50.861081 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.861094 kubelet[2608]: E1009 01:08:50.861104 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.861615 kubelet[2608]: E1009 01:08:50.861282 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.861615 kubelet[2608]: W1009 01:08:50.861291 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.861615 kubelet[2608]: E1009 01:08:50.861304 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.861615 kubelet[2608]: E1009 01:08:50.861489 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.861615 kubelet[2608]: W1009 01:08:50.861498 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.861615 kubelet[2608]: E1009 01:08:50.861515 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.863390 kubelet[2608]: E1009 01:08:50.861670 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.863390 kubelet[2608]: W1009 01:08:50.861678 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.863390 kubelet[2608]: E1009 01:08:50.861693 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.863390 kubelet[2608]: E1009 01:08:50.861876 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.863390 kubelet[2608]: W1009 01:08:50.861900 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.863390 kubelet[2608]: E1009 01:08:50.861924 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.863390 kubelet[2608]: E1009 01:08:50.862094 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.863390 kubelet[2608]: W1009 01:08:50.862103 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.863390 kubelet[2608]: E1009 01:08:50.862120 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.863390 kubelet[2608]: E1009 01:08:50.862249 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.863626 kubelet[2608]: W1009 01:08:50.862257 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.863626 kubelet[2608]: E1009 01:08:50.862271 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.863626 kubelet[2608]: E1009 01:08:50.862484 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.863626 kubelet[2608]: W1009 01:08:50.862493 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.863626 kubelet[2608]: E1009 01:08:50.862510 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.863626 kubelet[2608]: E1009 01:08:50.862646 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.863626 kubelet[2608]: W1009 01:08:50.862652 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.863626 kubelet[2608]: E1009 01:08:50.862667 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.863626 kubelet[2608]: E1009 01:08:50.862791 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.863626 kubelet[2608]: W1009 01:08:50.862797 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.863843 kubelet[2608]: E1009 01:08:50.862811 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.863843 kubelet[2608]: E1009 01:08:50.862988 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.863843 kubelet[2608]: W1009 01:08:50.862995 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.863843 kubelet[2608]: E1009 01:08:50.863022 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.863843 kubelet[2608]: E1009 01:08:50.863154 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.863843 kubelet[2608]: W1009 01:08:50.863163 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.863843 kubelet[2608]: E1009 01:08:50.863185 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.863843 kubelet[2608]: E1009 01:08:50.863298 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.863843 kubelet[2608]: W1009 01:08:50.863305 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.863843 kubelet[2608]: E1009 01:08:50.863341 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.864051 kubelet[2608]: E1009 01:08:50.863456 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.864051 kubelet[2608]: W1009 01:08:50.863464 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.864051 kubelet[2608]: E1009 01:08:50.863477 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.864051 kubelet[2608]: E1009 01:08:50.863627 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.864051 kubelet[2608]: W1009 01:08:50.863635 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.864051 kubelet[2608]: E1009 01:08:50.863644 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.864051 kubelet[2608]: E1009 01:08:50.863824 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.864051 kubelet[2608]: W1009 01:08:50.863831 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.864051 kubelet[2608]: E1009 01:08:50.863842 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.864051 kubelet[2608]: E1009 01:08:50.863986 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.864271 kubelet[2608]: W1009 01:08:50.863993 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.864271 kubelet[2608]: E1009 01:08:50.864004 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.864271 kubelet[2608]: E1009 01:08:50.864196 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.864271 kubelet[2608]: W1009 01:08:50.864205 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.864271 kubelet[2608]: E1009 01:08:50.864215 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.864453 kubelet[2608]: E1009 01:08:50.864399 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.864453 kubelet[2608]: W1009 01:08:50.864406 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.864503 kubelet[2608]: E1009 01:08:50.864451 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.865417 kubelet[2608]: E1009 01:08:50.864571 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.865417 kubelet[2608]: W1009 01:08:50.864581 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.865417 kubelet[2608]: E1009 01:08:50.864620 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.865417 kubelet[2608]: E1009 01:08:50.864713 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.865417 kubelet[2608]: W1009 01:08:50.864720 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.865417 kubelet[2608]: E1009 01:08:50.864730 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.865417 kubelet[2608]: E1009 01:08:50.864865 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.865417 kubelet[2608]: W1009 01:08:50.864872 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.865417 kubelet[2608]: E1009 01:08:50.864892 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.865417 kubelet[2608]: E1009 01:08:50.865047 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.866553 kubelet[2608]: W1009 01:08:50.865063 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.866553 kubelet[2608]: E1009 01:08:50.865074 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:50.866553 kubelet[2608]: E1009 01:08:50.865629 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.866553 kubelet[2608]: W1009 01:08:50.865647 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.866553 kubelet[2608]: E1009 01:08:50.865669 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.866553 kubelet[2608]: E1009 01:08:50.865923 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.866553 kubelet[2608]: W1009 01:08:50.865934 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.866553 kubelet[2608]: E1009 01:08:50.865978 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.866553 kubelet[2608]: E1009 01:08:50.866195 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.866553 kubelet[2608]: W1009 01:08:50.866205 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.866749 kubelet[2608]: E1009 01:08:50.866217 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.876994 kubelet[2608]: E1009 01:08:50.876902 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.876994 kubelet[2608]: W1009 01:08:50.876922 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.876994 kubelet[2608]: E1009 01:08:50.876944 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:50.965522 kubelet[2608]: E1009 01:08:50.965466 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:50.965522 kubelet[2608]: W1009 01:08:50.965515 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:50.965680 kubelet[2608]: E1009 01:08:50.965540 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:51.066241 kubelet[2608]: E1009 01:08:51.066213 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:51.066241 kubelet[2608]: W1009 01:08:51.066235 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:51.066404 kubelet[2608]: E1009 01:08:51.066257 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:51.167298 kubelet[2608]: E1009 01:08:51.167193 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:51.167298 kubelet[2608]: W1009 01:08:51.167216 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:51.167298 kubelet[2608]: E1009 01:08:51.167237 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:51.267894 kubelet[2608]: E1009 01:08:51.267832 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:51.267894 kubelet[2608]: W1009 01:08:51.267855 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:51.267894 kubelet[2608]: E1009 01:08:51.267877 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:51.368849 kubelet[2608]: E1009 01:08:51.368769 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:51.368849 kubelet[2608]: W1009 01:08:51.368791 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:51.368849 kubelet[2608]: E1009 01:08:51.368810 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:51.469584 kubelet[2608]: E1009 01:08:51.469483 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:51.469584 kubelet[2608]: W1009 01:08:51.469504 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:51.469584 kubelet[2608]: E1009 01:08:51.469524 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:51.538368 kubelet[2608]: E1009 01:08:51.538339 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:51.538368 kubelet[2608]: W1009 01:08:51.538362 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:51.538514 kubelet[2608]: E1009 01:08:51.538383 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:51.616509 kubelet[2608]: E1009 01:08:51.616469 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:51.616974 containerd[1441]: time="2024-10-09T01:08:51.616918646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gr8dp,Uid:c18aa682-ded7-4d92-88df-b42cc3f139da,Namespace:calico-system,Attempt:0,}" Oct 9 01:08:51.668523 containerd[1441]: time="2024-10-09T01:08:51.668358550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:08:51.668523 containerd[1441]: time="2024-10-09T01:08:51.668451684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:08:51.668523 containerd[1441]: time="2024-10-09T01:08:51.668500451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:08:51.668833 containerd[1441]: time="2024-10-09T01:08:51.668614227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:08:51.687215 systemd[1]: Started cri-containerd-325dd9721db555e063cc4ca5460109abfcc9771bec3cfa794908bf35c6d82d75.scope - libcontainer container 325dd9721db555e063cc4ca5460109abfcc9771bec3cfa794908bf35c6d82d75. 
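The repeated "Failed to unmarshal output for command: init" / "executable file not found in $PATH" triplets above come from the kubelet's FlexVolume prober: it executes each driver binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/ with the argument init and expects a JSON status object on stdout. Because the nodeagent~uds/uds binary is not on disk yet, the output is empty, the JSON decode fails, and the kubelet keeps re-probing and logging the same triplet until the binary appears (in Calico's case it is presumably installed by the flexvol-driver init container started further down). As a hedged sketch of the call convention only, not Calico's actual driver, a minimal Go stub that would satisfy the init probe could look like this:

// flexvol-stub.go: minimal sketch of the FlexVolume driver call convention the
// kubelet probe above expects. Hypothetical illustration only; the real binary
// at nodeagent~uds/uds is shipped separately.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object a FlexVolume driver prints on stdout.
type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure" or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // only meaningful for "init"
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println(`{"status":"Failure","message":"no command given"}`)
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Tell the kubelet the driver is usable and does not implement attach/detach.
		out, _ := json.Marshal(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		fmt.Println(string(out))
	default:
		// Any verb this stub does not handle is reported as unsupported.
		out, _ := json.Marshal(driverStatus{Status: "Not supported", Message: "stub driver"})
		fmt.Println(string(out))
	}
}

A binary answering like this at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds would make the unmarshal errors above stop.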
Oct 9 01:08:51.718307 containerd[1441]: time="2024-10-09T01:08:51.718248269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gr8dp,Uid:c18aa682-ded7-4d92-88df-b42cc3f139da,Namespace:calico-system,Attempt:0,} returns sandbox id \"325dd9721db555e063cc4ca5460109abfcc9771bec3cfa794908bf35c6d82d75\"" Oct 9 01:08:51.718953 kubelet[2608]: E1009 01:08:51.718924 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:52.009325 containerd[1441]: time="2024-10-09T01:08:52.009274834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:52.010248 containerd[1441]: time="2024-10-09T01:08:52.010205524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Oct 9 01:08:52.011559 containerd[1441]: time="2024-10-09T01:08:52.011518268Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:52.013489 containerd[1441]: time="2024-10-09T01:08:52.013450859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:52.014416 containerd[1441]: time="2024-10-09T01:08:52.014290976Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 1.204913062s" Oct 9 01:08:52.014416 containerd[1441]: time="2024-10-09T01:08:52.014318940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Oct 9 01:08:52.015145 containerd[1441]: time="2024-10-09T01:08:52.014793647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 01:08:52.021358 containerd[1441]: time="2024-10-09T01:08:52.021322281Z" level=info msg="CreateContainer within sandbox \"033930cb049b32ff0cc728cc77213ed82bcc53f734475cbf58636fcdd6403090\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 01:08:52.031590 containerd[1441]: time="2024-10-09T01:08:52.031550073Z" level=info msg="CreateContainer within sandbox \"033930cb049b32ff0cc728cc77213ed82bcc53f734475cbf58636fcdd6403090\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c6eda92d4f677c4bb3ca1b555bc98c22b25a44b4ef45d2bcbe990cf916588d34\"" Oct 9 01:08:52.032274 containerd[1441]: time="2024-10-09T01:08:52.032242330Z" level=info msg="StartContainer for \"c6eda92d4f677c4bb3ca1b555bc98c22b25a44b4ef45d2bcbe990cf916588d34\"" Oct 9 01:08:52.056311 systemd[1]: Started cri-containerd-c6eda92d4f677c4bb3ca1b555bc98c22b25a44b4ef45d2bcbe990cf916588d34.scope - libcontainer container c6eda92d4f677c4bb3ca1b555bc98c22b25a44b4ef45d2bcbe990cf916588d34. 
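The recurring "Nameserver limits exceeded" entries are the kubelet's pod DNS configuration noting that the host resolv.conf lists more nameservers than it will pass into pods; it keeps the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and omits the rest, which is harmless but logged on every sandbox setup. A rough sketch of that clamping behaviour, assuming a limit of three resolvers as shown by the applied line in the log (an illustration, not the kubelet's own dns.go code):

// clampresolv.go: hedged sketch of clamping a resolv.conf to three nameservers,
// the behaviour behind the "Nameserver limits exceeded" warnings above.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // assumed limit; matches the three resolvers kept in the log

func clampNameservers(resolvConf string) []string {
	var kept []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			}
			// nameserver lines beyond the limit are omitted, which is what
			// the kubelet warns about in the dns.go entries above
		}
	}
	return kept
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(strings.Join(clampNameservers(conf), " ")) // prints: 1.1.1.1 1.0.0.1 8.8.8.8
}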
Oct 9 01:08:52.092531 containerd[1441]: time="2024-10-09T01:08:52.092492525Z" level=info msg="StartContainer for \"c6eda92d4f677c4bb3ca1b555bc98c22b25a44b4ef45d2bcbe990cf916588d34\" returns successfully" Oct 9 01:08:52.396529 kubelet[2608]: E1009 01:08:52.396480 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4q2j" podUID="171922d5-d611-4e67-8c10-daef097d9ad9" Oct 9 01:08:52.446462 kubelet[2608]: E1009 01:08:52.446421 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:52.457003 kubelet[2608]: I1009 01:08:52.456561 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-59595ccb94-kw2fk" podStartSLOduration=1.25098474 podStartE2EDuration="2.456521734s" podCreationTimestamp="2024-10-09 01:08:50 +0000 UTC" firstStartedPulling="2024-10-09 01:08:50.809078908 +0000 UTC m=+21.508546353" lastFinishedPulling="2024-10-09 01:08:52.014615902 +0000 UTC m=+22.714083347" observedRunningTime="2024-10-09 01:08:52.45635175 +0000 UTC m=+23.155819195" watchObservedRunningTime="2024-10-09 01:08:52.456521734 +0000 UTC m=+23.155989179" Oct 9 01:08:52.470417 kubelet[2608]: E1009 01:08:52.470391 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.470417 kubelet[2608]: W1009 01:08:52.470412 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.470417 kubelet[2608]: E1009 01:08:52.470433 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.470989 kubelet[2608]: E1009 01:08:52.470974 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.470989 kubelet[2608]: W1009 01:08:52.470990 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.471101 kubelet[2608]: E1009 01:08:52.471005 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.471508 kubelet[2608]: E1009 01:08:52.471494 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.471546 kubelet[2608]: W1009 01:08:52.471509 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.471546 kubelet[2608]: E1009 01:08:52.471522 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:52.472290 kubelet[2608]: E1009 01:08:52.472275 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.472328 kubelet[2608]: W1009 01:08:52.472292 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.472328 kubelet[2608]: E1009 01:08:52.472307 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.472525 kubelet[2608]: E1009 01:08:52.472512 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.472525 kubelet[2608]: W1009 01:08:52.472525 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.472603 kubelet[2608]: E1009 01:08:52.472538 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.472839 kubelet[2608]: E1009 01:08:52.472825 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.472881 kubelet[2608]: W1009 01:08:52.472839 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.472881 kubelet[2608]: E1009 01:08:52.472853 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.473077 kubelet[2608]: E1009 01:08:52.473046 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.473077 kubelet[2608]: W1009 01:08:52.473077 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.473148 kubelet[2608]: E1009 01:08:52.473091 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.473717 kubelet[2608]: E1009 01:08:52.473698 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.473759 kubelet[2608]: W1009 01:08:52.473720 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.473759 kubelet[2608]: E1009 01:08:52.473734 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:52.474507 kubelet[2608]: E1009 01:08:52.474491 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.474507 kubelet[2608]: W1009 01:08:52.474507 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.474595 kubelet[2608]: E1009 01:08:52.474525 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.474781 kubelet[2608]: E1009 01:08:52.474768 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.474781 kubelet[2608]: W1009 01:08:52.474779 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.474842 kubelet[2608]: E1009 01:08:52.474791 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.475010 kubelet[2608]: E1009 01:08:52.474998 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.475010 kubelet[2608]: W1009 01:08:52.475009 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.475054 kubelet[2608]: E1009 01:08:52.475021 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.475251 kubelet[2608]: E1009 01:08:52.475237 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.475251 kubelet[2608]: W1009 01:08:52.475250 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.475349 kubelet[2608]: E1009 01:08:52.475262 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.475486 kubelet[2608]: E1009 01:08:52.475474 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.475486 kubelet[2608]: W1009 01:08:52.475486 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.475546 kubelet[2608]: E1009 01:08:52.475500 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:52.475745 kubelet[2608]: E1009 01:08:52.475733 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.475745 kubelet[2608]: W1009 01:08:52.475745 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.475807 kubelet[2608]: E1009 01:08:52.475757 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.475992 kubelet[2608]: E1009 01:08:52.475980 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.475992 kubelet[2608]: W1009 01:08:52.475992 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.476041 kubelet[2608]: E1009 01:08:52.476004 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.476369 kubelet[2608]: E1009 01:08:52.476328 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.476545 kubelet[2608]: W1009 01:08:52.476371 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.476545 kubelet[2608]: E1009 01:08:52.476385 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.476833 kubelet[2608]: E1009 01:08:52.476814 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.476833 kubelet[2608]: W1009 01:08:52.476833 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.476894 kubelet[2608]: E1009 01:08:52.476856 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.477217 kubelet[2608]: E1009 01:08:52.477201 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.477217 kubelet[2608]: W1009 01:08:52.477215 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.477304 kubelet[2608]: E1009 01:08:52.477232 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:52.477456 kubelet[2608]: E1009 01:08:52.477445 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.477488 kubelet[2608]: W1009 01:08:52.477458 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.477488 kubelet[2608]: E1009 01:08:52.477470 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.477613 kubelet[2608]: E1009 01:08:52.477599 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.477613 kubelet[2608]: W1009 01:08:52.477611 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.477683 kubelet[2608]: E1009 01:08:52.477621 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.477752 kubelet[2608]: E1009 01:08:52.477743 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.477752 kubelet[2608]: W1009 01:08:52.477752 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.477868 kubelet[2608]: E1009 01:08:52.477831 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.477908 kubelet[2608]: E1009 01:08:52.477886 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.477908 kubelet[2608]: W1009 01:08:52.477895 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.477957 kubelet[2608]: E1009 01:08:52.477934 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.478034 kubelet[2608]: E1009 01:08:52.478024 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.478034 kubelet[2608]: W1009 01:08:52.478033 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.478166 kubelet[2608]: E1009 01:08:52.478052 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:52.478365 kubelet[2608]: E1009 01:08:52.478351 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.478395 kubelet[2608]: W1009 01:08:52.478365 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.478395 kubelet[2608]: E1009 01:08:52.478381 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.479482 kubelet[2608]: E1009 01:08:52.479463 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.479482 kubelet[2608]: W1009 01:08:52.479481 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.479559 kubelet[2608]: E1009 01:08:52.479502 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.479971 kubelet[2608]: E1009 01:08:52.479949 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.479971 kubelet[2608]: W1009 01:08:52.479967 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.480078 kubelet[2608]: E1009 01:08:52.479986 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.480248 kubelet[2608]: E1009 01:08:52.480235 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.480248 kubelet[2608]: W1009 01:08:52.480247 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.480376 kubelet[2608]: E1009 01:08:52.480307 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.480671 kubelet[2608]: E1009 01:08:52.480654 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.480671 kubelet[2608]: W1009 01:08:52.480670 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.480767 kubelet[2608]: E1009 01:08:52.480689 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:52.480921 kubelet[2608]: E1009 01:08:52.480908 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.480921 kubelet[2608]: W1009 01:08:52.480920 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.480979 kubelet[2608]: E1009 01:08:52.480937 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.481489 kubelet[2608]: E1009 01:08:52.481472 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.481489 kubelet[2608]: W1009 01:08:52.481488 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.481563 kubelet[2608]: E1009 01:08:52.481507 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.481799 kubelet[2608]: E1009 01:08:52.481784 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.481799 kubelet[2608]: W1009 01:08:52.481799 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.481865 kubelet[2608]: E1009 01:08:52.481819 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.482311 kubelet[2608]: E1009 01:08:52.482294 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.482311 kubelet[2608]: W1009 01:08:52.482311 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.482382 kubelet[2608]: E1009 01:08:52.482325 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:08:52.486434 kubelet[2608]: E1009 01:08:52.486418 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:08:52.486434 kubelet[2608]: W1009 01:08:52.486432 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:08:52.486512 kubelet[2608]: E1009 01:08:52.486447 2608 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:08:53.030978 containerd[1441]: time="2024-10-09T01:08:53.030926273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:53.031693 containerd[1441]: time="2024-10-09T01:08:53.031656531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Oct 9 01:08:53.032250 containerd[1441]: time="2024-10-09T01:08:53.032223567Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:53.034028 containerd[1441]: time="2024-10-09T01:08:53.033999006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:53.035075 containerd[1441]: time="2024-10-09T01:08:53.034735225Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.019905494s" Oct 9 01:08:53.035075 containerd[1441]: time="2024-10-09T01:08:53.034768070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Oct 9 01:08:53.053583 containerd[1441]: time="2024-10-09T01:08:53.037903411Z" level=info msg="CreateContainer within sandbox \"325dd9721db555e063cc4ca5460109abfcc9771bec3cfa794908bf35c6d82d75\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 01:08:53.066902 containerd[1441]: time="2024-10-09T01:08:53.066863947Z" level=info msg="CreateContainer within sandbox \"325dd9721db555e063cc4ca5460109abfcc9771bec3cfa794908bf35c6d82d75\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d113b4f8b2c1715faca0b187c3bd607677323137f6f215012e886c5b62833130\"" Oct 9 01:08:53.067499 containerd[1441]: time="2024-10-09T01:08:53.067459227Z" level=info msg="StartContainer for \"d113b4f8b2c1715faca0b187c3bd607677323137f6f215012e886c5b62833130\"" Oct 9 01:08:53.104488 systemd[1]: Started cri-containerd-d113b4f8b2c1715faca0b187c3bd607677323137f6f215012e886c5b62833130.scope - libcontainer container d113b4f8b2c1715faca0b187c3bd607677323137f6f215012e886c5b62833130. Oct 9 01:08:53.143655 containerd[1441]: time="2024-10-09T01:08:53.143601788Z" level=info msg="StartContainer for \"d113b4f8b2c1715faca0b187c3bd607677323137f6f215012e886c5b62833130\" returns successfully" Oct 9 01:08:53.168990 systemd[1]: cri-containerd-d113b4f8b2c1715faca0b187c3bd607677323137f6f215012e886c5b62833130.scope: Deactivated successfully. 
Oct 9 01:08:53.217817 containerd[1441]: time="2024-10-09T01:08:53.214249450Z" level=info msg="shim disconnected" id=d113b4f8b2c1715faca0b187c3bd607677323137f6f215012e886c5b62833130 namespace=k8s.io Oct 9 01:08:53.217817 containerd[1441]: time="2024-10-09T01:08:53.217816010Z" level=warning msg="cleaning up after shim disconnected" id=d113b4f8b2c1715faca0b187c3bd607677323137f6f215012e886c5b62833130 namespace=k8s.io Oct 9 01:08:53.217817 containerd[1441]: time="2024-10-09T01:08:53.217827892Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:08:53.453381 kubelet[2608]: I1009 01:08:53.452092 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:08:53.453381 kubelet[2608]: E1009 01:08:53.452701 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:53.453381 kubelet[2608]: E1009 01:08:53.452931 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:53.454921 containerd[1441]: time="2024-10-09T01:08:53.454573615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 01:08:53.632913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d113b4f8b2c1715faca0b187c3bd607677323137f6f215012e886c5b62833130-rootfs.mount: Deactivated successfully. Oct 9 01:08:54.396342 kubelet[2608]: E1009 01:08:54.396306 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4q2j" podUID="171922d5-d611-4e67-8c10-daef097d9ad9" Oct 9 01:08:56.396249 kubelet[2608]: E1009 01:08:56.396211 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4q2j" podUID="171922d5-d611-4e67-8c10-daef097d9ad9" Oct 9 01:08:57.308542 containerd[1441]: time="2024-10-09T01:08:57.308493960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:57.309120 containerd[1441]: time="2024-10-09T01:08:57.309035503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Oct 9 01:08:57.309735 containerd[1441]: time="2024-10-09T01:08:57.309688539Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:57.311844 containerd[1441]: time="2024-10-09T01:08:57.311809664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:08:57.313297 containerd[1441]: time="2024-10-09T01:08:57.313268433Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 3.858661534s" Oct 9 01:08:57.313378 containerd[1441]: time="2024-10-09T01:08:57.313296596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Oct 9 01:08:57.318174 containerd[1441]: time="2024-10-09T01:08:57.318107553Z" level=info msg="CreateContainer within sandbox \"325dd9721db555e063cc4ca5460109abfcc9771bec3cfa794908bf35c6d82d75\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 9 01:08:57.337445 containerd[1441]: time="2024-10-09T01:08:57.337401785Z" level=info msg="CreateContainer within sandbox \"325dd9721db555e063cc4ca5460109abfcc9771bec3cfa794908bf35c6d82d75\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ade5c01993468b3599397a3cbadc2299558c2bb600519ec689464d492bb92e77\"" Oct 9 01:08:57.338087 containerd[1441]: time="2024-10-09T01:08:57.338066022Z" level=info msg="StartContainer for \"ade5c01993468b3599397a3cbadc2299558c2bb600519ec689464d492bb92e77\"" Oct 9 01:08:57.369207 systemd[1]: Started cri-containerd-ade5c01993468b3599397a3cbadc2299558c2bb600519ec689464d492bb92e77.scope - libcontainer container ade5c01993468b3599397a3cbadc2299558c2bb600519ec689464d492bb92e77. Oct 9 01:08:57.392508 containerd[1441]: time="2024-10-09T01:08:57.392403269Z" level=info msg="StartContainer for \"ade5c01993468b3599397a3cbadc2299558c2bb600519ec689464d492bb92e77\" returns successfully" Oct 9 01:08:57.474753 kubelet[2608]: E1009 01:08:57.473221 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:57.935203 containerd[1441]: time="2024-10-09T01:08:57.935159547Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 9 01:08:57.937360 systemd[1]: cri-containerd-ade5c01993468b3599397a3cbadc2299558c2bb600519ec689464d492bb92e77.scope: Deactivated successfully. Oct 9 01:08:57.956252 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ade5c01993468b3599397a3cbadc2299558c2bb600519ec689464d492bb92e77-rootfs.mount: Deactivated successfully. 
Oct 9 01:08:57.964825 kubelet[2608]: I1009 01:08:57.964054 2608 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 01:08:57.987766 containerd[1441]: time="2024-10-09T01:08:57.987022388Z" level=info msg="shim disconnected" id=ade5c01993468b3599397a3cbadc2299558c2bb600519ec689464d492bb92e77 namespace=k8s.io Oct 9 01:08:57.987766 containerd[1441]: time="2024-10-09T01:08:57.987583253Z" level=warning msg="cleaning up after shim disconnected" id=ade5c01993468b3599397a3cbadc2299558c2bb600519ec689464d492bb92e77 namespace=k8s.io Oct 9 01:08:57.987766 containerd[1441]: time="2024-10-09T01:08:57.987592534Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:08:57.998697 kubelet[2608]: I1009 01:08:57.998667 2608 topology_manager.go:215] "Topology Admit Handler" podUID="7a1c35a3-8044-40a1-816a-48efa14a6135" podNamespace="calico-system" podName="calico-kube-controllers-6dd6bf9548-fflk7" Oct 9 01:08:58.006355 systemd[1]: Created slice kubepods-besteffort-pod7a1c35a3_8044_40a1_816a_48efa14a6135.slice - libcontainer container kubepods-besteffort-pod7a1c35a3_8044_40a1_816a_48efa14a6135.slice. Oct 9 01:08:58.010491 kubelet[2608]: I1009 01:08:58.010456 2608 topology_manager.go:215] "Topology Admit Handler" podUID="2834f708-e1ad-459c-8c3f-1cb2de9ef7de" podNamespace="kube-system" podName="coredns-76f75df574-cmfsl" Oct 9 01:08:58.010831 kubelet[2608]: I1009 01:08:58.010618 2608 topology_manager.go:215] "Topology Admit Handler" podUID="4782287b-1310-451d-8736-24d2e3baa8fe" podNamespace="kube-system" podName="coredns-76f75df574-2mdww" Oct 9 01:08:58.017556 systemd[1]: Created slice kubepods-burstable-pod2834f708_e1ad_459c_8c3f_1cb2de9ef7de.slice - libcontainer container kubepods-burstable-pod2834f708_e1ad_459c_8c3f_1cb2de9ef7de.slice. Oct 9 01:08:58.023253 systemd[1]: Created slice kubepods-burstable-pod4782287b_1310_451d_8736_24d2e3baa8fe.slice - libcontainer container kubepods-burstable-pod4782287b_1310_451d_8736_24d2e3baa8fe.slice. 
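The "Created slice" entries above also show the systemd unit naming kubelet uses for pod cgroups: the pod's QoS class (besteffort, burstable) becomes part of the slice name and the pod UID is embedded with its dashes replaced by underscores. A tiny sketch of that mapping, using an assumed helper name rather than kubelet code, checked against the UIDs that appear in this log:

    # Illustration of the pod-cgroup slice naming visible above.
    def pod_slice_name(pod_uid: str, qos_class: str = "besteffort") -> str:
        # Dashes in the UID become underscores; the QoS class prefixes the unit.
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    # Matches the calico-kube-controllers pod slice created in the log above.
    assert pod_slice_name("7a1c35a3-8044-40a1-816a-48efa14a6135") == \
        "kubepods-besteffort-pod7a1c35a3_8044_40a1_816a_48efa14a6135.slice"
    # Matches the coredns pod slice as well.
    assert pod_slice_name("2834f708-e1ad-459c-8c3f-1cb2de9ef7de", "burstable") == \
        "kubepods-burstable-pod2834f708_e1ad_459c_8c3f_1cb2de9ef7de.slice"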
Oct 9 01:08:58.111813 kubelet[2608]: I1009 01:08:58.111704 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2834f708-e1ad-459c-8c3f-1cb2de9ef7de-config-volume\") pod \"coredns-76f75df574-cmfsl\" (UID: \"2834f708-e1ad-459c-8c3f-1cb2de9ef7de\") " pod="kube-system/coredns-76f75df574-cmfsl" Oct 9 01:08:58.111813 kubelet[2608]: I1009 01:08:58.111757 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4782287b-1310-451d-8736-24d2e3baa8fe-config-volume\") pod \"coredns-76f75df574-2mdww\" (UID: \"4782287b-1310-451d-8736-24d2e3baa8fe\") " pod="kube-system/coredns-76f75df574-2mdww" Oct 9 01:08:58.111813 kubelet[2608]: I1009 01:08:58.111782 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk5br\" (UniqueName: \"kubernetes.io/projected/2834f708-e1ad-459c-8c3f-1cb2de9ef7de-kube-api-access-xk5br\") pod \"coredns-76f75df574-cmfsl\" (UID: \"2834f708-e1ad-459c-8c3f-1cb2de9ef7de\") " pod="kube-system/coredns-76f75df574-cmfsl" Oct 9 01:08:58.111992 kubelet[2608]: I1009 01:08:58.111841 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a1c35a3-8044-40a1-816a-48efa14a6135-tigera-ca-bundle\") pod \"calico-kube-controllers-6dd6bf9548-fflk7\" (UID: \"7a1c35a3-8044-40a1-816a-48efa14a6135\") " pod="calico-system/calico-kube-controllers-6dd6bf9548-fflk7" Oct 9 01:08:58.111992 kubelet[2608]: I1009 01:08:58.111910 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d52b\" (UniqueName: \"kubernetes.io/projected/4782287b-1310-451d-8736-24d2e3baa8fe-kube-api-access-4d52b\") pod \"coredns-76f75df574-2mdww\" (UID: \"4782287b-1310-451d-8736-24d2e3baa8fe\") " pod="kube-system/coredns-76f75df574-2mdww" Oct 9 01:08:58.111992 kubelet[2608]: I1009 01:08:58.111990 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psjpw\" (UniqueName: \"kubernetes.io/projected/7a1c35a3-8044-40a1-816a-48efa14a6135-kube-api-access-psjpw\") pod \"calico-kube-controllers-6dd6bf9548-fflk7\" (UID: \"7a1c35a3-8044-40a1-816a-48efa14a6135\") " pod="calico-system/calico-kube-controllers-6dd6bf9548-fflk7" Oct 9 01:08:58.310107 containerd[1441]: time="2024-10-09T01:08:58.309982042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dd6bf9548-fflk7,Uid:7a1c35a3-8044-40a1-816a-48efa14a6135,Namespace:calico-system,Attempt:0,}" Oct 9 01:08:58.322018 kubelet[2608]: E1009 01:08:58.321943 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:58.323397 containerd[1441]: time="2024-10-09T01:08:58.323274887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cmfsl,Uid:2834f708-e1ad-459c-8c3f-1cb2de9ef7de,Namespace:kube-system,Attempt:0,}" Oct 9 01:08:58.326512 kubelet[2608]: E1009 01:08:58.326205 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:58.327152 containerd[1441]: time="2024-10-09T01:08:58.326644144Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2mdww,Uid:4782287b-1310-451d-8736-24d2e3baa8fe,Namespace:kube-system,Attempt:0,}" Oct 9 01:08:58.435739 systemd[1]: Created slice kubepods-besteffort-pod171922d5_d611_4e67_8c10_daef097d9ad9.slice - libcontainer container kubepods-besteffort-pod171922d5_d611_4e67_8c10_daef097d9ad9.slice. Oct 9 01:08:58.452875 containerd[1441]: time="2024-10-09T01:08:58.452571651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w4q2j,Uid:171922d5-d611-4e67-8c10-daef097d9ad9,Namespace:calico-system,Attempt:0,}" Oct 9 01:08:58.488089 kubelet[2608]: E1009 01:08:58.483271 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:58.488853 containerd[1441]: time="2024-10-09T01:08:58.488809820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 9 01:08:58.677978 containerd[1441]: time="2024-10-09T01:08:58.677930827Z" level=error msg="Failed to destroy network for sandbox \"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:08:58.679105 containerd[1441]: time="2024-10-09T01:08:58.678393959Z" level=error msg="Failed to destroy network for sandbox \"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:08:58.679463 containerd[1441]: time="2024-10-09T01:08:58.679429714Z" level=error msg="encountered an error cleaning up failed sandbox \"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:08:58.679523 containerd[1441]: time="2024-10-09T01:08:58.679487161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dd6bf9548-fflk7,Uid:7a1c35a3-8044-40a1-816a-48efa14a6135,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:08:58.680475 containerd[1441]: time="2024-10-09T01:08:58.680443708Z" level=error msg="encountered an error cleaning up failed sandbox \"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:08:58.681142 containerd[1441]: time="2024-10-09T01:08:58.680493833Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2mdww,Uid:4782287b-1310-451d-8736-24d2e3baa8fe,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:08:58.681019 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af-shm.mount: Deactivated successfully. Oct 9 01:08:58.681124 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34-shm.mount: Deactivated successfully. Oct 9 01:08:58.682573 containerd[1441]: time="2024-10-09T01:08:58.682537301Z" level=error msg="Failed to destroy network for sandbox \"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:08:58.683731 kubelet[2608]: E1009 01:08:58.683698 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:08:58.683865 kubelet[2608]: E1009 01:08:58.683774 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dd6bf9548-fflk7" Oct 9 01:08:58.683865 kubelet[2608]: E1009 01:08:58.683797 2608 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dd6bf9548-fflk7" Oct 9 01:08:58.683865 kubelet[2608]: E1009 01:08:58.683862 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6dd6bf9548-fflk7_calico-system(7a1c35a3-8044-40a1-816a-48efa14a6135)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6dd6bf9548-fflk7_calico-system(7a1c35a3-8044-40a1-816a-48efa14a6135)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dd6bf9548-fflk7" podUID="7a1c35a3-8044-40a1-816a-48efa14a6135" Oct 9 01:08:58.684127 containerd[1441]: time="2024-10-09T01:08:58.683998785Z" level=error msg="Failed to destroy network for sandbox \"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:08:58.684273 containerd[1441]: time="2024-10-09T01:08:58.684157082Z" level=error msg="encountered an error cleaning up failed sandbox \"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:08:58.684273 containerd[1441]: time="2024-10-09T01:08:58.684205448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cmfsl,Uid:2834f708-e1ad-459c-8c3f-1cb2de9ef7de,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:08:58.684754 kubelet[2608]: E1009 01:08:58.684728 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:08:58.684794 kubelet[2608]: E1009 01:08:58.684779 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-2mdww" Oct 9 01:08:58.684832 kubelet[2608]: E1009 01:08:58.684799 2608 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-2mdww" Oct 9 01:08:58.684864 kubelet[2608]: E1009 01:08:58.684853 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-2mdww_kube-system(4782287b-1310-451d-8736-24d2e3baa8fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-2mdww_kube-system(4782287b-1310-451d-8736-24d2e3baa8fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-2mdww" podUID="4782287b-1310-451d-8736-24d2e3baa8fe" Oct 9 01:08:58.685039 kubelet[2608]: E1009 01:08:58.684984 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:08:58.685039 kubelet[2608]: E1009 01:08:58.685010 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-cmfsl" Oct 9 01:08:58.685039 kubelet[2608]: E1009 01:08:58.685027 2608 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-cmfsl" Oct 9 01:08:58.685281 kubelet[2608]: E1009 01:08:58.685102 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-cmfsl_kube-system(2834f708-e1ad-459c-8c3f-1cb2de9ef7de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-cmfsl_kube-system(2834f708-e1ad-459c-8c3f-1cb2de9ef7de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-cmfsl" podUID="2834f708-e1ad-459c-8c3f-1cb2de9ef7de" Oct 9 01:08:58.686186 containerd[1441]: time="2024-10-09T01:08:58.686124222Z" level=error msg="encountered an error cleaning up failed sandbox \"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:08:58.686186 containerd[1441]: time="2024-10-09T01:08:58.686174508Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w4q2j,Uid:171922d5-d611-4e67-8c10-daef097d9ad9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:08:58.686356 kubelet[2608]: E1009 01:08:58.686321 2608 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:08:58.686356 kubelet[2608]: E1009 
01:08:58.686358 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w4q2j" Oct 9 01:08:58.686441 kubelet[2608]: E1009 01:08:58.686376 2608 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w4q2j" Oct 9 01:08:58.686441 kubelet[2608]: E1009 01:08:58.686417 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w4q2j_calico-system(171922d5-d611-4e67-8c10-daef097d9ad9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w4q2j_calico-system(171922d5-d611-4e67-8c10-daef097d9ad9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w4q2j" podUID="171922d5-d611-4e67-8c10-daef097d9ad9" Oct 9 01:08:58.946440 kubelet[2608]: I1009 01:08:58.946320 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 9 01:08:58.947430 kubelet[2608]: E1009 01:08:58.947411 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:59.338581 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5-shm.mount: Deactivated successfully. Oct 9 01:08:59.338699 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5-shm.mount: Deactivated successfully. 
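Every sandbox failure in this stretch carries the same root cause string: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes through its /var/lib/calico hostPath mount once it is running, so RunPodSandbox and StopPodSandbox keep failing for the coredns, calico-kube-controllers and csi-node-driver pods until the calico-node image still being pulled above starts. A sketch of that precondition, in illustrative Python with an assumed function name (the real plugin is Go and does much more):

    # Sketch of the gate described by the errors above.
    NODENAME_FILE = "/var/lib/calico/nodename"

    def read_nodename(path: str = NODENAME_FILE) -> str:
        """Fail until calico/node has written its nodename file."""
        try:
            with open(path) as f:
                return f.read().strip()
        except FileNotFoundError:
            raise RuntimeError(
                f"stat {path}: no such file or directory: check that the "
                "calico/node container is running and has mounted /var/lib/calico/"
            )

    if __name__ == "__main__":
        try:
            print("calico nodename:", read_nodename())
        except RuntimeError as err:
            print(err)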
Oct 9 01:08:59.493168 kubelet[2608]: I1009 01:08:59.491515 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Oct 9 01:08:59.493538 containerd[1441]: time="2024-10-09T01:08:59.492865146Z" level=info msg="StopPodSandbox for \"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\"" Oct 9 01:08:59.503101 kubelet[2608]: I1009 01:08:59.502103 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Oct 9 01:08:59.505590 containerd[1441]: time="2024-10-09T01:08:59.505531833Z" level=info msg="StopPodSandbox for \"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\"" Oct 9 01:08:59.505990 kubelet[2608]: I1009 01:08:59.505953 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Oct 9 01:08:59.507286 containerd[1441]: time="2024-10-09T01:08:59.507252339Z" level=info msg="StopPodSandbox for \"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\"" Oct 9 01:08:59.513519 containerd[1441]: time="2024-10-09T01:08:59.513473331Z" level=info msg="Ensure that sandbox 998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34 in task-service has been cleanup successfully" Oct 9 01:08:59.514133 containerd[1441]: time="2024-10-09T01:08:59.514099278Z" level=info msg="Ensure that sandbox c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5 in task-service has been cleanup successfully" Oct 9 01:08:59.517091 containerd[1441]: time="2024-10-09T01:08:59.516794610Z" level=info msg="Ensure that sandbox 587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af in task-service has been cleanup successfully" Oct 9 01:08:59.526196 kubelet[2608]: E1009 01:08:59.525125 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:08:59.526196 kubelet[2608]: I1009 01:08:59.525170 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Oct 9 01:08:59.529234 containerd[1441]: time="2024-10-09T01:08:59.529189668Z" level=info msg="StopPodSandbox for \"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\"" Oct 9 01:08:59.532548 containerd[1441]: time="2024-10-09T01:08:59.532305764Z" level=info msg="Ensure that sandbox c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5 in task-service has been cleanup successfully" Oct 9 01:08:59.586347 containerd[1441]: time="2024-10-09T01:08:59.586266271Z" level=error msg="StopPodSandbox for \"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\" failed" error="failed to destroy network for sandbox \"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:08:59.586876 kubelet[2608]: E1009 01:08:59.586659 2608 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Oct 9 01:08:59.587138 kubelet[2608]: E1009 01:08:59.587120 2608 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34"} Oct 9 01:08:59.587683 kubelet[2608]: E1009 01:08:59.587281 2608 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a1c35a3-8044-40a1-816a-48efa14a6135\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:08:59.587683 kubelet[2608]: E1009 01:08:59.587321 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a1c35a3-8044-40a1-816a-48efa14a6135\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dd6bf9548-fflk7" podUID="7a1c35a3-8044-40a1-816a-48efa14a6135" Oct 9 01:08:59.603988 containerd[1441]: time="2024-10-09T01:08:59.602049855Z" level=error msg="StopPodSandbox for \"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\" failed" error="failed to destroy network for sandbox \"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:08:59.604515 containerd[1441]: time="2024-10-09T01:08:59.603896374Z" level=error msg="StopPodSandbox for \"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\" failed" error="failed to destroy network for sandbox \"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:08:59.604728 kubelet[2608]: E1009 01:08:59.604635 2608 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Oct 9 01:08:59.604728 kubelet[2608]: E1009 01:08:59.604694 2608 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af"} Oct 9 01:08:59.605199 kubelet[2608]: E1009 01:08:59.604899 2608 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Oct 9 01:08:59.605199 kubelet[2608]: E1009 01:08:59.604920 2608 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5"} Oct 9 01:08:59.605199 kubelet[2608]: E1009 01:08:59.604964 2608 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2834f708-e1ad-459c-8c3f-1cb2de9ef7de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:08:59.605199 kubelet[2608]: E1009 01:08:59.604996 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2834f708-e1ad-459c-8c3f-1cb2de9ef7de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-cmfsl" podUID="2834f708-e1ad-459c-8c3f-1cb2de9ef7de" Oct 9 01:08:59.605552 kubelet[2608]: E1009 01:08:59.605411 2608 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4782287b-1310-451d-8736-24d2e3baa8fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:08:59.605552 kubelet[2608]: E1009 01:08:59.605506 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4782287b-1310-451d-8736-24d2e3baa8fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-2mdww" podUID="4782287b-1310-451d-8736-24d2e3baa8fe" Oct 9 01:08:59.612187 containerd[1441]: time="2024-10-09T01:08:59.612132503Z" level=error msg="StopPodSandbox for \"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\" failed" error="failed to destroy network for sandbox \"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:08:59.612511 kubelet[2608]: E1009 
01:08:59.612343 2608 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Oct 9 01:08:59.612511 kubelet[2608]: E1009 01:08:59.612376 2608 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5"} Oct 9 01:08:59.612511 kubelet[2608]: E1009 01:08:59.612415 2608 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"171922d5-d611-4e67-8c10-daef097d9ad9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:08:59.612511 kubelet[2608]: E1009 01:08:59.612440 2608 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"171922d5-d611-4e67-8c10-daef097d9ad9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w4q2j" podUID="171922d5-d611-4e67-8c10-daef097d9ad9" Oct 9 01:09:00.108860 systemd[1]: Started sshd@7-10.0.0.151:22-10.0.0.1:52790.service - OpenSSH per-connection server daemon (10.0.0.1:52790). Oct 9 01:09:00.162071 sshd[3682]: Accepted publickey for core from 10.0.0.1 port 52790 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:09:00.163656 sshd[3682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:00.169033 systemd-logind[1421]: New session 8 of user core. Oct 9 01:09:00.176202 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 9 01:09:00.328355 sshd[3682]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:00.331487 systemd[1]: sshd@7-10.0.0.151:22-10.0.0.1:52790.service: Deactivated successfully. Oct 9 01:09:00.332966 systemd[1]: session-8.scope: Deactivated successfully. Oct 9 01:09:00.333936 systemd-logind[1421]: Session 8 logged out. Waiting for processes to exit. Oct 9 01:09:00.335413 systemd-logind[1421]: Removed session 8. Oct 9 01:09:01.306363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount246503215.mount: Deactivated successfully. 
Oct 9 01:09:01.565299 containerd[1441]: time="2024-10-09T01:09:01.565169558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Oct 9 01:09:01.568817 containerd[1441]: time="2024-10-09T01:09:01.568752401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 3.079756081s" Oct 9 01:09:01.568817 containerd[1441]: time="2024-10-09T01:09:01.568785644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Oct 9 01:09:01.575241 containerd[1441]: time="2024-10-09T01:09:01.575155488Z" level=info msg="CreateContainer within sandbox \"325dd9721db555e063cc4ca5460109abfcc9771bec3cfa794908bf35c6d82d75\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 01:09:01.586781 containerd[1441]: time="2024-10-09T01:09:01.586733340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:01.587493 containerd[1441]: time="2024-10-09T01:09:01.587457893Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:01.588012 containerd[1441]: time="2024-10-09T01:09:01.587981706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:01.590547 containerd[1441]: time="2024-10-09T01:09:01.590505362Z" level=info msg="CreateContainer within sandbox \"325dd9721db555e063cc4ca5460109abfcc9771bec3cfa794908bf35c6d82d75\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"031d0e0941e9e9c2cde8d8a4f979c2da4c86a5a4a51d5dcf68f3728ae08dde93\"" Oct 9 01:09:01.590943 containerd[1441]: time="2024-10-09T01:09:01.590918763Z" level=info msg="StartContainer for \"031d0e0941e9e9c2cde8d8a4f979c2da4c86a5a4a51d5dcf68f3728ae08dde93\"" Oct 9 01:09:01.651401 systemd[1]: Started cri-containerd-031d0e0941e9e9c2cde8d8a4f979c2da4c86a5a4a51d5dcf68f3728ae08dde93.scope - libcontainer container 031d0e0941e9e9c2cde8d8a4f979c2da4c86a5a4a51d5dcf68f3728ae08dde93. Oct 9 01:09:01.755128 containerd[1441]: time="2024-10-09T01:09:01.755071772Z" level=info msg="StartContainer for \"031d0e0941e9e9c2cde8d8a4f979c2da4c86a5a4a51d5dcf68f3728ae08dde93\" returns successfully" Oct 9 01:09:01.857236 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 01:09:01.857473 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 9 01:09:02.538377 kubelet[2608]: E1009 01:09:02.538350 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:02.550024 kubelet[2608]: I1009 01:09:02.549990 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-gr8dp" podStartSLOduration=2.700392604 podStartE2EDuration="12.549957546s" podCreationTimestamp="2024-10-09 01:08:50 +0000 UTC" firstStartedPulling="2024-10-09 01:08:51.719476008 +0000 UTC m=+22.418943453" lastFinishedPulling="2024-10-09 01:09:01.56904095 +0000 UTC m=+32.268508395" observedRunningTime="2024-10-09 01:09:02.548976449 +0000 UTC m=+33.248443894" watchObservedRunningTime="2024-10-09 01:09:02.549957546 +0000 UTC m=+33.249424991" Oct 9 01:09:03.344086 kernel: bpftool[3919]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 01:09:03.520300 systemd-networkd[1364]: vxlan.calico: Link UP Oct 9 01:09:03.520309 systemd-networkd[1364]: vxlan.calico: Gained carrier Oct 9 01:09:03.541507 kubelet[2608]: E1009 01:09:03.540368 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:05.199004 systemd-networkd[1364]: vxlan.calico: Gained IPv6LL Oct 9 01:09:05.360358 systemd[1]: Started sshd@8-10.0.0.151:22-10.0.0.1:36010.service - OpenSSH per-connection server daemon (10.0.0.1:36010). Oct 9 01:09:05.398189 sshd[4016]: Accepted publickey for core from 10.0.0.1 port 36010 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:09:05.400430 sshd[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:05.404128 systemd-logind[1421]: New session 9 of user core. Oct 9 01:09:05.409220 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 9 01:09:05.531768 sshd[4016]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:05.535319 systemd[1]: sshd@8-10.0.0.151:22-10.0.0.1:36010.service: Deactivated successfully. Oct 9 01:09:05.536959 systemd[1]: session-9.scope: Deactivated successfully. Oct 9 01:09:05.538981 systemd-logind[1421]: Session 9 logged out. Waiting for processes to exit. Oct 9 01:09:05.539960 systemd-logind[1421]: Removed session 9. Oct 9 01:09:10.397261 containerd[1441]: time="2024-10-09T01:09:10.397206173Z" level=info msg="StopPodSandbox for \"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\"" Oct 9 01:09:10.397990 containerd[1441]: time="2024-10-09T01:09:10.397213173Z" level=info msg="StopPodSandbox for \"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\"" Oct 9 01:09:10.568376 systemd[1]: Started sshd@9-10.0.0.151:22-10.0.0.1:36024.service - OpenSSH per-connection server daemon (10.0.0.1:36024). Oct 9 01:09:10.640159 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 36024 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:09:10.643160 sshd[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:10.646015 containerd[1441]: 2024-10-09 01:09:10.479 [INFO][4070] k8s.go 608: Cleaning up netns ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Oct 9 01:09:10.646015 containerd[1441]: 2024-10-09 01:09:10.479 [INFO][4070] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" iface="eth0" netns="/var/run/netns/cni-a4fb6c60-7344-124a-d877-085ff3c4ddd5" Oct 9 01:09:10.646015 containerd[1441]: 2024-10-09 01:09:10.480 [INFO][4070] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" iface="eth0" netns="/var/run/netns/cni-a4fb6c60-7344-124a-d877-085ff3c4ddd5" Oct 9 01:09:10.646015 containerd[1441]: 2024-10-09 01:09:10.482 [INFO][4070] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" iface="eth0" netns="/var/run/netns/cni-a4fb6c60-7344-124a-d877-085ff3c4ddd5" Oct 9 01:09:10.646015 containerd[1441]: 2024-10-09 01:09:10.482 [INFO][4070] k8s.go 615: Releasing IP address(es) ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Oct 9 01:09:10.646015 containerd[1441]: 2024-10-09 01:09:10.482 [INFO][4070] utils.go 188: Calico CNI releasing IP address ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Oct 9 01:09:10.646015 containerd[1441]: 2024-10-09 01:09:10.619 [INFO][4085] ipam_plugin.go 417: Releasing address using handleID ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" HandleID="k8s-pod-network.998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Workload="localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0" Oct 9 01:09:10.646015 containerd[1441]: 2024-10-09 01:09:10.619 [INFO][4085] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:09:10.646015 containerd[1441]: 2024-10-09 01:09:10.619 [INFO][4085] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:09:10.646015 containerd[1441]: 2024-10-09 01:09:10.634 [WARNING][4085] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" HandleID="k8s-pod-network.998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Workload="localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0" Oct 9 01:09:10.646015 containerd[1441]: 2024-10-09 01:09:10.634 [INFO][4085] ipam_plugin.go 445: Releasing address using workloadID ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" HandleID="k8s-pod-network.998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Workload="localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0" Oct 9 01:09:10.646015 containerd[1441]: 2024-10-09 01:09:10.636 [INFO][4085] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:09:10.646015 containerd[1441]: 2024-10-09 01:09:10.641 [INFO][4070] k8s.go 621: Teardown processing complete. ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Oct 9 01:09:10.646784 containerd[1441]: time="2024-10-09T01:09:10.646732278Z" level=info msg="TearDown network for sandbox \"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\" successfully" Oct 9 01:09:10.646784 containerd[1441]: time="2024-10-09T01:09:10.646766321Z" level=info msg="StopPodSandbox for \"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\" returns successfully" Oct 9 01:09:10.652306 systemd[1]: run-netns-cni\x2da4fb6c60\x2d7344\x2d124a\x2dd877\x2d085ff3c4ddd5.mount: Deactivated successfully. 
Oct 9 01:09:10.652739 containerd[1441]: time="2024-10-09T01:09:10.652711193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dd6bf9548-fflk7,Uid:7a1c35a3-8044-40a1-816a-48efa14a6135,Namespace:calico-system,Attempt:1,}" Oct 9 01:09:10.656380 systemd-logind[1421]: New session 10 of user core. Oct 9 01:09:10.661427 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 9 01:09:10.661908 kubelet[2608]: E1009 01:09:10.659097 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:10.662168 containerd[1441]: 2024-10-09 01:09:10.478 [INFO][4069] k8s.go 608: Cleaning up netns ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Oct 9 01:09:10.662168 containerd[1441]: 2024-10-09 01:09:10.478 [INFO][4069] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" iface="eth0" netns="/var/run/netns/cni-d5884667-445d-a4ab-ba89-e5afdfd4471b" Oct 9 01:09:10.662168 containerd[1441]: 2024-10-09 01:09:10.479 [INFO][4069] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" iface="eth0" netns="/var/run/netns/cni-d5884667-445d-a4ab-ba89-e5afdfd4471b" Oct 9 01:09:10.662168 containerd[1441]: 2024-10-09 01:09:10.479 [INFO][4069] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" iface="eth0" netns="/var/run/netns/cni-d5884667-445d-a4ab-ba89-e5afdfd4471b" Oct 9 01:09:10.662168 containerd[1441]: 2024-10-09 01:09:10.479 [INFO][4069] k8s.go 615: Releasing IP address(es) ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Oct 9 01:09:10.662168 containerd[1441]: 2024-10-09 01:09:10.480 [INFO][4069] utils.go 188: Calico CNI releasing IP address ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Oct 9 01:09:10.662168 containerd[1441]: 2024-10-09 01:09:10.619 [INFO][4084] ipam_plugin.go 417: Releasing address using handleID ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" HandleID="k8s-pod-network.c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Workload="localhost-k8s-coredns--76f75df574--cmfsl-eth0" Oct 9 01:09:10.662168 containerd[1441]: 2024-10-09 01:09:10.619 [INFO][4084] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:09:10.662168 containerd[1441]: 2024-10-09 01:09:10.636 [INFO][4084] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:09:10.662168 containerd[1441]: 2024-10-09 01:09:10.644 [WARNING][4084] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" HandleID="k8s-pod-network.c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Workload="localhost-k8s-coredns--76f75df574--cmfsl-eth0" Oct 9 01:09:10.662168 containerd[1441]: 2024-10-09 01:09:10.644 [INFO][4084] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" HandleID="k8s-pod-network.c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Workload="localhost-k8s-coredns--76f75df574--cmfsl-eth0" Oct 9 01:09:10.662168 containerd[1441]: 2024-10-09 01:09:10.651 [INFO][4084] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:09:10.662168 containerd[1441]: 2024-10-09 01:09:10.654 [INFO][4069] k8s.go 621: Teardown processing complete. ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Oct 9 01:09:10.662168 containerd[1441]: time="2024-10-09T01:09:10.658210390Z" level=info msg="TearDown network for sandbox \"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\" successfully" Oct 9 01:09:10.662168 containerd[1441]: time="2024-10-09T01:09:10.658231712Z" level=info msg="StopPodSandbox for \"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\" returns successfully" Oct 9 01:09:10.662168 containerd[1441]: time="2024-10-09T01:09:10.659909885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cmfsl,Uid:2834f708-e1ad-459c-8c3f-1cb2de9ef7de,Namespace:kube-system,Attempt:1,}" Oct 9 01:09:10.664083 systemd[1]: run-netns-cni\x2dd5884667\x2d445d\x2da4ab\x2dba89\x2de5afdfd4471b.mount: Deactivated successfully. Oct 9 01:09:10.802013 sshd[4099]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:10.808531 systemd[1]: sshd@9-10.0.0.151:22-10.0.0.1:36024.service: Deactivated successfully. Oct 9 01:09:10.812108 systemd[1]: session-10.scope: Deactivated successfully. Oct 9 01:09:10.812916 systemd-logind[1421]: Session 10 logged out. Waiting for processes to exit. Oct 9 01:09:10.820684 systemd[1]: Started sshd@10-10.0.0.151:22-10.0.0.1:36032.service - OpenSSH per-connection server daemon (10.0.0.1:36032). Oct 9 01:09:10.821600 systemd-logind[1421]: Removed session 10. 
Oct 9 01:09:10.833152 systemd-networkd[1364]: calia5ccc9c2411: Link UP Oct 9 01:09:10.833482 systemd-networkd[1364]: calia5ccc9c2411: Gained carrier Oct 9 01:09:10.847644 containerd[1441]: 2024-10-09 01:09:10.724 [INFO][4125] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0 calico-kube-controllers-6dd6bf9548- calico-system 7a1c35a3-8044-40a1-816a-48efa14a6135 823 0 2024-10-09 01:08:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6dd6bf9548 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6dd6bf9548-fflk7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia5ccc9c2411 [] []}} ContainerID="ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd" Namespace="calico-system" Pod="calico-kube-controllers-6dd6bf9548-fflk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-" Oct 9 01:09:10.847644 containerd[1441]: 2024-10-09 01:09:10.724 [INFO][4125] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd" Namespace="calico-system" Pod="calico-kube-controllers-6dd6bf9548-fflk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0" Oct 9 01:09:10.847644 containerd[1441]: 2024-10-09 01:09:10.757 [INFO][4143] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd" HandleID="k8s-pod-network.ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd" Workload="localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0" Oct 9 01:09:10.847644 containerd[1441]: 2024-10-09 01:09:10.778 [INFO][4143] ipam_plugin.go 270: Auto assigning IP ContainerID="ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd" HandleID="k8s-pod-network.ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd" Workload="localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000304330), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6dd6bf9548-fflk7", "timestamp":"2024-10-09 01:09:10.757693615 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:09:10.847644 containerd[1441]: 2024-10-09 01:09:10.779 [INFO][4143] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:09:10.847644 containerd[1441]: 2024-10-09 01:09:10.779 [INFO][4143] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:09:10.847644 containerd[1441]: 2024-10-09 01:09:10.779 [INFO][4143] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 01:09:10.847644 containerd[1441]: 2024-10-09 01:09:10.782 [INFO][4143] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd" host="localhost" Oct 9 01:09:10.847644 containerd[1441]: 2024-10-09 01:09:10.802 [INFO][4143] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 01:09:10.847644 containerd[1441]: 2024-10-09 01:09:10.809 [INFO][4143] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 01:09:10.847644 containerd[1441]: 2024-10-09 01:09:10.810 [INFO][4143] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 01:09:10.847644 containerd[1441]: 2024-10-09 01:09:10.813 [INFO][4143] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 01:09:10.847644 containerd[1441]: 2024-10-09 01:09:10.813 [INFO][4143] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd" host="localhost" Oct 9 01:09:10.847644 containerd[1441]: 2024-10-09 01:09:10.815 [INFO][4143] ipam.go 1685: Creating new handle: k8s-pod-network.ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd Oct 9 01:09:10.847644 containerd[1441]: 2024-10-09 01:09:10.818 [INFO][4143] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd" host="localhost" Oct 9 01:09:10.847644 containerd[1441]: 2024-10-09 01:09:10.823 [INFO][4143] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd" host="localhost" Oct 9 01:09:10.847644 containerd[1441]: 2024-10-09 01:09:10.823 [INFO][4143] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd" host="localhost" Oct 9 01:09:10.847644 containerd[1441]: 2024-10-09 01:09:10.823 [INFO][4143] ipam_plugin.go 379: Released host-wide IPAM lock. 
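The IPAM trace above spells out Calico's per-pod allocation flow: take the host-wide lock, confirm this host's affinity for block 192.168.88.128/26, load the block, and claim the next free address. This pod receives 192.168.88.129/26, and the pods that follow in this log receive .130, .131 and .132 from the same block. A simplified sketch of "claim the next free address in an affine /26 under a lock", using only the standard library and a plain map where Calico uses its datastore:

```go
package main

import (
	"errors"
	"fmt"
	"net/netip"
	"sync"
)

// blockAllocator hands out addresses from one affine block, serialized by a
// mutex the way the plugin serializes on the host-wide IPAM lock.
type blockAllocator struct {
	mu    sync.Mutex
	block netip.Prefix
	used  map[netip.Addr]bool
}

func (b *blockAllocator) assign() (netip.Addr, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	// Skip the block's network address and walk forward to the first free slot.
	for a := b.block.Addr().Next(); b.block.Contains(a); a = a.Next() {
		if !b.used[a] {
			b.used[a] = true
			return a, nil
		}
	}
	return netip.Addr{}, errors.New("block exhausted")
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	alloc := &blockAllocator{block: block, used: map[netip.Addr]bool{}}
	for i := 0; i < 4; i++ {
		a, err := alloc.assign()
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s/26\n", a) // .129, .130, .131, .132 -- matching the log
	}
}
```

The "About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock" pairs in the log correspond to that serialization, and the block affinity keeps allocations local so the host rarely has to reach for another block.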
Oct 9 01:09:10.847644 containerd[1441]: 2024-10-09 01:09:10.823 [INFO][4143] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd" HandleID="k8s-pod-network.ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd" Workload="localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0" Oct 9 01:09:10.848163 containerd[1441]: 2024-10-09 01:09:10.830 [INFO][4125] k8s.go 386: Populated endpoint ContainerID="ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd" Namespace="calico-system" Pod="calico-kube-controllers-6dd6bf9548-fflk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0", GenerateName:"calico-kube-controllers-6dd6bf9548-", Namespace:"calico-system", SelfLink:"", UID:"7a1c35a3-8044-40a1-816a-48efa14a6135", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 8, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dd6bf9548", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6dd6bf9548-fflk7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia5ccc9c2411", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:09:10.848163 containerd[1441]: 2024-10-09 01:09:10.830 [INFO][4125] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd" Namespace="calico-system" Pod="calico-kube-controllers-6dd6bf9548-fflk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0" Oct 9 01:09:10.848163 containerd[1441]: 2024-10-09 01:09:10.830 [INFO][4125] dataplane_linux.go 68: Setting the host side veth name to calia5ccc9c2411 ContainerID="ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd" Namespace="calico-system" Pod="calico-kube-controllers-6dd6bf9548-fflk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0" Oct 9 01:09:10.848163 containerd[1441]: 2024-10-09 01:09:10.833 [INFO][4125] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd" Namespace="calico-system" Pod="calico-kube-controllers-6dd6bf9548-fflk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0" Oct 9 01:09:10.848163 containerd[1441]: 2024-10-09 01:09:10.834 [INFO][4125] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd" Namespace="calico-system" Pod="calico-kube-controllers-6dd6bf9548-fflk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0", GenerateName:"calico-kube-controllers-6dd6bf9548-", Namespace:"calico-system", SelfLink:"", UID:"7a1c35a3-8044-40a1-816a-48efa14a6135", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 8, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dd6bf9548", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd", Pod:"calico-kube-controllers-6dd6bf9548-fflk7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia5ccc9c2411", MAC:"b6:66:f6:15:66:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:09:10.848163 containerd[1441]: 2024-10-09 01:09:10.844 [INFO][4125] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd" Namespace="calico-system" Pod="calico-kube-controllers-6dd6bf9548-fflk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0" Oct 9 01:09:10.857152 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 36032 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:09:10.857883 sshd[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:10.863149 systemd-logind[1421]: New session 11 of user core. Oct 9 01:09:10.867860 systemd-networkd[1364]: cali9cc3e955ef5: Link UP Oct 9 01:09:10.868233 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 9 01:09:10.868325 systemd-networkd[1364]: cali9cc3e955ef5: Gained carrier Oct 9 01:09:10.878362 containerd[1441]: time="2024-10-09T01:09:10.877725111Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:09:10.878362 containerd[1441]: time="2024-10-09T01:09:10.877782556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:09:10.878362 containerd[1441]: time="2024-10-09T01:09:10.877796957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:10.878362 containerd[1441]: time="2024-10-09T01:09:10.877873603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:10.883156 containerd[1441]: 2024-10-09 01:09:10.724 [INFO][4108] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--cmfsl-eth0 coredns-76f75df574- kube-system 2834f708-e1ad-459c-8c3f-1cb2de9ef7de 822 0 2024-10-09 01:08:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-cmfsl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9cc3e955ef5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642" Namespace="kube-system" Pod="coredns-76f75df574-cmfsl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cmfsl-" Oct 9 01:09:10.883156 containerd[1441]: 2024-10-09 01:09:10.724 [INFO][4108] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642" Namespace="kube-system" Pod="coredns-76f75df574-cmfsl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cmfsl-eth0" Oct 9 01:09:10.883156 containerd[1441]: 2024-10-09 01:09:10.758 [INFO][4144] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642" HandleID="k8s-pod-network.b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642" Workload="localhost-k8s-coredns--76f75df574--cmfsl-eth0" Oct 9 01:09:10.883156 containerd[1441]: 2024-10-09 01:09:10.783 [INFO][4144] ipam_plugin.go 270: Auto assigning IP ContainerID="b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642" HandleID="k8s-pod-network.b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642" Workload="localhost-k8s-coredns--76f75df574--cmfsl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d630), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-cmfsl", "timestamp":"2024-10-09 01:09:10.758242858 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:09:10.883156 containerd[1441]: 2024-10-09 01:09:10.784 [INFO][4144] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:09:10.883156 containerd[1441]: 2024-10-09 01:09:10.823 [INFO][4144] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:09:10.883156 containerd[1441]: 2024-10-09 01:09:10.823 [INFO][4144] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 01:09:10.883156 containerd[1441]: 2024-10-09 01:09:10.826 [INFO][4144] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642" host="localhost" Oct 9 01:09:10.883156 containerd[1441]: 2024-10-09 01:09:10.830 [INFO][4144] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 01:09:10.883156 containerd[1441]: 2024-10-09 01:09:10.834 [INFO][4144] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 01:09:10.883156 containerd[1441]: 2024-10-09 01:09:10.837 [INFO][4144] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 01:09:10.883156 containerd[1441]: 2024-10-09 01:09:10.846 [INFO][4144] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 01:09:10.883156 containerd[1441]: 2024-10-09 01:09:10.846 [INFO][4144] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642" host="localhost" Oct 9 01:09:10.883156 containerd[1441]: 2024-10-09 01:09:10.851 [INFO][4144] ipam.go 1685: Creating new handle: k8s-pod-network.b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642 Oct 9 01:09:10.883156 containerd[1441]: 2024-10-09 01:09:10.856 [INFO][4144] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642" host="localhost" Oct 9 01:09:10.883156 containerd[1441]: 2024-10-09 01:09:10.861 [INFO][4144] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642" host="localhost" Oct 9 01:09:10.883156 containerd[1441]: 2024-10-09 01:09:10.862 [INFO][4144] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642" host="localhost" Oct 9 01:09:10.883156 containerd[1441]: 2024-10-09 01:09:10.862 [INFO][4144] ipam_plugin.go 379: Released host-wide IPAM lock. 
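In the WorkloadEndpoint dumps for the coredns pod the ports are rendered as Go struct literals with hexadecimal values: Port:0x35 is 53 for dns and dns-tcp, and Port:0x23c1 is 9153 for metrics, the same {dns UDP 53} {dns-tcp TCP 53} {metrics TCP 9153} list printed when the endpoint was first found. A tiny sketch that models those three ports and prints them in decimal; the type and field names here are illustrative, not Calico's.

```go
package main

import "fmt"

// endpointPort mirrors the information in the WorkloadEndpointPort dumps:
// a name, an L4 protocol, and a port number (printed as hex in the log).
type endpointPort struct {
	Name     string
	Protocol string
	Port     uint16
}

func main() {
	ports := []endpointPort{
		{Name: "dns", Protocol: "UDP", Port: 0x35},       // 53
		{Name: "dns-tcp", Protocol: "TCP", Port: 0x35},   // 53
		{Name: "metrics", Protocol: "TCP", Port: 0x23c1}, // 9153
	}
	for _, p := range ports {
		fmt.Printf("%-8s %s/%d\n", p.Name, p.Protocol, p.Port)
	}
}
```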
Oct 9 01:09:10.883156 containerd[1441]: 2024-10-09 01:09:10.862 [INFO][4144] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642" HandleID="k8s-pod-network.b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642" Workload="localhost-k8s-coredns--76f75df574--cmfsl-eth0" Oct 9 01:09:10.883641 containerd[1441]: 2024-10-09 01:09:10.865 [INFO][4108] k8s.go 386: Populated endpoint ContainerID="b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642" Namespace="kube-system" Pod="coredns-76f75df574-cmfsl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cmfsl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--cmfsl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2834f708-e1ad-459c-8c3f-1cb2de9ef7de", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-cmfsl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9cc3e955ef5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:09:10.883641 containerd[1441]: 2024-10-09 01:09:10.865 [INFO][4108] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642" Namespace="kube-system" Pod="coredns-76f75df574-cmfsl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cmfsl-eth0" Oct 9 01:09:10.883641 containerd[1441]: 2024-10-09 01:09:10.865 [INFO][4108] dataplane_linux.go 68: Setting the host side veth name to cali9cc3e955ef5 ContainerID="b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642" Namespace="kube-system" Pod="coredns-76f75df574-cmfsl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cmfsl-eth0" Oct 9 01:09:10.883641 containerd[1441]: 2024-10-09 01:09:10.868 [INFO][4108] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642" Namespace="kube-system" Pod="coredns-76f75df574-cmfsl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cmfsl-eth0" Oct 9 01:09:10.883641 containerd[1441]: 2024-10-09 01:09:10.869 [INFO][4108] k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642" Namespace="kube-system" Pod="coredns-76f75df574-cmfsl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cmfsl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--cmfsl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2834f708-e1ad-459c-8c3f-1cb2de9ef7de", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642", Pod:"coredns-76f75df574-cmfsl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9cc3e955ef5", MAC:"26:b4:90:ad:fe:98", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:09:10.883641 containerd[1441]: 2024-10-09 01:09:10.878 [INFO][4108] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642" Namespace="kube-system" Pod="coredns-76f75df574-cmfsl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--cmfsl-eth0" Oct 9 01:09:10.901214 systemd[1]: Started cri-containerd-ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd.scope - libcontainer container ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd. Oct 9 01:09:10.909209 containerd[1441]: time="2024-10-09T01:09:10.908898388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:09:10.909209 containerd[1441]: time="2024-10-09T01:09:10.908977394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:09:10.909209 containerd[1441]: time="2024-10-09T01:09:10.908989715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:10.909209 containerd[1441]: time="2024-10-09T01:09:10.909102524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:10.915154 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 01:09:10.928209 systemd[1]: Started cri-containerd-b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642.scope - libcontainer container b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642. Oct 9 01:09:10.940909 containerd[1441]: time="2024-10-09T01:09:10.940675753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dd6bf9548-fflk7,Uid:7a1c35a3-8044-40a1-816a-48efa14a6135,Namespace:calico-system,Attempt:1,} returns sandbox id \"ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd\"" Oct 9 01:09:10.944214 containerd[1441]: time="2024-10-09T01:09:10.944184712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 01:09:10.945675 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 01:09:10.963892 containerd[1441]: time="2024-10-09T01:09:10.963860595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cmfsl,Uid:2834f708-e1ad-459c-8c3f-1cb2de9ef7de,Namespace:kube-system,Attempt:1,} returns sandbox id \"b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642\"" Oct 9 01:09:10.964684 kubelet[2608]: E1009 01:09:10.964661 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:10.966894 containerd[1441]: time="2024-10-09T01:09:10.966866714Z" level=info msg="CreateContainer within sandbox \"b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:09:10.979611 containerd[1441]: time="2024-10-09T01:09:10.979561803Z" level=info msg="CreateContainer within sandbox \"b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1e0606cd270f392031c96cf624a33c652ebf323006aed29a2056f2df20dd916b\"" Oct 9 01:09:10.980053 containerd[1441]: time="2024-10-09T01:09:10.980032800Z" level=info msg="StartContainer for \"1e0606cd270f392031c96cf624a33c652ebf323006aed29a2056f2df20dd916b\"" Oct 9 01:09:11.012273 systemd[1]: Started cri-containerd-1e0606cd270f392031c96cf624a33c652ebf323006aed29a2056f2df20dd916b.scope - libcontainer container 1e0606cd270f392031c96cf624a33c652ebf323006aed29a2056f2df20dd916b. Oct 9 01:09:11.044408 containerd[1441]: time="2024-10-09T01:09:11.044250308Z" level=info msg="StartContainer for \"1e0606cd270f392031c96cf624a33c652ebf323006aed29a2056f2df20dd916b\" returns successfully" Oct 9 01:09:11.048272 sshd[4162]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:11.056881 systemd[1]: sshd@10-10.0.0.151:22-10.0.0.1:36032.service: Deactivated successfully. Oct 9 01:09:11.059510 systemd[1]: session-11.scope: Deactivated successfully. Oct 9 01:09:11.060292 systemd-logind[1421]: Session 11 logged out. Waiting for processes to exit. Oct 9 01:09:11.067733 systemd[1]: Started sshd@11-10.0.0.151:22-10.0.0.1:36038.service - OpenSSH per-connection server daemon (10.0.0.1:36038). Oct 9 01:09:11.069195 systemd-logind[1421]: Removed session 11. 
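By this point both sandboxes are wired up: calia5ccc9c2411 is paired with MAC b6:66:f6:15:66:b7 and 192.168.88.129/32 for calico-kube-controllers, and cali9cc3e955ef5 with 26:b4:90:ad:fe:98 and 192.168.88.130/32 for coredns-cmfsl. What a CNI ADD hands back to the runtime is a JSON result listing the interfaces it created and the addresses it assigned. Below is a rough sketch of that shape populated with the first pod's values, pairing name and MAC exactly as the endpoint dump above does; it is a trimmed-down illustration of the CNI result format, not Calico's literal output, and the cniVersion is an assumption since the negotiated version is not in this log.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal subset of a CNI result: the interfaces the plugin created and the
// addresses it assigned.
type cniInterface struct {
	Name string `json:"name"`
	Mac  string `json:"mac,omitempty"`
}

type cniIPConfig struct {
	Address string `json:"address"`
	Gateway string `json:"gateway,omitempty"`
}

type cniResult struct {
	CNIVersion string         `json:"cniVersion"`
	Interfaces []cniInterface `json:"interfaces"`
	IPs        []cniIPConfig  `json:"ips"`
}

func main() {
	res := cniResult{
		CNIVersion: "0.4.0", // assumption: the negotiated version isn't shown in this log
		Interfaces: []cniInterface{
			{Name: "calia5ccc9c2411", Mac: "b6:66:f6:15:66:b7"}, // name/MAC pair as printed in the endpoint dump
			{Name: "eth0"},                                      // the pod-side interface named in the endpoint
		},
		IPs: []cniIPConfig{
			{Address: "192.168.88.129/32"}, // address claimed for the kube-controllers pod
		},
	}
	out, err := json.MarshalIndent(res, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```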
Oct 9 01:09:11.104212 sshd[4317]: Accepted publickey for core from 10.0.0.1 port 36038 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:09:11.105730 sshd[4317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:11.112840 systemd-logind[1421]: New session 12 of user core. Oct 9 01:09:11.120220 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 9 01:09:11.253343 sshd[4317]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:11.256961 systemd[1]: sshd@11-10.0.0.151:22-10.0.0.1:36038.service: Deactivated successfully. Oct 9 01:09:11.258626 systemd[1]: session-12.scope: Deactivated successfully. Oct 9 01:09:11.260113 systemd-logind[1421]: Session 12 logged out. Waiting for processes to exit. Oct 9 01:09:11.261326 systemd-logind[1421]: Removed session 12. Oct 9 01:09:11.397723 containerd[1441]: time="2024-10-09T01:09:11.397677779Z" level=info msg="StopPodSandbox for \"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\"" Oct 9 01:09:11.470618 containerd[1441]: 2024-10-09 01:09:11.438 [INFO][4356] k8s.go 608: Cleaning up netns ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Oct 9 01:09:11.470618 containerd[1441]: 2024-10-09 01:09:11.438 [INFO][4356] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" iface="eth0" netns="/var/run/netns/cni-c8540e54-b9bd-2cd3-f415-23192676b98d" Oct 9 01:09:11.470618 containerd[1441]: 2024-10-09 01:09:11.438 [INFO][4356] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" iface="eth0" netns="/var/run/netns/cni-c8540e54-b9bd-2cd3-f415-23192676b98d" Oct 9 01:09:11.470618 containerd[1441]: 2024-10-09 01:09:11.439 [INFO][4356] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" iface="eth0" netns="/var/run/netns/cni-c8540e54-b9bd-2cd3-f415-23192676b98d" Oct 9 01:09:11.470618 containerd[1441]: 2024-10-09 01:09:11.439 [INFO][4356] k8s.go 615: Releasing IP address(es) ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Oct 9 01:09:11.470618 containerd[1441]: 2024-10-09 01:09:11.439 [INFO][4356] utils.go 188: Calico CNI releasing IP address ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Oct 9 01:09:11.470618 containerd[1441]: 2024-10-09 01:09:11.458 [INFO][4363] ipam_plugin.go 417: Releasing address using handleID ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" HandleID="k8s-pod-network.587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Workload="localhost-k8s-coredns--76f75df574--2mdww-eth0" Oct 9 01:09:11.470618 containerd[1441]: 2024-10-09 01:09:11.458 [INFO][4363] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:09:11.470618 containerd[1441]: 2024-10-09 01:09:11.458 [INFO][4363] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:09:11.470618 containerd[1441]: 2024-10-09 01:09:11.466 [WARNING][4363] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" HandleID="k8s-pod-network.587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Workload="localhost-k8s-coredns--76f75df574--2mdww-eth0" Oct 9 01:09:11.470618 containerd[1441]: 2024-10-09 01:09:11.466 [INFO][4363] ipam_plugin.go 445: Releasing address using workloadID ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" HandleID="k8s-pod-network.587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Workload="localhost-k8s-coredns--76f75df574--2mdww-eth0" Oct 9 01:09:11.470618 containerd[1441]: 2024-10-09 01:09:11.467 [INFO][4363] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:09:11.470618 containerd[1441]: 2024-10-09 01:09:11.468 [INFO][4356] k8s.go 621: Teardown processing complete. ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Oct 9 01:09:11.471126 containerd[1441]: time="2024-10-09T01:09:11.470716816Z" level=info msg="TearDown network for sandbox \"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\" successfully" Oct 9 01:09:11.471126 containerd[1441]: time="2024-10-09T01:09:11.470741018Z" level=info msg="StopPodSandbox for \"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\" returns successfully" Oct 9 01:09:11.471397 kubelet[2608]: E1009 01:09:11.471162 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:11.472110 containerd[1441]: time="2024-10-09T01:09:11.471713414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2mdww,Uid:4782287b-1310-451d-8736-24d2e3baa8fe,Namespace:kube-system,Attempt:1,}" Oct 9 01:09:11.553559 kubelet[2608]: E1009 01:09:11.553452 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:11.566502 kubelet[2608]: I1009 01:09:11.566109 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-cmfsl" podStartSLOduration=26.566072108 podStartE2EDuration="26.566072108s" podCreationTimestamp="2024-10-09 01:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:09:11.565122594 +0000 UTC m=+42.264590039" watchObservedRunningTime="2024-10-09 01:09:11.566072108 +0000 UTC m=+42.265539553" Oct 9 01:09:11.600960 systemd-networkd[1364]: calia29ec39b0b7: Link UP Oct 9 01:09:11.601205 systemd-networkd[1364]: calia29ec39b0b7: Gained carrier Oct 9 01:09:11.617566 containerd[1441]: 2024-10-09 01:09:11.523 [INFO][4372] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--2mdww-eth0 coredns-76f75df574- kube-system 4782287b-1310-451d-8736-24d2e3baa8fe 863 0 2024-10-09 01:08:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-2mdww eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia29ec39b0b7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650" Namespace="kube-system" 
Pod="coredns-76f75df574-2mdww" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--2mdww-" Oct 9 01:09:11.617566 containerd[1441]: 2024-10-09 01:09:11.523 [INFO][4372] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650" Namespace="kube-system" Pod="coredns-76f75df574-2mdww" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--2mdww-eth0" Oct 9 01:09:11.617566 containerd[1441]: 2024-10-09 01:09:11.548 [INFO][4385] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650" HandleID="k8s-pod-network.0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650" Workload="localhost-k8s-coredns--76f75df574--2mdww-eth0" Oct 9 01:09:11.617566 containerd[1441]: 2024-10-09 01:09:11.563 [INFO][4385] ipam_plugin.go 270: Auto assigning IP ContainerID="0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650" HandleID="k8s-pod-network.0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650" Workload="localhost-k8s-coredns--76f75df574--2mdww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003443c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-2mdww", "timestamp":"2024-10-09 01:09:11.548925415 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:09:11.617566 containerd[1441]: 2024-10-09 01:09:11.564 [INFO][4385] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:09:11.617566 containerd[1441]: 2024-10-09 01:09:11.564 [INFO][4385] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:09:11.617566 containerd[1441]: 2024-10-09 01:09:11.564 [INFO][4385] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 01:09:11.617566 containerd[1441]: 2024-10-09 01:09:11.568 [INFO][4385] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650" host="localhost" Oct 9 01:09:11.617566 containerd[1441]: 2024-10-09 01:09:11.573 [INFO][4385] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 01:09:11.617566 containerd[1441]: 2024-10-09 01:09:11.581 [INFO][4385] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 01:09:11.617566 containerd[1441]: 2024-10-09 01:09:11.583 [INFO][4385] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 01:09:11.617566 containerd[1441]: 2024-10-09 01:09:11.586 [INFO][4385] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 01:09:11.617566 containerd[1441]: 2024-10-09 01:09:11.586 [INFO][4385] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650" host="localhost" Oct 9 01:09:11.617566 containerd[1441]: 2024-10-09 01:09:11.587 [INFO][4385] ipam.go 1685: Creating new handle: k8s-pod-network.0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650 Oct 9 01:09:11.617566 containerd[1441]: 2024-10-09 01:09:11.591 [INFO][4385] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650" host="localhost" Oct 9 01:09:11.617566 containerd[1441]: 2024-10-09 01:09:11.595 [INFO][4385] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650" host="localhost" Oct 9 01:09:11.617566 containerd[1441]: 2024-10-09 01:09:11.595 [INFO][4385] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650" host="localhost" Oct 9 01:09:11.617566 containerd[1441]: 2024-10-09 01:09:11.595 [INFO][4385] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 01:09:11.617566 containerd[1441]: 2024-10-09 01:09:11.595 [INFO][4385] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650" HandleID="k8s-pod-network.0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650" Workload="localhost-k8s-coredns--76f75df574--2mdww-eth0" Oct 9 01:09:11.618294 containerd[1441]: 2024-10-09 01:09:11.597 [INFO][4372] k8s.go 386: Populated endpoint ContainerID="0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650" Namespace="kube-system" Pod="coredns-76f75df574-2mdww" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--2mdww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--2mdww-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4782287b-1310-451d-8736-24d2e3baa8fe", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-2mdww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia29ec39b0b7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:09:11.618294 containerd[1441]: 2024-10-09 01:09:11.598 [INFO][4372] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650" Namespace="kube-system" Pod="coredns-76f75df574-2mdww" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--2mdww-eth0" Oct 9 01:09:11.618294 containerd[1441]: 2024-10-09 01:09:11.598 [INFO][4372] dataplane_linux.go 68: Setting the host side veth name to calia29ec39b0b7 ContainerID="0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650" Namespace="kube-system" Pod="coredns-76f75df574-2mdww" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--2mdww-eth0" Oct 9 01:09:11.618294 containerd[1441]: 2024-10-09 01:09:11.599 [INFO][4372] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650" Namespace="kube-system" Pod="coredns-76f75df574-2mdww" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--2mdww-eth0" Oct 9 01:09:11.618294 containerd[1441]: 2024-10-09 01:09:11.599 [INFO][4372] k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650" Namespace="kube-system" Pod="coredns-76f75df574-2mdww" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--2mdww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--2mdww-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4782287b-1310-451d-8736-24d2e3baa8fe", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650", Pod:"coredns-76f75df574-2mdww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia29ec39b0b7", MAC:"16:23:5d:b4:c0:e5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:09:11.618294 containerd[1441]: 2024-10-09 01:09:11.614 [INFO][4372] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650" Namespace="kube-system" Pod="coredns-76f75df574-2mdww" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--2mdww-eth0" Oct 9 01:09:11.635249 containerd[1441]: time="2024-10-09T01:09:11.635161518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:09:11.635249 containerd[1441]: time="2024-10-09T01:09:11.635216603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:09:11.635544 containerd[1441]: time="2024-10-09T01:09:11.635387736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:11.635544 containerd[1441]: time="2024-10-09T01:09:11.635495064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:11.654214 systemd[1]: Started cri-containerd-0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650.scope - libcontainer container 0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650. Oct 9 01:09:11.660376 systemd[1]: run-netns-cni\x2dc8540e54\x2db9bd\x2d2cd3\x2df415\x2d23192676b98d.mount: Deactivated successfully. 
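The 2mdww endpoint lands on calia29ec39b0b7 with MAC 16:23:5d:b4:c0:e5, and a moment later systemd-networkd reports each cali* link gaining an IPv6 link-local address ("Gained IPv6LL"). With the kernel's default addr_gen_mode (an assumption; the log does not state it) that address is derived from the MAC by EUI-64: flip the universal/local bit of the first octet and splice ff:fe into the middle. A small sketch of that derivation for the three veth MACs seen in this log:

```go
package main

import (
	"fmt"
	"net"
	"net/netip"
)

// linkLocalFromMAC builds the EUI-64 based fe80::/64 address the kernel
// assigns by default: invert the universal/local bit of the first MAC octet
// and insert ff:fe between the two halves of the MAC.
func linkLocalFromMAC(mac net.HardwareAddr) netip.Addr {
	var b [16]byte
	b[0], b[1] = 0xfe, 0x80 // fe80::/64 link-local prefix
	b[8] = mac[0] ^ 0x02
	b[9], b[10] = mac[1], mac[2]
	b[11], b[12] = 0xff, 0xfe
	b[13], b[14], b[15] = mac[3], mac[4], mac[5]
	return netip.AddrFrom16(b)
}

func main() {
	for _, s := range []string{
		"b6:66:f6:15:66:b7", // calia5ccc9c2411
		"26:b4:90:ad:fe:98", // cali9cc3e955ef5
		"16:23:5d:b4:c0:e5", // calia29ec39b0b7
	} {
		mac, err := net.ParseMAC(s)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s -> %s\n", s, linkLocalFromMAC(mac))
	}
}
```

Under that assumption calia5ccc9c2411 ends up with fe80::b466:f6ff:fe15:66b7, and "Gained IPv6LL" simply records that the link-local address has come up on the interface.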
Oct 9 01:09:11.664683 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 01:09:11.683554 containerd[1441]: time="2024-10-09T01:09:11.683517477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2mdww,Uid:4782287b-1310-451d-8736-24d2e3baa8fe,Namespace:kube-system,Attempt:1,} returns sandbox id \"0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650\"" Oct 9 01:09:11.684249 kubelet[2608]: E1009 01:09:11.684228 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:11.686714 containerd[1441]: time="2024-10-09T01:09:11.686679243Z" level=info msg="CreateContainer within sandbox \"0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:09:11.697558 containerd[1441]: time="2024-10-09T01:09:11.697509524Z" level=info msg="CreateContainer within sandbox \"0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0cab43b5f82c0846ee3d247616d9111188635a4f80c56fafb50d2d8bbf4fed32\"" Oct 9 01:09:11.698159 containerd[1441]: time="2024-10-09T01:09:11.698138213Z" level=info msg="StartContainer for \"0cab43b5f82c0846ee3d247616d9111188635a4f80c56fafb50d2d8bbf4fed32\"" Oct 9 01:09:11.729251 systemd[1]: Started cri-containerd-0cab43b5f82c0846ee3d247616d9111188635a4f80c56fafb50d2d8bbf4fed32.scope - libcontainer container 0cab43b5f82c0846ee3d247616d9111188635a4f80c56fafb50d2d8bbf4fed32. Oct 9 01:09:11.752749 containerd[1441]: time="2024-10-09T01:09:11.752701614Z" level=info msg="StartContainer for \"0cab43b5f82c0846ee3d247616d9111188635a4f80c56fafb50d2d8bbf4fed32\" returns successfully" Oct 9 01:09:12.272713 containerd[1441]: time="2024-10-09T01:09:12.272657031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:12.273091 containerd[1441]: time="2024-10-09T01:09:12.273033659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Oct 9 01:09:12.274084 containerd[1441]: time="2024-10-09T01:09:12.274040136Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:12.276284 containerd[1441]: time="2024-10-09T01:09:12.276250184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:12.276838 containerd[1441]: time="2024-10-09T01:09:12.276800786Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 1.332581431s" Oct 9 01:09:12.276877 containerd[1441]: time="2024-10-09T01:09:12.276836389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference 
\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Oct 9 01:09:12.288334 containerd[1441]: time="2024-10-09T01:09:12.288288100Z" level=info msg="CreateContainer within sandbox \"ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 9 01:09:12.301936 containerd[1441]: time="2024-10-09T01:09:12.301895376Z" level=info msg="CreateContainer within sandbox \"ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0d3195cff2f38a7cb971e9f345f4ddf924ee545ee0c43279155c0b03e8f4a7c0\"" Oct 9 01:09:12.302330 containerd[1441]: time="2024-10-09T01:09:12.302293406Z" level=info msg="StartContainer for \"0d3195cff2f38a7cb971e9f345f4ddf924ee545ee0c43279155c0b03e8f4a7c0\"" Oct 9 01:09:12.339220 systemd[1]: Started cri-containerd-0d3195cff2f38a7cb971e9f345f4ddf924ee545ee0c43279155c0b03e8f4a7c0.scope - libcontainer container 0d3195cff2f38a7cb971e9f345f4ddf924ee545ee0c43279155c0b03e8f4a7c0. Oct 9 01:09:12.356233 systemd-networkd[1364]: cali9cc3e955ef5: Gained IPv6LL Oct 9 01:09:12.373365 containerd[1441]: time="2024-10-09T01:09:12.373305131Z" level=info msg="StartContainer for \"0d3195cff2f38a7cb971e9f345f4ddf924ee545ee0c43279155c0b03e8f4a7c0\" returns successfully" Oct 9 01:09:12.567253 kubelet[2608]: E1009 01:09:12.565492 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:12.567733 kubelet[2608]: E1009 01:09:12.567630 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:12.574210 kubelet[2608]: I1009 01:09:12.573675 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6dd6bf9548-fflk7" podStartSLOduration=21.239252645 podStartE2EDuration="22.573636419s" podCreationTimestamp="2024-10-09 01:08:50 +0000 UTC" firstStartedPulling="2024-10-09 01:09:10.94265847 +0000 UTC m=+41.642125875" lastFinishedPulling="2024-10-09 01:09:12.277042204 +0000 UTC m=+42.976509649" observedRunningTime="2024-10-09 01:09:12.572258674 +0000 UTC m=+43.271726119" watchObservedRunningTime="2024-10-09 01:09:12.573636419 +0000 UTC m=+43.273103864" Oct 9 01:09:12.584451 kubelet[2608]: I1009 01:09:12.584410 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-2mdww" podStartSLOduration=27.584190222 podStartE2EDuration="27.584190222s" podCreationTimestamp="2024-10-09 01:08:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:09:12.582836039 +0000 UTC m=+43.282303444" watchObservedRunningTime="2024-10-09 01:09:12.584190222 +0000 UTC m=+43.283657667" Oct 9 01:09:12.741231 systemd-networkd[1364]: calia5ccc9c2411: Gained IPv6LL Oct 9 01:09:13.568788 kubelet[2608]: E1009 01:09:13.568445 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:13.568788 kubelet[2608]: E1009 01:09:13.568600 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:13.579182 systemd-networkd[1364]: calia29ec39b0b7: Gained IPv6LL Oct 9 01:09:14.398384 containerd[1441]: time="2024-10-09T01:09:14.398315936Z" level=info msg="StopPodSandbox for \"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\"" Oct 9 01:09:14.477737 containerd[1441]: 2024-10-09 01:09:14.445 [INFO][4589] k8s.go 608: Cleaning up netns ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Oct 9 01:09:14.477737 containerd[1441]: 2024-10-09 01:09:14.445 [INFO][4589] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" iface="eth0" netns="/var/run/netns/cni-d622ade5-91e9-2486-b20a-c5777ada5fc0" Oct 9 01:09:14.477737 containerd[1441]: 2024-10-09 01:09:14.445 [INFO][4589] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" iface="eth0" netns="/var/run/netns/cni-d622ade5-91e9-2486-b20a-c5777ada5fc0" Oct 9 01:09:14.477737 containerd[1441]: 2024-10-09 01:09:14.445 [INFO][4589] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" iface="eth0" netns="/var/run/netns/cni-d622ade5-91e9-2486-b20a-c5777ada5fc0" Oct 9 01:09:14.477737 containerd[1441]: 2024-10-09 01:09:14.445 [INFO][4589] k8s.go 615: Releasing IP address(es) ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Oct 9 01:09:14.477737 containerd[1441]: 2024-10-09 01:09:14.446 [INFO][4589] utils.go 188: Calico CNI releasing IP address ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Oct 9 01:09:14.477737 containerd[1441]: 2024-10-09 01:09:14.464 [INFO][4597] ipam_plugin.go 417: Releasing address using handleID ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" HandleID="k8s-pod-network.c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Workload="localhost-k8s-csi--node--driver--w4q2j-eth0" Oct 9 01:09:14.477737 containerd[1441]: 2024-10-09 01:09:14.465 [INFO][4597] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:09:14.477737 containerd[1441]: 2024-10-09 01:09:14.465 [INFO][4597] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:09:14.477737 containerd[1441]: 2024-10-09 01:09:14.473 [WARNING][4597] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" HandleID="k8s-pod-network.c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Workload="localhost-k8s-csi--node--driver--w4q2j-eth0" Oct 9 01:09:14.477737 containerd[1441]: 2024-10-09 01:09:14.473 [INFO][4597] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" HandleID="k8s-pod-network.c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Workload="localhost-k8s-csi--node--driver--w4q2j-eth0" Oct 9 01:09:14.477737 containerd[1441]: 2024-10-09 01:09:14.474 [INFO][4597] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:09:14.477737 containerd[1441]: 2024-10-09 01:09:14.475 [INFO][4589] k8s.go 621: Teardown processing complete. 
ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Oct 9 01:09:14.480353 containerd[1441]: time="2024-10-09T01:09:14.477871398Z" level=info msg="TearDown network for sandbox \"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\" successfully" Oct 9 01:09:14.480353 containerd[1441]: time="2024-10-09T01:09:14.477896559Z" level=info msg="StopPodSandbox for \"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\" returns successfully" Oct 9 01:09:14.480353 containerd[1441]: time="2024-10-09T01:09:14.480118042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w4q2j,Uid:171922d5-d611-4e67-8c10-daef097d9ad9,Namespace:calico-system,Attempt:1,}" Oct 9 01:09:14.481199 systemd[1]: run-netns-cni\x2dd622ade5\x2d91e9\x2d2486\x2db20a\x2dc5777ada5fc0.mount: Deactivated successfully. Oct 9 01:09:14.570002 kubelet[2608]: E1009 01:09:14.569959 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:14.587453 systemd-networkd[1364]: califdc96a40002: Link UP Oct 9 01:09:14.588231 systemd-networkd[1364]: califdc96a40002: Gained carrier Oct 9 01:09:14.600619 containerd[1441]: 2024-10-09 01:09:14.523 [INFO][4604] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--w4q2j-eth0 csi-node-driver- calico-system 171922d5-d611-4e67-8c10-daef097d9ad9 920 0 2024-10-09 01:08:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-w4q2j eth0 default [] [] [kns.calico-system ksa.calico-system.default] califdc96a40002 [] []}} ContainerID="092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358" Namespace="calico-system" Pod="csi-node-driver-w4q2j" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4q2j-" Oct 9 01:09:14.600619 containerd[1441]: 2024-10-09 01:09:14.524 [INFO][4604] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358" Namespace="calico-system" Pod="csi-node-driver-w4q2j" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4q2j-eth0" Oct 9 01:09:14.600619 containerd[1441]: 2024-10-09 01:09:14.547 [INFO][4617] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358" HandleID="k8s-pod-network.092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358" Workload="localhost-k8s-csi--node--driver--w4q2j-eth0" Oct 9 01:09:14.600619 containerd[1441]: 2024-10-09 01:09:14.557 [INFO][4617] ipam_plugin.go 270: Auto assigning IP ContainerID="092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358" HandleID="k8s-pod-network.092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358" Workload="localhost-k8s-csi--node--driver--w4q2j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027a170), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-w4q2j", "timestamp":"2024-10-09 01:09:14.547856319 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:09:14.600619 containerd[1441]: 2024-10-09 01:09:14.557 [INFO][4617] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:09:14.600619 containerd[1441]: 2024-10-09 01:09:14.558 [INFO][4617] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:09:14.600619 containerd[1441]: 2024-10-09 01:09:14.558 [INFO][4617] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 01:09:14.600619 containerd[1441]: 2024-10-09 01:09:14.559 [INFO][4617] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358" host="localhost" Oct 9 01:09:14.600619 containerd[1441]: 2024-10-09 01:09:14.562 [INFO][4617] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 01:09:14.600619 containerd[1441]: 2024-10-09 01:09:14.566 [INFO][4617] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 01:09:14.600619 containerd[1441]: 2024-10-09 01:09:14.568 [INFO][4617] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 01:09:14.600619 containerd[1441]: 2024-10-09 01:09:14.571 [INFO][4617] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 01:09:14.600619 containerd[1441]: 2024-10-09 01:09:14.571 [INFO][4617] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358" host="localhost" Oct 9 01:09:14.600619 containerd[1441]: 2024-10-09 01:09:14.572 [INFO][4617] ipam.go 1685: Creating new handle: k8s-pod-network.092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358 Oct 9 01:09:14.600619 containerd[1441]: 2024-10-09 01:09:14.575 [INFO][4617] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358" host="localhost" Oct 9 01:09:14.600619 containerd[1441]: 2024-10-09 01:09:14.583 [INFO][4617] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358" host="localhost" Oct 9 01:09:14.600619 containerd[1441]: 2024-10-09 01:09:14.583 [INFO][4617] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358" host="localhost" Oct 9 01:09:14.600619 containerd[1441]: 2024-10-09 01:09:14.583 [INFO][4617] ipam_plugin.go 379: Released host-wide IPAM lock. 
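The IPAM trace above is Calico's block-affinity path: the node confirms its affinity for the 192.168.88.128/26 block, loads it, and claims the next free address in it. The sketch below is only the block arithmetic that trace implies, not Calico's code; the already-claimed addresses are taken from the other workload endpoints visible in this log (.129 for calico-kube-controllers, .130 and .131 for the two coredns pods).

    import ipaddress

    # Block the node "localhost" holds an affinity for, per the ipam.go trace above.
    block = ipaddress.ip_network("192.168.88.128/26")        # .128 through .191

    # Addresses already claimed on this node elsewhere in this log.
    claimed = {ipaddress.ip_address(a) for a in
               ("192.168.88.129", "192.168.88.130", "192.168.88.131")}

    # hosts() skips the .128/.191 network and broadcast addresses; the first
    # remaining free address matches the one handed to csi-node-driver-w4q2j.
    next_ip = next(ip for ip in block.hosts() if ip not in claimed)
    print(next_ip)                                            # 192.168.88.132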
Oct 9 01:09:14.600619 containerd[1441]: 2024-10-09 01:09:14.583 [INFO][4617] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358" HandleID="k8s-pod-network.092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358" Workload="localhost-k8s-csi--node--driver--w4q2j-eth0" Oct 9 01:09:14.601141 containerd[1441]: 2024-10-09 01:09:14.585 [INFO][4604] k8s.go 386: Populated endpoint ContainerID="092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358" Namespace="calico-system" Pod="csi-node-driver-w4q2j" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4q2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w4q2j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"171922d5-d611-4e67-8c10-daef097d9ad9", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 8, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-w4q2j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"califdc96a40002", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:09:14.601141 containerd[1441]: 2024-10-09 01:09:14.585 [INFO][4604] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358" Namespace="calico-system" Pod="csi-node-driver-w4q2j" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4q2j-eth0" Oct 9 01:09:14.601141 containerd[1441]: 2024-10-09 01:09:14.585 [INFO][4604] dataplane_linux.go 68: Setting the host side veth name to califdc96a40002 ContainerID="092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358" Namespace="calico-system" Pod="csi-node-driver-w4q2j" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4q2j-eth0" Oct 9 01:09:14.601141 containerd[1441]: 2024-10-09 01:09:14.588 [INFO][4604] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358" Namespace="calico-system" Pod="csi-node-driver-w4q2j" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4q2j-eth0" Oct 9 01:09:14.601141 containerd[1441]: 2024-10-09 01:09:14.588 [INFO][4604] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358" Namespace="calico-system" Pod="csi-node-driver-w4q2j" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4q2j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w4q2j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"171922d5-d611-4e67-8c10-daef097d9ad9", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 8, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358", Pod:"csi-node-driver-w4q2j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"califdc96a40002", MAC:"2a:6a:70:9b:42:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:09:14.601141 containerd[1441]: 2024-10-09 01:09:14.597 [INFO][4604] k8s.go 500: Wrote updated endpoint to datastore ContainerID="092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358" Namespace="calico-system" Pod="csi-node-driver-w4q2j" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4q2j-eth0" Oct 9 01:09:14.621942 containerd[1441]: time="2024-10-09T01:09:14.621306734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:09:14.621942 containerd[1441]: time="2024-10-09T01:09:14.621378059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:09:14.621942 containerd[1441]: time="2024-10-09T01:09:14.621393180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:14.621942 containerd[1441]: time="2024-10-09T01:09:14.621477986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:14.645213 systemd[1]: Started cri-containerd-092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358.scope - libcontainer container 092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358. 
Oct 9 01:09:14.653810 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 01:09:14.662148 containerd[1441]: time="2024-10-09T01:09:14.662113040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w4q2j,Uid:171922d5-d611-4e67-8c10-daef097d9ad9,Namespace:calico-system,Attempt:1,} returns sandbox id \"092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358\"" Oct 9 01:09:14.664127 containerd[1441]: time="2024-10-09T01:09:14.663963935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 9 01:09:15.486588 containerd[1441]: time="2024-10-09T01:09:15.486527040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:15.487692 containerd[1441]: time="2024-10-09T01:09:15.487651361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Oct 9 01:09:15.488837 containerd[1441]: time="2024-10-09T01:09:15.488799363Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:15.491146 containerd[1441]: time="2024-10-09T01:09:15.491105009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:15.492052 containerd[1441]: time="2024-10-09T01:09:15.492017474Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 827.898448ms" Oct 9 01:09:15.492091 containerd[1441]: time="2024-10-09T01:09:15.492052797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Oct 9 01:09:15.493965 containerd[1441]: time="2024-10-09T01:09:15.493928292Z" level=info msg="CreateContainer within sandbox \"092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 9 01:09:15.555168 containerd[1441]: time="2024-10-09T01:09:15.555122328Z" level=info msg="CreateContainer within sandbox \"092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bd250f9c0c49c4681ea603609e66b3507b56bb9256e1bd77430322df5d7d81f1\"" Oct 9 01:09:15.556203 containerd[1441]: time="2024-10-09T01:09:15.556176324Z" level=info msg="StartContainer for \"bd250f9c0c49c4681ea603609e66b3507b56bb9256e1bd77430322df5d7d81f1\"" Oct 9 01:09:15.573020 kubelet[2608]: E1009 01:09:15.572919 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:15.591224 systemd[1]: Started cri-containerd-bd250f9c0c49c4681ea603609e66b3507b56bb9256e1bd77430322df5d7d81f1.scope - libcontainer container bd250f9c0c49c4681ea603609e66b3507b56bb9256e1bd77430322df5d7d81f1. 
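The recurring kubelet dns.go "Nameserver limits exceeded" errors in this log mean the node's resolv.conf lists more than three nameservers; kubelet applies only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8), matching the glibc resolver's three-server limit. A quick illustrative check, with the usual /etc/resolv.conf path assumed rather than taken from this log:

    LIMIT = 3  # kubelet keeps at most three nameservers, mirroring the glibc limit

    def check_resolv_conf(path="/etc/resolv.conf"):
        """Report which nameservers would survive kubelet's trimming."""
        servers = []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    servers.append(parts[1])
        if len(servers) > LIMIT:
            print(f"{len(servers)} nameservers configured; only {servers[:LIMIT]} will be applied")
        return servers[:LIMIT]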
Oct 9 01:09:15.619917 containerd[1441]: time="2024-10-09T01:09:15.619878260Z" level=info msg="StartContainer for \"bd250f9c0c49c4681ea603609e66b3507b56bb9256e1bd77430322df5d7d81f1\" returns successfully" Oct 9 01:09:15.621385 containerd[1441]: time="2024-10-09T01:09:15.621186554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 9 01:09:16.270584 systemd[1]: Started sshd@12-10.0.0.151:22-10.0.0.1:36412.service - OpenSSH per-connection server daemon (10.0.0.1:36412). Oct 9 01:09:16.317243 sshd[4723]: Accepted publickey for core from 10.0.0.1 port 36412 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:09:16.319868 sshd[4723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:16.325726 systemd-logind[1421]: New session 13 of user core. Oct 9 01:09:16.330410 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 9 01:09:16.516388 systemd-networkd[1364]: califdc96a40002: Gained IPv6LL Oct 9 01:09:16.595794 sshd[4723]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:16.604666 systemd[1]: sshd@12-10.0.0.151:22-10.0.0.1:36412.service: Deactivated successfully. Oct 9 01:09:16.606527 systemd[1]: session-13.scope: Deactivated successfully. Oct 9 01:09:16.608321 systemd-logind[1421]: Session 13 logged out. Waiting for processes to exit. Oct 9 01:09:16.614351 systemd[1]: Started sshd@13-10.0.0.151:22-10.0.0.1:36424.service - OpenSSH per-connection server daemon (10.0.0.1:36424). Oct 9 01:09:16.615712 systemd-logind[1421]: Removed session 13. Oct 9 01:09:16.631254 containerd[1441]: time="2024-10-09T01:09:16.631207531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:16.632021 containerd[1441]: time="2024-10-09T01:09:16.631966944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Oct 9 01:09:16.633208 containerd[1441]: time="2024-10-09T01:09:16.633184110Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:16.635371 containerd[1441]: time="2024-10-09T01:09:16.635337342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:09:16.636383 containerd[1441]: time="2024-10-09T01:09:16.636335533Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 1.015119816s" Oct 9 01:09:16.636383 containerd[1441]: time="2024-10-09T01:09:16.636374335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Oct 9 01:09:16.639708 containerd[1441]: time="2024-10-09T01:09:16.639658167Z" level=info msg="CreateContainer within sandbox \"092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 9 01:09:16.648090 sshd[4743]: Accepted publickey for core from 10.0.0.1 port 36424 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:09:16.648996 sshd[4743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:16.653513 containerd[1441]: time="2024-10-09T01:09:16.653468822Z" level=info msg="CreateContainer within sandbox \"092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6c074595641b2f7a93aa7a60d15a4339910a074e6acc0c87c64444237b3f93c0\"" Oct 9 01:09:16.654085 containerd[1441]: time="2024-10-09T01:09:16.654047903Z" level=info msg="StartContainer for \"6c074595641b2f7a93aa7a60d15a4339910a074e6acc0c87c64444237b3f93c0\"" Oct 9 01:09:16.655547 systemd-logind[1421]: New session 14 of user core. Oct 9 01:09:16.662222 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 9 01:09:16.690229 systemd[1]: Started cri-containerd-6c074595641b2f7a93aa7a60d15a4339910a074e6acc0c87c64444237b3f93c0.scope - libcontainer container 6c074595641b2f7a93aa7a60d15a4339910a074e6acc0c87c64444237b3f93c0. Oct 9 01:09:16.712302 containerd[1441]: time="2024-10-09T01:09:16.712259692Z" level=info msg="StartContainer for \"6c074595641b2f7a93aa7a60d15a4339910a074e6acc0c87c64444237b3f93c0\" returns successfully" Oct 9 01:09:16.889530 sshd[4743]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:16.902538 systemd[1]: sshd@13-10.0.0.151:22-10.0.0.1:36424.service: Deactivated successfully. Oct 9 01:09:16.904087 systemd[1]: session-14.scope: Deactivated successfully. Oct 9 01:09:16.905001 systemd-logind[1421]: Session 14 logged out. Waiting for processes to exit. Oct 9 01:09:16.906724 systemd[1]: Started sshd@14-10.0.0.151:22-10.0.0.1:36432.service - OpenSSH per-connection server daemon (10.0.0.1:36432). Oct 9 01:09:16.908642 systemd-logind[1421]: Removed session 14. Oct 9 01:09:16.963011 sshd[4794]: Accepted publickey for core from 10.0.0.1 port 36432 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:09:16.963632 sshd[4794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:16.967755 systemd-logind[1421]: New session 15 of user core. Oct 9 01:09:16.977192 systemd[1]: Started session-15.scope - Session 15 of User core. 
Oct 9 01:09:17.484125 kubelet[2608]: I1009 01:09:17.484090 2608 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 9 01:09:17.490091 kubelet[2608]: I1009 01:09:17.489734 2608 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 9 01:09:17.596112 kubelet[2608]: I1009 01:09:17.594684 2608 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-w4q2j" podStartSLOduration=25.621480717 podStartE2EDuration="27.594644691s" podCreationTimestamp="2024-10-09 01:08:50 +0000 UTC" firstStartedPulling="2024-10-09 01:09:14.663661593 +0000 UTC m=+45.363128998" lastFinishedPulling="2024-10-09 01:09:16.636825527 +0000 UTC m=+47.336292972" observedRunningTime="2024-10-09 01:09:17.594529203 +0000 UTC m=+48.293996648" watchObservedRunningTime="2024-10-09 01:09:17.594644691 +0000 UTC m=+48.294112136" Oct 9 01:09:18.431340 sshd[4794]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:18.439610 systemd[1]: sshd@14-10.0.0.151:22-10.0.0.1:36432.service: Deactivated successfully. Oct 9 01:09:18.442341 systemd[1]: session-15.scope: Deactivated successfully. Oct 9 01:09:18.449820 systemd-logind[1421]: Session 15 logged out. Waiting for processes to exit. Oct 9 01:09:18.465709 systemd[1]: Started sshd@15-10.0.0.151:22-10.0.0.1:36440.service - OpenSSH per-connection server daemon (10.0.0.1:36440). Oct 9 01:09:18.469212 systemd-logind[1421]: Removed session 15. Oct 9 01:09:18.508110 sshd[4813]: Accepted publickey for core from 10.0.0.1 port 36440 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:09:18.509014 sshd[4813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:18.513104 systemd-logind[1421]: New session 16 of user core. Oct 9 01:09:18.519200 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 9 01:09:18.769525 sshd[4813]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:18.778675 systemd[1]: sshd@15-10.0.0.151:22-10.0.0.1:36440.service: Deactivated successfully. Oct 9 01:09:18.780188 systemd[1]: session-16.scope: Deactivated successfully. Oct 9 01:09:18.781912 systemd-logind[1421]: Session 16 logged out. Waiting for processes to exit. Oct 9 01:09:18.789339 systemd[1]: Started sshd@16-10.0.0.151:22-10.0.0.1:36442.service - OpenSSH per-connection server daemon (10.0.0.1:36442). Oct 9 01:09:18.790248 systemd-logind[1421]: Removed session 16. Oct 9 01:09:18.822314 sshd[4827]: Accepted publickey for core from 10.0.0.1 port 36442 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:09:18.823607 sshd[4827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:18.827612 systemd-logind[1421]: New session 17 of user core. Oct 9 01:09:18.835214 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 9 01:09:18.964719 sshd[4827]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:18.968083 systemd[1]: sshd@16-10.0.0.151:22-10.0.0.1:36442.service: Deactivated successfully. Oct 9 01:09:18.970712 systemd[1]: session-17.scope: Deactivated successfully. Oct 9 01:09:18.971390 systemd-logind[1421]: Session 17 logged out. Waiting for processes to exit. Oct 9 01:09:18.972272 systemd-logind[1421]: Removed session 17. 
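The pod_startup_latency_tracker entry at 01:09:17.594 above carries enough data to reconstruct both figures for csi-node-driver-w4q2j: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is, to within the reported precision, that span minus the image-pull window (firstStartedPulling to lastFinishedPulling on the monotonic m=+ clock), i.e. startup latency with image pulls excluded. Redoing the arithmetic with the values from that entry:

    # Figures copied from the kubelet pod_startup_latency_tracker entry above.
    e2e          = 27.594644691    # 01:09:17.594644691 (watchObservedRunningTime) - 01:08:50 (creation)
    pull_started = 45.363128998    # firstStartedPulling, monotonic m=+ offset
    pull_done    = 47.336292972    # lastFinishedPulling, monotonic m=+ offset

    slo = e2e - (pull_done - pull_started)   # startup latency with the image pull excluded
    print(f"{slo:.9f}")                      # 25.621480717, the reported podStartSLOduration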
Oct 9 01:09:23.975558 systemd[1]: Started sshd@17-10.0.0.151:22-10.0.0.1:34444.service - OpenSSH per-connection server daemon (10.0.0.1:34444). Oct 9 01:09:24.011640 sshd[4857]: Accepted publickey for core from 10.0.0.1 port 34444 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:09:24.012827 sshd[4857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:24.016101 systemd-logind[1421]: New session 18 of user core. Oct 9 01:09:24.024210 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 9 01:09:24.169694 sshd[4857]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:24.172424 systemd[1]: sshd@17-10.0.0.151:22-10.0.0.1:34444.service: Deactivated successfully. Oct 9 01:09:24.174521 systemd[1]: session-18.scope: Deactivated successfully. Oct 9 01:09:24.175837 systemd-logind[1421]: Session 18 logged out. Waiting for processes to exit. Oct 9 01:09:24.176753 systemd-logind[1421]: Removed session 18. Oct 9 01:09:29.181633 systemd[1]: Started sshd@18-10.0.0.151:22-10.0.0.1:34448.service - OpenSSH per-connection server daemon (10.0.0.1:34448). Oct 9 01:09:29.218501 sshd[4879]: Accepted publickey for core from 10.0.0.1 port 34448 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:09:29.219678 sshd[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:29.222895 systemd-logind[1421]: New session 19 of user core. Oct 9 01:09:29.233207 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 9 01:09:29.374237 containerd[1441]: time="2024-10-09T01:09:29.374200480Z" level=info msg="StopPodSandbox for \"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\"" Oct 9 01:09:29.375293 sshd[4879]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:29.379185 systemd[1]: sshd@18-10.0.0.151:22-10.0.0.1:34448.service: Deactivated successfully. Oct 9 01:09:29.380832 systemd[1]: session-19.scope: Deactivated successfully. Oct 9 01:09:29.381951 systemd-logind[1421]: Session 19 logged out. Waiting for processes to exit. Oct 9 01:09:29.383106 systemd-logind[1421]: Removed session 19. Oct 9 01:09:29.448155 containerd[1441]: 2024-10-09 01:09:29.413 [WARNING][4907] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0", GenerateName:"calico-kube-controllers-6dd6bf9548-", Namespace:"calico-system", SelfLink:"", UID:"7a1c35a3-8044-40a1-816a-48efa14a6135", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 8, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dd6bf9548", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd", Pod:"calico-kube-controllers-6dd6bf9548-fflk7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia5ccc9c2411", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:09:29.448155 containerd[1441]: 2024-10-09 01:09:29.413 [INFO][4907] k8s.go 608: Cleaning up netns ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Oct 9 01:09:29.448155 containerd[1441]: 2024-10-09 01:09:29.413 [INFO][4907] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" iface="eth0" netns="" Oct 9 01:09:29.448155 containerd[1441]: 2024-10-09 01:09:29.413 [INFO][4907] k8s.go 615: Releasing IP address(es) ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Oct 9 01:09:29.448155 containerd[1441]: 2024-10-09 01:09:29.413 [INFO][4907] utils.go 188: Calico CNI releasing IP address ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Oct 9 01:09:29.448155 containerd[1441]: 2024-10-09 01:09:29.435 [INFO][4917] ipam_plugin.go 417: Releasing address using handleID ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" HandleID="k8s-pod-network.998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Workload="localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0" Oct 9 01:09:29.448155 containerd[1441]: 2024-10-09 01:09:29.435 [INFO][4917] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:09:29.448155 containerd[1441]: 2024-10-09 01:09:29.435 [INFO][4917] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:09:29.448155 containerd[1441]: 2024-10-09 01:09:29.443 [WARNING][4917] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" HandleID="k8s-pod-network.998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Workload="localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0" Oct 9 01:09:29.448155 containerd[1441]: 2024-10-09 01:09:29.443 [INFO][4917] ipam_plugin.go 445: Releasing address using workloadID ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" HandleID="k8s-pod-network.998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Workload="localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0" Oct 9 01:09:29.448155 containerd[1441]: 2024-10-09 01:09:29.444 [INFO][4917] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:09:29.448155 containerd[1441]: 2024-10-09 01:09:29.446 [INFO][4907] k8s.go 621: Teardown processing complete. ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Oct 9 01:09:29.448686 containerd[1441]: time="2024-10-09T01:09:29.448589061Z" level=info msg="TearDown network for sandbox \"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\" successfully" Oct 9 01:09:29.448686 containerd[1441]: time="2024-10-09T01:09:29.448616462Z" level=info msg="StopPodSandbox for \"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\" returns successfully" Oct 9 01:09:29.449532 containerd[1441]: time="2024-10-09T01:09:29.449098571Z" level=info msg="RemovePodSandbox for \"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\"" Oct 9 01:09:29.452701 containerd[1441]: time="2024-10-09T01:09:29.452548538Z" level=info msg="Forcibly stopping sandbox \"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\"" Oct 9 01:09:29.522288 containerd[1441]: 2024-10-09 01:09:29.489 [WARNING][4940] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0", GenerateName:"calico-kube-controllers-6dd6bf9548-", Namespace:"calico-system", SelfLink:"", UID:"7a1c35a3-8044-40a1-816a-48efa14a6135", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 8, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dd6bf9548", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ed2a452619ad95500a10c4fed72e511da964d442168899d7b65e6a42ca5d05dd", Pod:"calico-kube-controllers-6dd6bf9548-fflk7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia5ccc9c2411", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:09:29.522288 containerd[1441]: 2024-10-09 01:09:29.489 [INFO][4940] k8s.go 608: Cleaning up netns ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Oct 9 01:09:29.522288 containerd[1441]: 2024-10-09 01:09:29.489 [INFO][4940] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" iface="eth0" netns="" Oct 9 01:09:29.522288 containerd[1441]: 2024-10-09 01:09:29.489 [INFO][4940] k8s.go 615: Releasing IP address(es) ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Oct 9 01:09:29.522288 containerd[1441]: 2024-10-09 01:09:29.489 [INFO][4940] utils.go 188: Calico CNI releasing IP address ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Oct 9 01:09:29.522288 containerd[1441]: 2024-10-09 01:09:29.508 [INFO][4947] ipam_plugin.go 417: Releasing address using handleID ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" HandleID="k8s-pod-network.998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Workload="localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0" Oct 9 01:09:29.522288 containerd[1441]: 2024-10-09 01:09:29.508 [INFO][4947] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:09:29.522288 containerd[1441]: 2024-10-09 01:09:29.508 [INFO][4947] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:09:29.522288 containerd[1441]: 2024-10-09 01:09:29.516 [WARNING][4947] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" HandleID="k8s-pod-network.998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Workload="localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0" Oct 9 01:09:29.522288 containerd[1441]: 2024-10-09 01:09:29.516 [INFO][4947] ipam_plugin.go 445: Releasing address using workloadID ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" HandleID="k8s-pod-network.998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Workload="localhost-k8s-calico--kube--controllers--6dd6bf9548--fflk7-eth0" Oct 9 01:09:29.522288 containerd[1441]: 2024-10-09 01:09:29.518 [INFO][4947] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:09:29.522288 containerd[1441]: 2024-10-09 01:09:29.520 [INFO][4940] k8s.go 621: Teardown processing complete. ContainerID="998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34" Oct 9 01:09:29.523578 containerd[1441]: time="2024-10-09T01:09:29.522759588Z" level=info msg="TearDown network for sandbox \"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\" successfully" Oct 9 01:09:29.526716 containerd[1441]: time="2024-10-09T01:09:29.526514893Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 01:09:29.526716 containerd[1441]: time="2024-10-09T01:09:29.526576097Z" level=info msg="RemovePodSandbox \"998b4382d91a09def88aa3b954db69a5337652bb87eaf66ca0cde4632dfaac34\" returns successfully" Oct 9 01:09:29.527176 containerd[1441]: time="2024-10-09T01:09:29.527104808Z" level=info msg="StopPodSandbox for \"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\"" Oct 9 01:09:29.592496 containerd[1441]: 2024-10-09 01:09:29.561 [WARNING][4969] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w4q2j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"171922d5-d611-4e67-8c10-daef097d9ad9", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 8, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358", Pod:"csi-node-driver-w4q2j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"califdc96a40002", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:09:29.592496 containerd[1441]: 2024-10-09 01:09:29.561 [INFO][4969] k8s.go 608: Cleaning up netns ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Oct 9 01:09:29.592496 containerd[1441]: 2024-10-09 01:09:29.561 [INFO][4969] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" iface="eth0" netns="" Oct 9 01:09:29.592496 containerd[1441]: 2024-10-09 01:09:29.561 [INFO][4969] k8s.go 615: Releasing IP address(es) ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Oct 9 01:09:29.592496 containerd[1441]: 2024-10-09 01:09:29.561 [INFO][4969] utils.go 188: Calico CNI releasing IP address ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Oct 9 01:09:29.592496 containerd[1441]: 2024-10-09 01:09:29.579 [INFO][4977] ipam_plugin.go 417: Releasing address using handleID ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" HandleID="k8s-pod-network.c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Workload="localhost-k8s-csi--node--driver--w4q2j-eth0" Oct 9 01:09:29.592496 containerd[1441]: 2024-10-09 01:09:29.579 [INFO][4977] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:09:29.592496 containerd[1441]: 2024-10-09 01:09:29.579 [INFO][4977] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:09:29.592496 containerd[1441]: 2024-10-09 01:09:29.587 [WARNING][4977] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" HandleID="k8s-pod-network.c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Workload="localhost-k8s-csi--node--driver--w4q2j-eth0" Oct 9 01:09:29.592496 containerd[1441]: 2024-10-09 01:09:29.587 [INFO][4977] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" HandleID="k8s-pod-network.c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Workload="localhost-k8s-csi--node--driver--w4q2j-eth0" Oct 9 01:09:29.592496 containerd[1441]: 2024-10-09 01:09:29.589 [INFO][4977] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:09:29.592496 containerd[1441]: 2024-10-09 01:09:29.590 [INFO][4969] k8s.go 621: Teardown processing complete. ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Oct 9 01:09:29.592496 containerd[1441]: time="2024-10-09T01:09:29.592453566Z" level=info msg="TearDown network for sandbox \"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\" successfully" Oct 9 01:09:29.593491 containerd[1441]: time="2024-10-09T01:09:29.592477528Z" level=info msg="StopPodSandbox for \"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\" returns successfully" Oct 9 01:09:29.593990 containerd[1441]: time="2024-10-09T01:09:29.593899173Z" level=info msg="RemovePodSandbox for \"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\"" Oct 9 01:09:29.593990 containerd[1441]: time="2024-10-09T01:09:29.593934055Z" level=info msg="Forcibly stopping sandbox \"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\"" Oct 9 01:09:29.665617 containerd[1441]: 2024-10-09 01:09:29.633 [WARNING][5000] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w4q2j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"171922d5-d611-4e67-8c10-daef097d9ad9", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 8, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"092ad1949868903222e93714ea424889e9c5964599f8c705c10c417fb0a97358", Pod:"csi-node-driver-w4q2j", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"califdc96a40002", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:09:29.665617 containerd[1441]: 2024-10-09 01:09:29.633 [INFO][5000] k8s.go 608: Cleaning up netns ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Oct 9 01:09:29.665617 containerd[1441]: 2024-10-09 01:09:29.633 [INFO][5000] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" iface="eth0" netns="" Oct 9 01:09:29.665617 containerd[1441]: 2024-10-09 01:09:29.633 [INFO][5000] k8s.go 615: Releasing IP address(es) ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Oct 9 01:09:29.665617 containerd[1441]: 2024-10-09 01:09:29.633 [INFO][5000] utils.go 188: Calico CNI releasing IP address ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Oct 9 01:09:29.665617 containerd[1441]: 2024-10-09 01:09:29.652 [INFO][5009] ipam_plugin.go 417: Releasing address using handleID ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" HandleID="k8s-pod-network.c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Workload="localhost-k8s-csi--node--driver--w4q2j-eth0" Oct 9 01:09:29.665617 containerd[1441]: 2024-10-09 01:09:29.652 [INFO][5009] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:09:29.665617 containerd[1441]: 2024-10-09 01:09:29.652 [INFO][5009] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:09:29.665617 containerd[1441]: 2024-10-09 01:09:29.660 [WARNING][5009] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" HandleID="k8s-pod-network.c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Workload="localhost-k8s-csi--node--driver--w4q2j-eth0" Oct 9 01:09:29.665617 containerd[1441]: 2024-10-09 01:09:29.660 [INFO][5009] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" HandleID="k8s-pod-network.c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Workload="localhost-k8s-csi--node--driver--w4q2j-eth0" Oct 9 01:09:29.665617 containerd[1441]: 2024-10-09 01:09:29.661 [INFO][5009] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:09:29.665617 containerd[1441]: 2024-10-09 01:09:29.663 [INFO][5000] k8s.go 621: Teardown processing complete. ContainerID="c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5" Oct 9 01:09:29.666711 containerd[1441]: time="2024-10-09T01:09:29.666052339Z" level=info msg="TearDown network for sandbox \"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\" successfully" Oct 9 01:09:29.669088 containerd[1441]: time="2024-10-09T01:09:29.669030038Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 01:09:29.669267 containerd[1441]: time="2024-10-09T01:09:29.669246491Z" level=info msg="RemovePodSandbox \"c16f65163a107173f6dc563504c2fbc96e662832bed021dd344099988c3824b5\" returns successfully" Oct 9 01:09:29.669935 containerd[1441]: time="2024-10-09T01:09:29.669913931Z" level=info msg="StopPodSandbox for \"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\"" Oct 9 01:09:29.736018 containerd[1441]: 2024-10-09 01:09:29.704 [WARNING][5032] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--2mdww-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4782287b-1310-451d-8736-24d2e3baa8fe", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650", Pod:"coredns-76f75df574-2mdww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia29ec39b0b7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:09:29.736018 containerd[1441]: 2024-10-09 01:09:29.704 [INFO][5032] k8s.go 608: Cleaning up netns ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Oct 9 01:09:29.736018 containerd[1441]: 2024-10-09 01:09:29.704 [INFO][5032] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" iface="eth0" netns="" Oct 9 01:09:29.736018 containerd[1441]: 2024-10-09 01:09:29.704 [INFO][5032] k8s.go 615: Releasing IP address(es) ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Oct 9 01:09:29.736018 containerd[1441]: 2024-10-09 01:09:29.704 [INFO][5032] utils.go 188: Calico CNI releasing IP address ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Oct 9 01:09:29.736018 containerd[1441]: 2024-10-09 01:09:29.723 [INFO][5039] ipam_plugin.go 417: Releasing address using handleID ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" HandleID="k8s-pod-network.587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Workload="localhost-k8s-coredns--76f75df574--2mdww-eth0" Oct 9 01:09:29.736018 containerd[1441]: 2024-10-09 01:09:29.723 [INFO][5039] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:09:29.736018 containerd[1441]: 2024-10-09 01:09:29.723 [INFO][5039] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:09:29.736018 containerd[1441]: 2024-10-09 01:09:29.730 [WARNING][5039] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" HandleID="k8s-pod-network.587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Workload="localhost-k8s-coredns--76f75df574--2mdww-eth0" Oct 9 01:09:29.736018 containerd[1441]: 2024-10-09 01:09:29.730 [INFO][5039] ipam_plugin.go 445: Releasing address using workloadID ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" HandleID="k8s-pod-network.587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Workload="localhost-k8s-coredns--76f75df574--2mdww-eth0" Oct 9 01:09:29.736018 containerd[1441]: 2024-10-09 01:09:29.732 [INFO][5039] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:09:29.736018 containerd[1441]: 2024-10-09 01:09:29.733 [INFO][5032] k8s.go 621: Teardown processing complete. ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Oct 9 01:09:29.736018 containerd[1441]: time="2024-10-09T01:09:29.735840484Z" level=info msg="TearDown network for sandbox \"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\" successfully" Oct 9 01:09:29.736018 containerd[1441]: time="2024-10-09T01:09:29.735861325Z" level=info msg="StopPodSandbox for \"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\" returns successfully" Oct 9 01:09:29.737305 containerd[1441]: time="2024-10-09T01:09:29.736980912Z" level=info msg="RemovePodSandbox for \"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\"" Oct 9 01:09:29.737305 containerd[1441]: time="2024-10-09T01:09:29.737006274Z" level=info msg="Forcibly stopping sandbox \"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\"" Oct 9 01:09:29.801381 containerd[1441]: 2024-10-09 01:09:29.770 [WARNING][5062] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--2mdww-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4782287b-1310-451d-8736-24d2e3baa8fe", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0fe95d6751a220af7ad356afe3f51f2297a356ed1a7fccc105cf0768051a6650", Pod:"coredns-76f75df574-2mdww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia29ec39b0b7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:09:29.801381 containerd[1441]: 2024-10-09 01:09:29.770 [INFO][5062] k8s.go 608: Cleaning up netns ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Oct 9 01:09:29.801381 containerd[1441]: 2024-10-09 01:09:29.770 [INFO][5062] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" iface="eth0" netns="" Oct 9 01:09:29.801381 containerd[1441]: 2024-10-09 01:09:29.770 [INFO][5062] k8s.go 615: Releasing IP address(es) ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Oct 9 01:09:29.801381 containerd[1441]: 2024-10-09 01:09:29.770 [INFO][5062] utils.go 188: Calico CNI releasing IP address ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Oct 9 01:09:29.801381 containerd[1441]: 2024-10-09 01:09:29.788 [INFO][5070] ipam_plugin.go 417: Releasing address using handleID ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" HandleID="k8s-pod-network.587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Workload="localhost-k8s-coredns--76f75df574--2mdww-eth0" Oct 9 01:09:29.801381 containerd[1441]: 2024-10-09 01:09:29.788 [INFO][5070] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:09:29.801381 containerd[1441]: 2024-10-09 01:09:29.788 [INFO][5070] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:09:29.801381 containerd[1441]: 2024-10-09 01:09:29.796 [WARNING][5070] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" HandleID="k8s-pod-network.587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Workload="localhost-k8s-coredns--76f75df574--2mdww-eth0" Oct 9 01:09:29.801381 containerd[1441]: 2024-10-09 01:09:29.796 [INFO][5070] ipam_plugin.go 445: Releasing address using workloadID ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" HandleID="k8s-pod-network.587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Workload="localhost-k8s-coredns--76f75df574--2mdww-eth0" Oct 9 01:09:29.801381 containerd[1441]: 2024-10-09 01:09:29.797 [INFO][5070] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:09:29.801381 containerd[1441]: 2024-10-09 01:09:29.799 [INFO][5062] k8s.go 621: Teardown processing complete. ContainerID="587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af" Oct 9 01:09:29.802371 containerd[1441]: time="2024-10-09T01:09:29.801533143Z" level=info msg="TearDown network for sandbox \"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\" successfully" Oct 9 01:09:29.804499 containerd[1441]: time="2024-10-09T01:09:29.804469679Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 01:09:29.804655 containerd[1441]: time="2024-10-09T01:09:29.804639569Z" level=info msg="RemovePodSandbox \"587761316912dcaacf2fb38f88a478f303120df16285bd58caa507774a3732af\" returns successfully" Oct 9 01:09:29.805193 containerd[1441]: time="2024-10-09T01:09:29.805162920Z" level=info msg="StopPodSandbox for \"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\"" Oct 9 01:09:29.872746 containerd[1441]: 2024-10-09 01:09:29.838 [WARNING][5091] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--cmfsl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2834f708-e1ad-459c-8c3f-1cb2de9ef7de", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642", Pod:"coredns-76f75df574-cmfsl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9cc3e955ef5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:09:29.872746 containerd[1441]: 2024-10-09 01:09:29.838 [INFO][5091] k8s.go 608: Cleaning up netns ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Oct 9 01:09:29.872746 containerd[1441]: 2024-10-09 01:09:29.838 [INFO][5091] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" iface="eth0" netns="" Oct 9 01:09:29.872746 containerd[1441]: 2024-10-09 01:09:29.838 [INFO][5091] k8s.go 615: Releasing IP address(es) ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Oct 9 01:09:29.872746 containerd[1441]: 2024-10-09 01:09:29.838 [INFO][5091] utils.go 188: Calico CNI releasing IP address ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Oct 9 01:09:29.872746 containerd[1441]: 2024-10-09 01:09:29.856 [INFO][5099] ipam_plugin.go 417: Releasing address using handleID ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" HandleID="k8s-pod-network.c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Workload="localhost-k8s-coredns--76f75df574--cmfsl-eth0" Oct 9 01:09:29.872746 containerd[1441]: 2024-10-09 01:09:29.857 [INFO][5099] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:09:29.872746 containerd[1441]: 2024-10-09 01:09:29.857 [INFO][5099] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:09:29.872746 containerd[1441]: 2024-10-09 01:09:29.865 [WARNING][5099] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" HandleID="k8s-pod-network.c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Workload="localhost-k8s-coredns--76f75df574--cmfsl-eth0" Oct 9 01:09:29.872746 containerd[1441]: 2024-10-09 01:09:29.865 [INFO][5099] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" HandleID="k8s-pod-network.c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Workload="localhost-k8s-coredns--76f75df574--cmfsl-eth0" Oct 9 01:09:29.872746 containerd[1441]: 2024-10-09 01:09:29.868 [INFO][5099] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:09:29.872746 containerd[1441]: 2024-10-09 01:09:29.871 [INFO][5091] k8s.go 621: Teardown processing complete. ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Oct 9 01:09:29.873377 containerd[1441]: time="2024-10-09T01:09:29.873244962Z" level=info msg="TearDown network for sandbox \"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\" successfully" Oct 9 01:09:29.873377 containerd[1441]: time="2024-10-09T01:09:29.873275044Z" level=info msg="StopPodSandbox for \"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\" returns successfully" Oct 9 01:09:29.874044 containerd[1441]: time="2024-10-09T01:09:29.873712030Z" level=info msg="RemovePodSandbox for \"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\"" Oct 9 01:09:29.874044 containerd[1441]: time="2024-10-09T01:09:29.873744232Z" level=info msg="Forcibly stopping sandbox \"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\"" Oct 9 01:09:29.937398 containerd[1441]: 2024-10-09 01:09:29.906 [WARNING][5121] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--cmfsl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2834f708-e1ad-459c-8c3f-1cb2de9ef7de", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 8, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b6519d4297e54c4c57fb547c64fc94fc7caf1793896667e6352414eb25b7d642", Pod:"coredns-76f75df574-cmfsl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9cc3e955ef5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:09:29.937398 containerd[1441]: 2024-10-09 01:09:29.906 [INFO][5121] k8s.go 608: Cleaning up netns ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Oct 9 01:09:29.937398 containerd[1441]: 2024-10-09 01:09:29.906 [INFO][5121] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" iface="eth0" netns="" Oct 9 01:09:29.937398 containerd[1441]: 2024-10-09 01:09:29.906 [INFO][5121] k8s.go 615: Releasing IP address(es) ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Oct 9 01:09:29.937398 containerd[1441]: 2024-10-09 01:09:29.906 [INFO][5121] utils.go 188: Calico CNI releasing IP address ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Oct 9 01:09:29.937398 containerd[1441]: 2024-10-09 01:09:29.924 [INFO][5129] ipam_plugin.go 417: Releasing address using handleID ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" HandleID="k8s-pod-network.c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Workload="localhost-k8s-coredns--76f75df574--cmfsl-eth0" Oct 9 01:09:29.937398 containerd[1441]: 2024-10-09 01:09:29.924 [INFO][5129] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:09:29.937398 containerd[1441]: 2024-10-09 01:09:29.924 [INFO][5129] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:09:29.937398 containerd[1441]: 2024-10-09 01:09:29.932 [WARNING][5129] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" HandleID="k8s-pod-network.c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Workload="localhost-k8s-coredns--76f75df574--cmfsl-eth0" Oct 9 01:09:29.937398 containerd[1441]: 2024-10-09 01:09:29.932 [INFO][5129] ipam_plugin.go 445: Releasing address using workloadID ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" HandleID="k8s-pod-network.c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Workload="localhost-k8s-coredns--76f75df574--cmfsl-eth0" Oct 9 01:09:29.937398 containerd[1441]: 2024-10-09 01:09:29.933 [INFO][5129] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:09:29.937398 containerd[1441]: 2024-10-09 01:09:29.935 [INFO][5121] k8s.go 621: Teardown processing complete. ContainerID="c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5" Oct 9 01:09:29.937766 containerd[1441]: time="2024-10-09T01:09:29.937432411Z" level=info msg="TearDown network for sandbox \"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\" successfully" Oct 9 01:09:29.947147 containerd[1441]: time="2024-10-09T01:09:29.947097351Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 9 01:09:29.947237 containerd[1441]: time="2024-10-09T01:09:29.947191676Z" level=info msg="RemovePodSandbox \"c77207dc10fee8846fd426831f4addaecf8f564a8c11d0430d3d1e60e9a5d6a5\" returns successfully" Oct 9 01:09:31.327762 kubelet[2608]: E1009 01:09:31.327725 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 9 01:09:34.387558 systemd[1]: Started sshd@19-10.0.0.151:22-10.0.0.1:37802.service - OpenSSH per-connection server daemon (10.0.0.1:37802). Oct 9 01:09:34.426948 sshd[5182]: Accepted publickey for core from 10.0.0.1 port 37802 ssh2: RSA SHA256:nRWADPtu01909VH1n4/VEkamAOeuD1sYuu1knWF4jhs Oct 9 01:09:34.429607 sshd[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 9 01:09:34.437123 systemd-logind[1421]: New session 20 of user core. Oct 9 01:09:34.440596 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 9 01:09:34.578009 sshd[5182]: pam_unix(sshd:session): session closed for user core Oct 9 01:09:34.581904 systemd[1]: sshd@19-10.0.0.151:22-10.0.0.1:37802.service: Deactivated successfully. Oct 9 01:09:34.584000 systemd[1]: session-20.scope: Deactivated successfully. Oct 9 01:09:34.585754 systemd-logind[1421]: Session 20 logged out. Waiting for processes to exit. Oct 9 01:09:34.587764 systemd-logind[1421]: Removed session 20. Oct 9 01:09:35.726653 kubelet[2608]: I1009 01:09:35.726614 2608 topology_manager.go:215] "Topology Admit Handler" podUID="b6f58e4d-4996-4486-b3d1-a56b77bdb95b" podNamespace="calico-apiserver" podName="calico-apiserver-d65db7c8d-qgvmc" Oct 9 01:09:35.735150 systemd[1]: Created slice kubepods-besteffort-podb6f58e4d_4996_4486_b3d1_a56b77bdb95b.slice - libcontainer container kubepods-besteffort-podb6f58e4d_4996_4486_b3d1_a56b77bdb95b.slice. 
Oct 9 01:09:35.861480 kubelet[2608]: I1009 01:09:35.861237 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b6f58e4d-4996-4486-b3d1-a56b77bdb95b-calico-apiserver-certs\") pod \"calico-apiserver-d65db7c8d-qgvmc\" (UID: \"b6f58e4d-4996-4486-b3d1-a56b77bdb95b\") " pod="calico-apiserver/calico-apiserver-d65db7c8d-qgvmc" Oct 9 01:09:35.861480 kubelet[2608]: I1009 01:09:35.861294 2608 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xppdw\" (UniqueName: \"kubernetes.io/projected/b6f58e4d-4996-4486-b3d1-a56b77bdb95b-kube-api-access-xppdw\") pod \"calico-apiserver-d65db7c8d-qgvmc\" (UID: \"b6f58e4d-4996-4486-b3d1-a56b77bdb95b\") " pod="calico-apiserver/calico-apiserver-d65db7c8d-qgvmc" Oct 9 01:09:36.039047 containerd[1441]: time="2024-10-09T01:09:36.038937371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d65db7c8d-qgvmc,Uid:b6f58e4d-4996-4486-b3d1-a56b77bdb95b,Namespace:calico-apiserver,Attempt:0,}" Oct 9 01:09:36.166104 systemd-networkd[1364]: calida21a2537ac: Link UP Oct 9 01:09:36.166726 systemd-networkd[1364]: calida21a2537ac: Gained carrier Oct 9 01:09:36.183710 containerd[1441]: 2024-10-09 01:09:36.089 [INFO][5210] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d65db7c8d--qgvmc-eth0 calico-apiserver-d65db7c8d- calico-apiserver b6f58e4d-4996-4486-b3d1-a56b77bdb95b 1109 0 2024-10-09 01:09:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d65db7c8d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d65db7c8d-qgvmc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calida21a2537ac [] []}} ContainerID="61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b" Namespace="calico-apiserver" Pod="calico-apiserver-d65db7c8d-qgvmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--d65db7c8d--qgvmc-" Oct 9 01:09:36.183710 containerd[1441]: 2024-10-09 01:09:36.089 [INFO][5210] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b" Namespace="calico-apiserver" Pod="calico-apiserver-d65db7c8d-qgvmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--d65db7c8d--qgvmc-eth0" Oct 9 01:09:36.183710 containerd[1441]: 2024-10-09 01:09:36.118 [INFO][5223] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b" HandleID="k8s-pod-network.61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b" Workload="localhost-k8s-calico--apiserver--d65db7c8d--qgvmc-eth0" Oct 9 01:09:36.183710 containerd[1441]: 2024-10-09 01:09:36.131 [INFO][5223] ipam_plugin.go 270: Auto assigning IP ContainerID="61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b" HandleID="k8s-pod-network.61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b" Workload="localhost-k8s-calico--apiserver--d65db7c8d--qgvmc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000293f20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d65db7c8d-qgvmc", "timestamp":"2024-10-09 
01:09:36.118760047 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:09:36.183710 containerd[1441]: 2024-10-09 01:09:36.131 [INFO][5223] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:09:36.183710 containerd[1441]: 2024-10-09 01:09:36.131 [INFO][5223] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:09:36.183710 containerd[1441]: 2024-10-09 01:09:36.131 [INFO][5223] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 9 01:09:36.183710 containerd[1441]: 2024-10-09 01:09:36.133 [INFO][5223] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b" host="localhost" Oct 9 01:09:36.183710 containerd[1441]: 2024-10-09 01:09:36.139 [INFO][5223] ipam.go 372: Looking up existing affinities for host host="localhost" Oct 9 01:09:36.183710 containerd[1441]: 2024-10-09 01:09:36.144 [INFO][5223] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Oct 9 01:09:36.183710 containerd[1441]: 2024-10-09 01:09:36.146 [INFO][5223] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 9 01:09:36.183710 containerd[1441]: 2024-10-09 01:09:36.149 [INFO][5223] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 9 01:09:36.183710 containerd[1441]: 2024-10-09 01:09:36.149 [INFO][5223] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b" host="localhost" Oct 9 01:09:36.183710 containerd[1441]: 2024-10-09 01:09:36.152 [INFO][5223] ipam.go 1685: Creating new handle: k8s-pod-network.61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b Oct 9 01:09:36.183710 containerd[1441]: 2024-10-09 01:09:36.155 [INFO][5223] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b" host="localhost" Oct 9 01:09:36.183710 containerd[1441]: 2024-10-09 01:09:36.161 [INFO][5223] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b" host="localhost" Oct 9 01:09:36.183710 containerd[1441]: 2024-10-09 01:09:36.161 [INFO][5223] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b" host="localhost" Oct 9 01:09:36.183710 containerd[1441]: 2024-10-09 01:09:36.161 [INFO][5223] ipam_plugin.go 379: Released host-wide IPAM lock. 
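The IPAM entries above spell out the assignment sequence for the new calico-apiserver pod: acquire the host-wide IPAM lock, confirm this host's affinity for block 192.168.88.128/26, load the block, claim the next free address (192.168.88.133 here), write the block back, and release the lock. The sketch below reproduces the "lock, scan the affine block, claim" step with an in-memory block; the pre-existing allocations in main are made up so that the next free address matches the log, and none of this is Calico's real datastore model.

// Sketch of the "acquire lock -> load affine block -> claim next free IP"
// sequence walked through in the IPAM entries above. The in-memory block is
// an illustrative stand-in for Calico's datastore-backed allocation block.
package main

import (
	"fmt"
	"net"
	"sync"
)

type allocationBlock struct {
	mu    sync.Mutex        // stands in for the host-wide IPAM lock
	cidr  *net.IPNet        // e.g. 192.168.88.128/26
	inUse map[string]string // IP -> handle ID
}

// claim assigns the lowest free address in the block to handleID.
func (b *allocationBlock) claim(handleID string) (net.IP, error) {
	b.mu.Lock()
	defer b.mu.Unlock()

	base := b.cidr.IP.Mask(b.cidr.Mask)
	ones, bits := b.cidr.Mask.Size()
	for i := 0; i < 1<<(bits-ones); i++ {
		candidate := make(net.IP, len(base))
		copy(candidate, base)
		candidate[len(candidate)-1] += byte(i)
		if _, taken := b.inUse[candidate.String()]; !taken {
			b.inUse[candidate.String()] = handleID
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("block %s is full", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	block := &allocationBlock{
		cidr: cidr,
		inUse: map[string]string{
			// Made-up prior allocations so the next free address is .133,
			// matching the assignment in the log above (.130 and .131 are the
			// two coredns pods seen earlier; the rest are hypothetical).
			"192.168.88.128": "reserved",
			"192.168.88.129": "pod-a",
			"192.168.88.130": "coredns-76f75df574-cmfsl",
			"192.168.88.131": "coredns-76f75df574-2mdww",
			"192.168.88.132": "pod-b",
		},
	}
	ip, err := block.claim("k8s-pod-network.61fe4087c22d")
	if err != nil {
		panic(err)
	}
	fmt.Println("assigned", ip) // prints 192.168.88.133
}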
Oct 9 01:09:36.183710 containerd[1441]: 2024-10-09 01:09:36.161 [INFO][5223] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b" HandleID="k8s-pod-network.61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b" Workload="localhost-k8s-calico--apiserver--d65db7c8d--qgvmc-eth0" Oct 9 01:09:36.184273 containerd[1441]: 2024-10-09 01:09:36.164 [INFO][5210] k8s.go 386: Populated endpoint ContainerID="61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b" Namespace="calico-apiserver" Pod="calico-apiserver-d65db7c8d-qgvmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--d65db7c8d--qgvmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d65db7c8d--qgvmc-eth0", GenerateName:"calico-apiserver-d65db7c8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6f58e4d-4996-4486-b3d1-a56b77bdb95b", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 9, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d65db7c8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d65db7c8d-qgvmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calida21a2537ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:09:36.184273 containerd[1441]: 2024-10-09 01:09:36.164 [INFO][5210] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b" Namespace="calico-apiserver" Pod="calico-apiserver-d65db7c8d-qgvmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--d65db7c8d--qgvmc-eth0" Oct 9 01:09:36.184273 containerd[1441]: 2024-10-09 01:09:36.164 [INFO][5210] dataplane_linux.go 68: Setting the host side veth name to calida21a2537ac ContainerID="61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b" Namespace="calico-apiserver" Pod="calico-apiserver-d65db7c8d-qgvmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--d65db7c8d--qgvmc-eth0" Oct 9 01:09:36.184273 containerd[1441]: 2024-10-09 01:09:36.166 [INFO][5210] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b" Namespace="calico-apiserver" Pod="calico-apiserver-d65db7c8d-qgvmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--d65db7c8d--qgvmc-eth0" Oct 9 01:09:36.184273 containerd[1441]: 2024-10-09 01:09:36.169 [INFO][5210] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b" Namespace="calico-apiserver" Pod="calico-apiserver-d65db7c8d-qgvmc" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--d65db7c8d--qgvmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d65db7c8d--qgvmc-eth0", GenerateName:"calico-apiserver-d65db7c8d-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6f58e4d-4996-4486-b3d1-a56b77bdb95b", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 9, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d65db7c8d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b", Pod:"calico-apiserver-d65db7c8d-qgvmc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calida21a2537ac", MAC:"86:42:b8:7c:2d:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:09:36.184273 containerd[1441]: 2024-10-09 01:09:36.179 [INFO][5210] k8s.go 500: Wrote updated endpoint to datastore ContainerID="61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b" Namespace="calico-apiserver" Pod="calico-apiserver-d65db7c8d-qgvmc" WorkloadEndpoint="localhost-k8s-calico--apiserver--d65db7c8d--qgvmc-eth0" Oct 9 01:09:36.205134 containerd[1441]: time="2024-10-09T01:09:36.204859699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:09:36.205134 containerd[1441]: time="2024-10-09T01:09:36.204916377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:09:36.205134 containerd[1441]: time="2024-10-09T01:09:36.204931096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:36.205134 containerd[1441]: time="2024-10-09T01:09:36.205018853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:09:36.222270 systemd[1]: Started cri-containerd-61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b.scope - libcontainer container 61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b. 
Oct 9 01:09:36.233282 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 9 01:09:36.256403 containerd[1441]: time="2024-10-09T01:09:36.256311226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d65db7c8d-qgvmc,Uid:b6f58e4d-4996-4486-b3d1-a56b77bdb95b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"61fe4087c22d1f347b19b252e0067a31b94209efbe1c2a16c3b6690709ff3c9b\"" Oct 9 01:09:36.258187 containerd[1441]: time="2024-10-09T01:09:36.257973086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
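When auditing a boot journal like this one, the long containerd entries are easier to skim once reduced to their log level and the 64-hex-character sandbox/container IDs they mention. The stdlib-only helper below does that for lines read on stdin; the regular expressions are ad hoc for this log format, and the program is not part of any containerd or kubelet tooling.

// Ad-hoc helper for skimming journal lines like the ones above: it prints
// the level= value and a shortened form of every 64-hex-character
// sandbox/container ID found in each line read from stdin. Purely
// illustrative.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	idRe    = regexp.MustCompile(`[0-9a-f]{64}`)
	levelRe = regexp.MustCompile(`level=(\w+)`)
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines here are very long
	for sc.Scan() {
		line := sc.Text()
		level := "info"
		if m := levelRe.FindStringSubmatch(line); m != nil {
			level = m[1]
		}
		for _, id := range idRe.FindAllString(line, -1) {
			fmt.Printf("%-7s %s\n", level, id[:12]) // short ID for readability
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
	}
}

Feeding it the journal text on stdin (for example, journalctl output piped into the program) makes it easy to follow a single sandbox such as 61fe4087c22d across the RunPodSandbox, CNI setup, and teardown entries above.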