Aug 5 22:07:10.900407 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 5 22:07:10.900429 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Aug 5 20:37:57 -00 2024
Aug 5 22:07:10.900440 kernel: KASLR enabled
Aug 5 22:07:10.900446 kernel: efi: EFI v2.7 by EDK II
Aug 5 22:07:10.900452 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Aug 5 22:07:10.900458 kernel: random: crng init done
Aug 5 22:07:10.900465 kernel: ACPI: Early table checksum verification disabled
Aug 5 22:07:10.900472 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Aug 5 22:07:10.900479 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 5 22:07:10.900487 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:07:10.900493 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:07:10.900500 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:07:10.900506 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:07:10.900513 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:07:10.900520 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:07:10.900528 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:07:10.900535 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:07:10.900542 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 5 22:07:10.900549 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 5 22:07:10.900556 kernel: NUMA: Failed to initialise from firmware
Aug 5 22:07:10.900563 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 5 22:07:10.900570 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Aug 5 22:07:10.900576 kernel: Zone ranges:
Aug 5 22:07:10.900583 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 5 22:07:10.900590 kernel: DMA32 empty
Aug 5 22:07:10.900598 kernel: Normal empty
Aug 5 22:07:10.900605 kernel: Movable zone start for each node
Aug 5 22:07:10.900611 kernel: Early memory node ranges
Aug 5 22:07:10.900618 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Aug 5 22:07:10.900625 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Aug 5 22:07:10.900632 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Aug 5 22:07:10.900639 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Aug 5 22:07:10.900645 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Aug 5 22:07:10.900652 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Aug 5 22:07:10.900659 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Aug 5 22:07:10.900666 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 5 22:07:10.900673 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 5 22:07:10.900681 kernel: psci: probing for conduit method from ACPI.
Aug 5 22:07:10.900688 kernel: psci: PSCIv1.1 detected in firmware.
Aug 5 22:07:10.900694 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 5 22:07:10.900704 kernel: psci: Trusted OS migration not required
Aug 5 22:07:10.900711 kernel: psci: SMC Calling Convention v1.1
Aug 5 22:07:10.900719 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 5 22:07:10.900728 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Aug 5 22:07:10.900735 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Aug 5 22:07:10.900742 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 5 22:07:10.900750 kernel: Detected PIPT I-cache on CPU0
Aug 5 22:07:10.900757 kernel: CPU features: detected: GIC system register CPU interface
Aug 5 22:07:10.900764 kernel: CPU features: detected: Hardware dirty bit management
Aug 5 22:07:10.900771 kernel: CPU features: detected: Spectre-v4
Aug 5 22:07:10.900778 kernel: CPU features: detected: Spectre-BHB
Aug 5 22:07:10.900786 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 5 22:07:10.900793 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 5 22:07:10.900802 kernel: CPU features: detected: ARM erratum 1418040
Aug 5 22:07:10.900809 kernel: alternatives: applying boot alternatives
Aug 5 22:07:10.900817 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4052403b8e39e55d48e6afcca927358798017aa0d33c868bc3038260a8d9be90
Aug 5 22:07:10.900825 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 5 22:07:10.900832 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 5 22:07:10.900839 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 5 22:07:10.900847 kernel: Fallback order for Node 0: 0
Aug 5 22:07:10.900854 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Aug 5 22:07:10.900862 kernel: Policy zone: DMA
Aug 5 22:07:10.900869 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 5 22:07:10.900876 kernel: software IO TLB: area num 4.
Aug 5 22:07:10.900885 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Aug 5 22:07:10.900893 kernel: Memory: 2386852K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185436K reserved, 0K cma-reserved)
Aug 5 22:07:10.900900 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 5 22:07:10.900907 kernel: trace event string verifier disabled
Aug 5 22:07:10.900914 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 5 22:07:10.900922 kernel: rcu: RCU event tracing is enabled.
Aug 5 22:07:10.900929 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 5 22:07:10.900937 kernel: Trampoline variant of Tasks RCU enabled.
Aug 5 22:07:10.900944 kernel: Tracing variant of Tasks RCU enabled.
Aug 5 22:07:10.900951 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 5 22:07:10.900959 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 5 22:07:10.900966 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 5 22:07:10.900975 kernel: GICv3: 256 SPIs implemented
Aug 5 22:07:10.900982 kernel: GICv3: 0 Extended SPIs implemented
Aug 5 22:07:10.900989 kernel: Root IRQ handler: gic_handle_irq
Aug 5 22:07:10.900996 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 5 22:07:10.901003 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 5 22:07:10.901011 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 5 22:07:10.901028 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Aug 5 22:07:10.901036 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Aug 5 22:07:10.901043 kernel: GICv3: using LPI property table @0x00000000400f0000
Aug 5 22:07:10.901051 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Aug 5 22:07:10.901058 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 5 22:07:10.901102 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 22:07:10.901110 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 5 22:07:10.901117 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 5 22:07:10.901125 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 5 22:07:10.901132 kernel: arm-pv: using stolen time PV
Aug 5 22:07:10.901140 kernel: Console: colour dummy device 80x25
Aug 5 22:07:10.901147 kernel: ACPI: Core revision 20230628
Aug 5 22:07:10.901155 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 5 22:07:10.901162 kernel: pid_max: default: 32768 minimum: 301
Aug 5 22:07:10.901169 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Aug 5 22:07:10.901178 kernel: SELinux: Initializing.
Aug 5 22:07:10.901186 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 22:07:10.901194 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 5 22:07:10.901202 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:07:10.901210 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:07:10.901217 kernel: rcu: Hierarchical SRCU implementation.
Aug 5 22:07:10.901229 kernel: rcu: Max phase no-delay instances is 400.
Aug 5 22:07:10.901236 kernel: Platform MSI: ITS@0x8080000 domain created
Aug 5 22:07:10.901244 kernel: PCI/MSI: ITS@0x8080000 domain created
Aug 5 22:07:10.901253 kernel: Remapping and enabling EFI services.
Aug 5 22:07:10.901260 kernel: smp: Bringing up secondary CPUs ...
Aug 5 22:07:10.901268 kernel: Detected PIPT I-cache on CPU1
Aug 5 22:07:10.901275 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 5 22:07:10.901283 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Aug 5 22:07:10.901290 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 22:07:10.901297 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 5 22:07:10.901305 kernel: Detected PIPT I-cache on CPU2
Aug 5 22:07:10.901312 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 5 22:07:10.901320 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Aug 5 22:07:10.901331 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 22:07:10.901339 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 5 22:07:10.901352 kernel: Detected PIPT I-cache on CPU3
Aug 5 22:07:10.901361 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 5 22:07:10.901369 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Aug 5 22:07:10.901376 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 5 22:07:10.901384 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 5 22:07:10.901392 kernel: smp: Brought up 1 node, 4 CPUs
Aug 5 22:07:10.901400 kernel: SMP: Total of 4 processors activated.
Aug 5 22:07:10.901409 kernel: CPU features: detected: 32-bit EL0 Support
Aug 5 22:07:10.901417 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 5 22:07:10.901425 kernel: CPU features: detected: Common not Private translations
Aug 5 22:07:10.901433 kernel: CPU features: detected: CRC32 instructions
Aug 5 22:07:10.901440 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 5 22:07:10.901448 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 5 22:07:10.901456 kernel: CPU features: detected: LSE atomic instructions
Aug 5 22:07:10.901464 kernel: CPU features: detected: Privileged Access Never
Aug 5 22:07:10.901473 kernel: CPU features: detected: RAS Extension Support
Aug 5 22:07:10.901481 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 5 22:07:10.901488 kernel: CPU: All CPU(s) started at EL1
Aug 5 22:07:10.901496 kernel: alternatives: applying system-wide alternatives
Aug 5 22:07:10.901504 kernel: devtmpfs: initialized
Aug 5 22:07:10.901512 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 5 22:07:10.901520 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 5 22:07:10.901528 kernel: pinctrl core: initialized pinctrl subsystem
Aug 5 22:07:10.901536 kernel: SMBIOS 3.0.0 present.
Aug 5 22:07:10.901545 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Aug 5 22:07:10.901553 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 5 22:07:10.901561 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 5 22:07:10.901569 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 5 22:07:10.901576 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 5 22:07:10.901584 kernel: audit: initializing netlink subsys (disabled)
Aug 5 22:07:10.901592 kernel: audit: type=2000 audit(0.022:1): state=initialized audit_enabled=0 res=1
Aug 5 22:07:10.901600 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 5 22:07:10.901608 kernel: cpuidle: using governor menu
Aug 5 22:07:10.901617 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 5 22:07:10.901625 kernel: ASID allocator initialised with 32768 entries
Aug 5 22:07:10.901633 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 5 22:07:10.901640 kernel: Serial: AMBA PL011 UART driver
Aug 5 22:07:10.901648 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 5 22:07:10.901656 kernel: Modules: 0 pages in range for non-PLT usage
Aug 5 22:07:10.901664 kernel: Modules: 509120 pages in range for PLT usage
Aug 5 22:07:10.901671 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 5 22:07:10.901679 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 5 22:07:10.901688 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 5 22:07:10.901696 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 5 22:07:10.901704 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 5 22:07:10.901712 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 5 22:07:10.901720 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 5 22:07:10.901728 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 5 22:07:10.901735 kernel: ACPI: Added _OSI(Module Device)
Aug 5 22:07:10.901743 kernel: ACPI: Added _OSI(Processor Device)
Aug 5 22:07:10.901751 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Aug 5 22:07:10.901760 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 5 22:07:10.901768 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 5 22:07:10.901775 kernel: ACPI: Interpreter enabled
Aug 5 22:07:10.901783 kernel: ACPI: Using GIC for interrupt routing
Aug 5 22:07:10.901791 kernel: ACPI: MCFG table detected, 1 entries
Aug 5 22:07:10.901799 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 5 22:07:10.901807 kernel: printk: console [ttyAMA0] enabled
Aug 5 22:07:10.901815 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 5 22:07:10.901951 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 5 22:07:10.902038 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 5 22:07:10.902121 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 5 22:07:10.902193 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 5 22:07:10.902268 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 5 22:07:10.902279 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 5 22:07:10.902287 kernel: PCI host bridge to bus 0000:00
Aug 5 22:07:10.902362 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 5 22:07:10.902430 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 5 22:07:10.902493 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 5 22:07:10.902555 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 5 22:07:10.902638 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Aug 5 22:07:10.902719 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Aug 5 22:07:10.902791 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Aug 5 22:07:10.902864 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Aug 5 22:07:10.902934 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 5 22:07:10.903005 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 5 22:07:10.903093 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Aug 5 22:07:10.903171 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Aug 5 22:07:10.903236 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 5 22:07:10.903304 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 5 22:07:10.903475 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 5 22:07:10.903492 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 5 22:07:10.903501 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 5 22:07:10.903551 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 5 22:07:10.903562 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 5 22:07:10.903570 kernel: iommu: Default domain type: Translated
Aug 5 22:07:10.903578 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 5 22:07:10.903586 kernel: efivars: Registered efivars operations
Aug 5 22:07:10.903593 kernel: vgaarb: loaded
Aug 5 22:07:10.903607 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 5 22:07:10.903649 kernel: VFS: Disk quotas dquot_6.6.0
Aug 5 22:07:10.903658 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 5 22:07:10.903666 kernel: pnp: PnP ACPI init
Aug 5 22:07:10.904036 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 5 22:07:10.904056 kernel: pnp: PnP ACPI: found 1 devices
Aug 5 22:07:10.904080 kernel: NET: Registered PF_INET protocol family
Aug 5 22:07:10.904088 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 5 22:07:10.904211 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 5 22:07:10.904222 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 5 22:07:10.904281 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 5 22:07:10.904291 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 5 22:07:10.904299 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 5 22:07:10.904307 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 22:07:10.904315 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 5 22:07:10.904323 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 5 22:07:10.904331 kernel: PCI: CLS 0 bytes, default 64
Aug 5 22:07:10.904419 kernel: kvm [1]: HYP mode not available
Aug 5 22:07:10.904428 kernel: Initialise system trusted keyrings
Aug 5 22:07:10.904436 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 5 22:07:10.904444 kernel: Key type asymmetric registered
Aug 5 22:07:10.904452 kernel: Asymmetric key parser 'x509' registered
Aug 5 22:07:10.904460 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 5 22:07:10.904468 kernel: io scheduler mq-deadline registered
Aug 5 22:07:10.904476 kernel: io scheduler kyber registered
Aug 5 22:07:10.904523 kernel: io scheduler bfq registered
Aug 5 22:07:10.904535 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 5 22:07:10.904543 kernel: ACPI: button: Power Button [PWRB]
Aug 5 22:07:10.904551 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 5 22:07:10.904659 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 5 22:07:10.904672 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 5 22:07:10.904680 kernel: thunder_xcv, ver 1.0
Aug 5 22:07:10.904688 kernel: thunder_bgx, ver 1.0
Aug 5 22:07:10.904696 kernel: nicpf, ver 1.0
Aug 5 22:07:10.904704 kernel: nicvf, ver 1.0
Aug 5 22:07:10.904788 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 5 22:07:10.904858 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-08-05T22:07:10 UTC (1722895630)
Aug 5 22:07:10.904869 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 5 22:07:10.904877 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Aug 5 22:07:10.904885 kernel: watchdog: Delayed init of the lockup detector failed: -19
Aug 5 22:07:10.904893 kernel: watchdog: Hard watchdog permanently disabled
Aug 5 22:07:10.904901 kernel: NET: Registered PF_INET6 protocol family
Aug 5 22:07:10.904909 kernel: Segment Routing with IPv6
Aug 5 22:07:10.904919 kernel: In-situ OAM (IOAM) with IPv6
Aug 5 22:07:10.904926 kernel: NET: Registered PF_PACKET protocol family
Aug 5 22:07:10.904934 kernel: Key type dns_resolver registered
Aug 5 22:07:10.904942 kernel: registered taskstats version 1
Aug 5 22:07:10.904950 kernel: Loading compiled-in X.509 certificates
Aug 5 22:07:10.904958 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: 99cab5c9e2f0f3a5ca972c2df7b3d6ed64d627d4'
Aug 5 22:07:10.904966 kernel: Key type .fscrypt registered
Aug 5 22:07:10.904974 kernel: Key type fscrypt-provisioning registered
Aug 5 22:07:10.904982 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 5 22:07:10.904991 kernel: ima: Allocated hash algorithm: sha1
Aug 5 22:07:10.904999 kernel: ima: No architecture policies found
Aug 5 22:07:10.905007 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 5 22:07:10.905022 kernel: clk: Disabling unused clocks
Aug 5 22:07:10.905032 kernel: Freeing unused kernel memory: 39040K
Aug 5 22:07:10.905040 kernel: Run /init as init process
Aug 5 22:07:10.905048 kernel: with arguments:
Aug 5 22:07:10.905056 kernel: /init
Aug 5 22:07:10.905077 kernel: with environment:
Aug 5 22:07:10.905088 kernel: HOME=/
Aug 5 22:07:10.905096 kernel: TERM=linux
Aug 5 22:07:10.905104 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 5 22:07:10.905114 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 22:07:10.905124 systemd[1]: Detected virtualization kvm.
Aug 5 22:07:10.905133 systemd[1]: Detected architecture arm64.
Aug 5 22:07:10.905141 systemd[1]: Running in initrd.
Aug 5 22:07:10.905149 systemd[1]: No hostname configured, using default hostname.
Aug 5 22:07:10.905159 systemd[1]: Hostname set to .
Aug 5 22:07:10.905168 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 22:07:10.905176 systemd[1]: Queued start job for default target initrd.target.
Aug 5 22:07:10.905184 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:07:10.905193 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:07:10.905202 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 5 22:07:10.905211 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 22:07:10.905220 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 5 22:07:10.905230 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 5 22:07:10.905240 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 5 22:07:10.905249 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 5 22:07:10.905258 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:07:10.905267 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:07:10.905275 systemd[1]: Reached target paths.target - Path Units.
Aug 5 22:07:10.905285 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 22:07:10.905294 systemd[1]: Reached target swap.target - Swaps.
Aug 5 22:07:10.905303 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 22:07:10.905311 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 22:07:10.905321 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 22:07:10.905331 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 5 22:07:10.905343 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 5 22:07:10.905352 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:07:10.905360 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:07:10.905370 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:07:10.905379 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 22:07:10.905388 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 5 22:07:10.905397 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 22:07:10.905406 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 5 22:07:10.905414 systemd[1]: Starting systemd-fsck-usr.service...
Aug 5 22:07:10.905423 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 22:07:10.905431 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 22:07:10.905440 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:07:10.905451 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 5 22:07:10.905459 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:07:10.905468 systemd[1]: Finished systemd-fsck-usr.service.
Aug 5 22:07:10.905477 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 22:07:10.905506 systemd-journald[237]: Collecting audit messages is disabled.
Aug 5 22:07:10.905528 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:07:10.905537 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:07:10.905546 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:07:10.905557 systemd-journald[237]: Journal started
Aug 5 22:07:10.905576 systemd-journald[237]: Runtime Journal (/run/log/journal/11587c243120416494a95b4da1bd9e68) is 5.9M, max 47.3M, 41.4M free.
Aug 5 22:07:10.888416 systemd-modules-load[238]: Inserted module 'overlay'
Aug 5 22:07:10.909483 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 5 22:07:10.909501 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 22:07:10.911070 kernel: Bridge firewalling registered
Aug 5 22:07:10.911051 systemd-modules-load[238]: Inserted module 'br_netfilter'
Aug 5 22:07:10.913721 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 22:07:10.914204 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:07:10.917721 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 22:07:10.920229 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 22:07:10.923883 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:07:10.931230 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:07:10.932537 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:07:10.934641 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:07:10.945212 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 5 22:07:10.947366 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 22:07:10.956301 dracut-cmdline[276]: dracut-dracut-053
Aug 5 22:07:10.958676 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4052403b8e39e55d48e6afcca927358798017aa0d33c868bc3038260a8d9be90
Aug 5 22:07:10.973903 systemd-resolved[279]: Positive Trust Anchors:
Aug 5 22:07:10.973921 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 22:07:10.973952 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 22:07:10.978415 systemd-resolved[279]: Defaulting to hostname 'linux'.
Aug 5 22:07:10.979386 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 22:07:10.982669 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:07:11.025084 kernel: SCSI subsystem initialized
Aug 5 22:07:11.030086 kernel: Loading iSCSI transport class v2.0-870.
Aug 5 22:07:11.037108 kernel: iscsi: registered transport (tcp)
Aug 5 22:07:11.050095 kernel: iscsi: registered transport (qla4xxx)
Aug 5 22:07:11.050136 kernel: QLogic iSCSI HBA Driver
Aug 5 22:07:11.094117 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 5 22:07:11.102182 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 5 22:07:11.121332 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 5 22:07:11.121376 kernel: device-mapper: uevent: version 1.0.3
Aug 5 22:07:11.121387 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 5 22:07:11.169105 kernel: raid6: neonx8 gen() 15741 MB/s
Aug 5 22:07:11.186097 kernel: raid6: neonx4 gen() 15615 MB/s
Aug 5 22:07:11.203090 kernel: raid6: neonx2 gen() 13249 MB/s
Aug 5 22:07:11.220080 kernel: raid6: neonx1 gen() 10456 MB/s
Aug 5 22:07:11.237103 kernel: raid6: int64x8 gen() 6947 MB/s
Aug 5 22:07:11.254091 kernel: raid6: int64x4 gen() 7341 MB/s
Aug 5 22:07:11.271090 kernel: raid6: int64x2 gen() 6111 MB/s
Aug 5 22:07:11.288187 kernel: raid6: int64x1 gen() 5047 MB/s
Aug 5 22:07:11.288236 kernel: raid6: using algorithm neonx8 gen() 15741 MB/s
Aug 5 22:07:11.306171 kernel: raid6: .... xor() 11890 MB/s, rmw enabled
Aug 5 22:07:11.306206 kernel: raid6: using neon recovery algorithm
Aug 5 22:07:11.311079 kernel: xor: measuring software checksum speed
Aug 5 22:07:11.312084 kernel: 8regs : 19854 MB/sec
Aug 5 22:07:11.313074 kernel: 32regs : 19725 MB/sec
Aug 5 22:07:11.314159 kernel: arm64_neon : 26982 MB/sec
Aug 5 22:07:11.314172 kernel: xor: using function: arm64_neon (26982 MB/sec)
Aug 5 22:07:11.368096 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 5 22:07:11.381774 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 22:07:11.390344 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:07:11.403117 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Aug 5 22:07:11.406353 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:07:11.409901 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 5 22:07:11.424962 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Aug 5 22:07:11.459187 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 22:07:11.471207 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 22:07:11.516109 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:07:11.526317 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 5 22:07:11.539158 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 5 22:07:11.540566 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 22:07:11.542951 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:07:11.545146 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 22:07:11.552204 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 5 22:07:11.560579 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 22:07:11.566097 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Aug 5 22:07:11.574668 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 5 22:07:11.574782 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 5 22:07:11.574795 kernel: GPT:9289727 != 19775487
Aug 5 22:07:11.574804 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 5 22:07:11.574814 kernel: GPT:9289727 != 19775487
Aug 5 22:07:11.574824 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 5 22:07:11.574836 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:07:11.576579 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 22:07:11.576975 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:07:11.580329 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:07:11.581584 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:07:11.581749 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:07:11.584040 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:07:11.590529 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:07:11.598108 kernel: BTRFS: device fsid 278882ec-4175-45f0-a12b-7fddc0d6d9a3 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (526)
Aug 5 22:07:11.602090 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (523)
Aug 5 22:07:11.603500 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 5 22:07:11.604982 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:07:11.614538 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 5 22:07:11.621501 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 5 22:07:11.625387 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 5 22:07:11.626604 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 5 22:07:11.641245 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 5 22:07:11.643028 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:07:11.648427 disk-uuid[556]: Primary Header is updated.
Aug 5 22:07:11.648427 disk-uuid[556]: Secondary Entries is updated.
Aug 5 22:07:11.648427 disk-uuid[556]: Secondary Header is updated.
Aug 5 22:07:11.654864 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 5 22:07:11.661918 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:07:12.663097 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 5 22:07:12.665149 disk-uuid[558]: The operation has completed successfully. Aug 5 22:07:12.686467 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 5 22:07:12.686580 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 5 22:07:12.707218 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 5 22:07:12.709966 sh[579]: Success Aug 5 22:07:12.721089 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Aug 5 22:07:12.761543 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 5 22:07:12.763230 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 5 22:07:12.764103 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 5 22:07:12.774812 kernel: BTRFS info (device dm-0): first mount of filesystem 278882ec-4175-45f0-a12b-7fddc0d6d9a3 Aug 5 22:07:12.774845 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Aug 5 22:07:12.774862 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 5 22:07:12.776607 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 5 22:07:12.776635 kernel: BTRFS info (device dm-0): using free space tree Aug 5 22:07:12.780205 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 5 22:07:12.781507 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 5 22:07:12.792200 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 5 22:07:12.793688 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Aug 5 22:07:12.801691 kernel: BTRFS info (device vda6): first mount of filesystem 47327e03-a391-4166-b35e-18ba93a1f298 Aug 5 22:07:12.801727 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 5 22:07:12.801739 kernel: BTRFS info (device vda6): using free space tree Aug 5 22:07:12.804291 kernel: BTRFS info (device vda6): auto enabling async discard Aug 5 22:07:12.812080 kernel: BTRFS info (device vda6): last unmount of filesystem 47327e03-a391-4166-b35e-18ba93a1f298 Aug 5 22:07:12.812053 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 5 22:07:12.817963 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 5 22:07:12.826198 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 5 22:07:12.889722 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 22:07:12.902225 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 22:07:12.918254 ignition[674]: Ignition 2.18.0 Aug 5 22:07:12.918263 ignition[674]: Stage: fetch-offline Aug 5 22:07:12.918296 ignition[674]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:07:12.918304 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 5 22:07:12.918393 ignition[674]: parsed url from cmdline: "" Aug 5 22:07:12.918396 ignition[674]: no config URL provided Aug 5 22:07:12.918401 ignition[674]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 22:07:12.918408 ignition[674]: no config at "/usr/lib/ignition/user.ign" Aug 5 22:07:12.918431 ignition[674]: op(1): [started] loading QEMU firmware config module Aug 5 22:07:12.918435 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 5 22:07:12.929284 ignition[674]: op(1): [finished] loading QEMU firmware config module Aug 5 22:07:12.931507 systemd-networkd[770]: lo: Link UP Aug 5 22:07:12.931517 systemd-networkd[770]: lo: Gained carrier Aug 5 22:07:12.932210 systemd-networkd[770]: 
Enumeration completed Aug 5 22:07:12.932585 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 22:07:12.934255 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:07:12.934258 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 22:07:12.936867 systemd-networkd[770]: eth0: Link UP Aug 5 22:07:12.936870 systemd-networkd[770]: eth0: Gained carrier Aug 5 22:07:12.936877 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:07:12.937327 systemd[1]: Reached target network.target - Network. Aug 5 22:07:12.959105 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.66/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 5 22:07:12.979480 ignition[674]: parsing config with SHA512: 1d57fd8dd548abb0ed6319d0b91a81d6ef916b9f4602407f776794e990f23f4ad2a1b7274f6ee723882a4d1ed5198906efb5e6b0012cf517fb0722b7a873a14a Aug 5 22:07:12.983997 unknown[674]: fetched base config from "system" Aug 5 22:07:12.984006 unknown[674]: fetched user config from "qemu" Aug 5 22:07:12.984534 ignition[674]: fetch-offline: fetch-offline passed Aug 5 22:07:12.985982 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 22:07:12.984593 ignition[674]: Ignition finished successfully Aug 5 22:07:12.987557 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 5 22:07:12.991206 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Aug 5 22:07:13.003002 ignition[777]: Ignition 2.18.0 Aug 5 22:07:13.003020 ignition[777]: Stage: kargs Aug 5 22:07:13.003187 ignition[777]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:07:13.003197 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 5 22:07:13.006180 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 5 22:07:13.004024 ignition[777]: kargs: kargs passed Aug 5 22:07:13.004112 ignition[777]: Ignition finished successfully Aug 5 22:07:13.015385 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 5 22:07:13.026551 ignition[786]: Ignition 2.18.0 Aug 5 22:07:13.026562 ignition[786]: Stage: disks Aug 5 22:07:13.026764 ignition[786]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:07:13.026774 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 5 22:07:13.028019 ignition[786]: disks: disks passed Aug 5 22:07:13.030666 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 5 22:07:13.028079 ignition[786]: Ignition finished successfully Aug 5 22:07:13.031885 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 5 22:07:13.033297 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 5 22:07:13.035151 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 22:07:13.036678 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 22:07:13.038426 systemd[1]: Reached target basic.target - Basic System. Aug 5 22:07:13.054224 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 5 22:07:13.066952 systemd-fsck[797]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 5 22:07:13.075132 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 5 22:07:13.083308 systemd[1]: Mounting sysroot.mount - /sysroot... 
Aug 5 22:07:13.128085 kernel: EXT4-fs (vda9): mounted filesystem 44c9fced-dca5-4347-a15f-96911c2e5e61 r/w with ordered data mode. Quota mode: none. Aug 5 22:07:13.128905 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 5 22:07:13.130168 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 5 22:07:13.139305 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 22:07:13.141100 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 5 22:07:13.142389 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 5 22:07:13.142427 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 5 22:07:13.150884 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (805) Aug 5 22:07:13.142448 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:07:13.156317 kernel: BTRFS info (device vda6): first mount of filesystem 47327e03-a391-4166-b35e-18ba93a1f298 Aug 5 22:07:13.156346 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 5 22:07:13.156357 kernel: BTRFS info (device vda6): using free space tree Aug 5 22:07:13.156369 kernel: BTRFS info (device vda6): auto enabling async discard Aug 5 22:07:13.147998 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 5 22:07:13.149967 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 5 22:07:13.157977 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 5 22:07:13.209761 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory Aug 5 22:07:13.214558 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory Aug 5 22:07:13.218126 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory Aug 5 22:07:13.223143 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory Aug 5 22:07:13.298724 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 5 22:07:13.306213 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 5 22:07:13.308589 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 5 22:07:13.314088 kernel: BTRFS info (device vda6): last unmount of filesystem 47327e03-a391-4166-b35e-18ba93a1f298 Aug 5 22:07:13.331929 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 5 22:07:13.333796 ignition[918]: INFO : Ignition 2.18.0 Aug 5 22:07:13.333796 ignition[918]: INFO : Stage: mount Aug 5 22:07:13.333796 ignition[918]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:07:13.333796 ignition[918]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 5 22:07:13.333796 ignition[918]: INFO : mount: mount passed Aug 5 22:07:13.333796 ignition[918]: INFO : Ignition finished successfully Aug 5 22:07:13.335056 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 5 22:07:13.343169 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 5 22:07:13.773658 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 5 22:07:13.788252 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Aug 5 22:07:13.794078 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (934) Aug 5 22:07:13.794484 kernel: BTRFS info (device vda6): first mount of filesystem 47327e03-a391-4166-b35e-18ba93a1f298 Aug 5 22:07:13.796379 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 5 22:07:13.796401 kernel: BTRFS info (device vda6): using free space tree Aug 5 22:07:13.799075 kernel: BTRFS info (device vda6): auto enabling async discard Aug 5 22:07:13.800388 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 22:07:13.816201 ignition[951]: INFO : Ignition 2.18.0 Aug 5 22:07:13.816201 ignition[951]: INFO : Stage: files Aug 5 22:07:13.817789 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:07:13.817789 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 5 22:07:13.817789 ignition[951]: DEBUG : files: compiled without relabeling support, skipping Aug 5 22:07:13.821098 ignition[951]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 5 22:07:13.821098 ignition[951]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 5 22:07:13.821098 ignition[951]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 5 22:07:13.821098 ignition[951]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 5 22:07:13.821098 ignition[951]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 5 22:07:13.820305 unknown[951]: wrote ssh authorized keys file for user: core Aug 5 22:07:13.828336 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 5 22:07:13.828336 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 5 22:07:13.828336 ignition[951]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 5 22:07:13.828336 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Aug 5 22:07:14.095971 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 5 22:07:14.150129 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 5 22:07:14.150129 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 5 22:07:14.150129 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 5 22:07:14.150129 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:07:14.150129 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:07:14.150129 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:07:14.160127 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:07:14.160127 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:07:14.160127 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:07:14.160127 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:07:14.160127 ignition[951]: INFO : files: createFilesystemsFiles: 
createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:07:14.160127 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Aug 5 22:07:14.160127 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Aug 5 22:07:14.160127 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Aug 5 22:07:14.160127 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1 Aug 5 22:07:14.449650 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 5 22:07:14.577074 systemd-networkd[770]: eth0: Gained IPv6LL Aug 5 22:07:14.693456 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Aug 5 22:07:14.693456 ignition[951]: INFO : files: op(c): [started] processing unit "containerd.service" Aug 5 22:07:14.696736 ignition[951]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 5 22:07:14.696736 ignition[951]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 5 22:07:14.696736 ignition[951]: INFO : files: op(c): [finished] processing unit "containerd.service" Aug 5 22:07:14.696736 ignition[951]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Aug 5 
22:07:14.696736 ignition[951]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:07:14.696736 ignition[951]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:07:14.696736 ignition[951]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Aug 5 22:07:14.696736 ignition[951]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Aug 5 22:07:14.696736 ignition[951]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 5 22:07:14.696736 ignition[951]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 5 22:07:14.696736 ignition[951]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Aug 5 22:07:14.696736 ignition[951]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Aug 5 22:07:14.737742 ignition[951]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 5 22:07:14.742046 ignition[951]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 5 22:07:14.744486 ignition[951]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Aug 5 22:07:14.744486 ignition[951]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Aug 5 22:07:14.744486 ignition[951]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Aug 5 22:07:14.744486 ignition[951]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:07:14.744486 ignition[951]: INFO : files: createResultFile: 
createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:07:14.744486 ignition[951]: INFO : files: files passed Aug 5 22:07:14.744486 ignition[951]: INFO : Ignition finished successfully Aug 5 22:07:14.744864 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 5 22:07:14.762314 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 5 22:07:14.765744 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 5 22:07:14.767400 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 5 22:07:14.767483 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 5 22:07:14.775365 initrd-setup-root-after-ignition[979]: grep: /sysroot/oem/oem-release: No such file or directory Aug 5 22:07:14.778323 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:07:14.778323 initrd-setup-root-after-ignition[981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:07:14.781755 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:07:14.782215 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:07:14.784457 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 5 22:07:14.793227 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 5 22:07:14.814692 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 5 22:07:14.814818 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 5 22:07:14.817042 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 5 22:07:14.818948 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Aug 5 22:07:14.820767 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 5 22:07:14.821544 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 5 22:07:14.836576 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:07:14.848217 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 5 22:07:14.857899 systemd[1]: Stopped target network.target - Network. Aug 5 22:07:14.858873 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:07:14.860613 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:07:14.862652 systemd[1]: Stopped target timers.target - Timer Units. Aug 5 22:07:14.864311 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 5 22:07:14.864426 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:07:14.866746 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 5 22:07:14.868652 systemd[1]: Stopped target basic.target - Basic System. Aug 5 22:07:14.870126 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 5 22:07:14.871696 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:07:14.873537 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 5 22:07:14.875355 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 5 22:07:14.877193 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:07:14.879197 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 5 22:07:14.880986 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 5 22:07:14.882663 systemd[1]: Stopped target swap.target - Swaps. Aug 5 22:07:14.884165 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Aug 5 22:07:14.884284 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 5 22:07:14.886493 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:07:14.888267 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:07:14.890030 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 5 22:07:14.891146 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:07:14.892880 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 5 22:07:14.892989 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 5 22:07:14.895766 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 5 22:07:14.895914 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 22:07:14.898083 systemd[1]: Stopped target paths.target - Path Units. Aug 5 22:07:14.899520 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 5 22:07:14.899640 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:07:14.901538 systemd[1]: Stopped target slices.target - Slice Units. Aug 5 22:07:14.902995 systemd[1]: Stopped target sockets.target - Socket Units. Aug 5 22:07:14.904814 systemd[1]: iscsid.socket: Deactivated successfully. Aug 5 22:07:14.904932 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 22:07:14.907074 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 5 22:07:14.907190 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 22:07:14.908785 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 5 22:07:14.908936 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:07:14.910673 systemd[1]: ignition-files.service: Deactivated successfully. 
Aug 5 22:07:14.910826 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 5 22:07:14.924275 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 5 22:07:14.930288 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 5 22:07:14.931359 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 5 22:07:14.933110 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 5 22:07:14.934654 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 5 22:07:14.939433 ignition[1006]: INFO : Ignition 2.18.0 Aug 5 22:07:14.939433 ignition[1006]: INFO : Stage: umount Aug 5 22:07:14.939433 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:07:14.939433 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 5 22:07:14.936159 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:07:14.944945 ignition[1006]: INFO : umount: umount passed Aug 5 22:07:14.944945 ignition[1006]: INFO : Ignition finished successfully Aug 5 22:07:14.938776 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 5 22:07:14.938917 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 22:07:14.940188 systemd-networkd[770]: eth0: DHCPv6 lease lost Aug 5 22:07:14.944931 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 5 22:07:14.945017 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 5 22:07:14.949454 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 5 22:07:14.951474 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 5 22:07:14.958693 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 5 22:07:14.959231 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 5 22:07:14.959346 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Aug 5 22:07:14.962289 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 5 22:07:14.962428 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:07:14.964223 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 5 22:07:14.964276 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 5 22:07:14.965813 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 5 22:07:14.965864 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 5 22:07:14.967431 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 5 22:07:14.967474 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 5 22:07:14.969017 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 5 22:07:14.969086 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 5 22:07:14.975178 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 5 22:07:14.979011 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 5 22:07:14.979096 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 22:07:14.981043 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 5 22:07:14.981115 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:07:14.982742 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 5 22:07:14.982792 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 5 22:07:14.984715 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 5 22:07:14.984769 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:07:14.986672 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:07:14.989307 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Aug 5 22:07:14.989394 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 5 22:07:14.993554 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 5 22:07:14.993634 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 5 22:07:15.000858 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 5 22:07:15.000916 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 5 22:07:15.003453 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 5 22:07:15.003530 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 5 22:07:15.012964 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 5 22:07:15.013135 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:07:15.015164 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 5 22:07:15.015202 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:07:15.016743 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 5 22:07:15.016775 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:07:15.018544 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 5 22:07:15.018590 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 22:07:15.020998 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 5 22:07:15.021050 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 5 22:07:15.023734 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 22:07:15.023778 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:07:15.040237 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 5 22:07:15.041264 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 5 22:07:15.041320 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:07:15.043293 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 5 22:07:15.043338 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:07:15.045160 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 5 22:07:15.045205 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:07:15.047223 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:07:15.047268 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:07:15.049450 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 5 22:07:15.049550 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 5 22:07:15.051688 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 5 22:07:15.053749 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 5 22:07:15.063638 systemd[1]: Switching root.
Aug 5 22:07:15.091821 systemd-journald[237]: Journal stopped
Aug 5 22:07:15.807028 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Aug 5 22:07:15.807101 kernel: SELinux: policy capability network_peer_controls=1
Aug 5 22:07:15.807116 kernel: SELinux: policy capability open_perms=1
Aug 5 22:07:15.807128 kernel: SELinux: policy capability extended_socket_class=1
Aug 5 22:07:15.807140 kernel: SELinux: policy capability always_check_network=0
Aug 5 22:07:15.807150 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 5 22:07:15.807160 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 5 22:07:15.807170 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 5 22:07:15.807180 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 5 22:07:15.807190 kernel: audit: type=1403 audit(1722895635.264:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 5 22:07:15.807205 systemd[1]: Successfully loaded SELinux policy in 31.906ms.
Aug 5 22:07:15.807223 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.209ms.
Aug 5 22:07:15.807237 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 22:07:15.807248 systemd[1]: Detected virtualization kvm.
Aug 5 22:07:15.807259 systemd[1]: Detected architecture arm64.
Aug 5 22:07:15.807270 systemd[1]: Detected first boot.
Aug 5 22:07:15.807281 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 22:07:15.807291 zram_generator::config[1067]: No configuration found.
Aug 5 22:07:15.807303 systemd[1]: Populated /etc with preset unit settings.
Aug 5 22:07:15.807314 systemd[1]: Queued start job for default target multi-user.target.
Aug 5 22:07:15.807325 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 5 22:07:15.807339 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 5 22:07:15.807350 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 5 22:07:15.807360 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 5 22:07:15.807373 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 5 22:07:15.807384 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 5 22:07:15.807395 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 5 22:07:15.807406 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 5 22:07:15.807417 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 5 22:07:15.807430 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:07:15.807441 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:07:15.807452 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 5 22:07:15.807462 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 5 22:07:15.807473 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 5 22:07:15.807484 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 22:07:15.807495 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Aug 5 22:07:15.807506 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:07:15.807518 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 5 22:07:15.807530 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:07:15.807541 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 22:07:15.807552 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 22:07:15.807567 systemd[1]: Reached target swap.target - Swaps.
Aug 5 22:07:15.807582 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 5 22:07:15.807593 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 5 22:07:15.807604 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 5 22:07:15.807617 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 5 22:07:15.807629 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:07:15.807645 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:07:15.807659 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:07:15.807671 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 5 22:07:15.807683 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 5 22:07:15.807694 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 5 22:07:15.807705 systemd[1]: Mounting media.mount - External Media Directory...
Aug 5 22:07:15.807716 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 5 22:07:15.807727 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 5 22:07:15.807740 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 5 22:07:15.807752 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 5 22:07:15.807763 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:07:15.807774 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 22:07:15.807785 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 5 22:07:15.807796 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:07:15.807807 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 22:07:15.807818 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:07:15.807830 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 5 22:07:15.807843 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:07:15.807854 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 5 22:07:15.807866 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Aug 5 22:07:15.807878 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Aug 5 22:07:15.807889 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 22:07:15.807901 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 22:07:15.807912 kernel: fuse: init (API version 7.39)
Aug 5 22:07:15.807923 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 5 22:07:15.807935 kernel: ACPI: bus type drm_connector registered
Aug 5 22:07:15.807945 kernel: loop: module loaded
Aug 5 22:07:15.807973 systemd-journald[1148]: Collecting audit messages is disabled.
Aug 5 22:07:15.808004 systemd-journald[1148]: Journal started
Aug 5 22:07:15.808029 systemd-journald[1148]: Runtime Journal (/run/log/journal/11587c243120416494a95b4da1bd9e68) is 5.9M, max 47.3M, 41.4M free.
Aug 5 22:07:15.811464 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 5 22:07:15.815190 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 22:07:15.819453 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 22:07:15.820574 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 5 22:07:15.821859 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 5 22:07:15.823214 systemd[1]: Mounted media.mount - External Media Directory.
Aug 5 22:07:15.824553 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 5 22:07:15.825938 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 5 22:07:15.827379 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 5 22:07:15.828828 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 5 22:07:15.830398 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:07:15.831959 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 5 22:07:15.832169 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 5 22:07:15.833599 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:07:15.833771 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:07:15.835311 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 22:07:15.835478 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 22:07:15.836796 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:07:15.836963 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:07:15.838760 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 5 22:07:15.838927 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 5 22:07:15.840560 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:07:15.840787 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:07:15.842314 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:07:15.843879 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 5 22:07:15.846172 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 5 22:07:15.860333 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 5 22:07:15.875183 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 5 22:07:15.877462 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 5 22:07:15.878540 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 5 22:07:15.881225 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 5 22:07:15.883711 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 5 22:07:15.885315 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 22:07:15.887235 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 5 22:07:15.888499 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 22:07:15.891697 systemd-journald[1148]: Time spent on flushing to /var/log/journal/11587c243120416494a95b4da1bd9e68 is 20.279ms for 842 entries.
Aug 5 22:07:15.891697 systemd-journald[1148]: System Journal (/var/log/journal/11587c243120416494a95b4da1bd9e68) is 8.0M, max 195.6M, 187.6M free.
Aug 5 22:07:15.929622 systemd-journald[1148]: Received client request to flush runtime journal.
Aug 5 22:07:15.892216 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 22:07:15.896098 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 22:07:15.899859 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:07:15.901821 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 5 22:07:15.903474 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 5 22:07:15.905048 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 5 22:07:15.908342 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 5 22:07:15.911259 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 5 22:07:15.920613 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:07:15.926289 udevadm[1210]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 5 22:07:15.931343 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 5 22:07:15.932022 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Aug 5 22:07:15.932036 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Aug 5 22:07:15.936539 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:07:15.945210 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 5 22:07:15.962856 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 5 22:07:15.978254 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 22:07:15.989778 systemd-tmpfiles[1228]: ACLs are not supported, ignoring.
Aug 5 22:07:15.989799 systemd-tmpfiles[1228]: ACLs are not supported, ignoring.
Aug 5 22:07:15.993549 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:07:16.310025 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 5 22:07:16.322221 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:07:16.341270 systemd-udevd[1234]: Using default interface naming scheme 'v255'.
Aug 5 22:07:16.353158 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:07:16.369237 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 22:07:16.388338 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 5 22:07:16.390102 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1243)
Aug 5 22:07:16.394100 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1249)
Aug 5 22:07:16.394369 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Aug 5 22:07:16.429680 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 5 22:07:16.435049 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 5 22:07:16.521337 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:07:16.525576 systemd-networkd[1244]: lo: Link UP
Aug 5 22:07:16.525585 systemd-networkd[1244]: lo: Gained carrier
Aug 5 22:07:16.526328 systemd-networkd[1244]: Enumeration completed
Aug 5 22:07:16.526454 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 22:07:16.526763 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:07:16.526772 systemd-networkd[1244]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 22:07:16.527795 systemd-networkd[1244]: eth0: Link UP
Aug 5 22:07:16.527805 systemd-networkd[1244]: eth0: Gained carrier
Aug 5 22:07:16.527819 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:07:16.528956 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 5 22:07:16.530556 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 5 22:07:16.534925 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 5 22:07:16.550116 systemd-networkd[1244]: eth0: DHCPv4 address 10.0.0.66/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 5 22:07:16.565567 lvm[1274]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 22:07:16.584417 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:07:16.601630 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 5 22:07:16.603175 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:07:16.615341 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 5 22:07:16.619077 lvm[1282]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 22:07:16.651763 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 5 22:07:16.653253 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 5 22:07:16.654470 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 5 22:07:16.654502 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 22:07:16.655448 systemd[1]: Reached target machines.target - Containers.
Aug 5 22:07:16.657358 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 5 22:07:16.668201 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 5 22:07:16.670547 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 5 22:07:16.671711 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:07:16.672628 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 5 22:07:16.675459 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 5 22:07:16.681756 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 5 22:07:16.685556 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 5 22:07:16.690889 kernel: loop0: detected capacity change from 0 to 113672
Aug 5 22:07:16.691023 kernel: block loop0: the capability attribute has been deprecated.
Aug 5 22:07:16.693009 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 5 22:07:16.704866 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 5 22:07:16.705642 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 5 22:07:16.709091 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 5 22:07:16.738112 kernel: loop1: detected capacity change from 0 to 59688
Aug 5 22:07:16.790098 kernel: loop2: detected capacity change from 0 to 193208
Aug 5 22:07:16.836088 kernel: loop3: detected capacity change from 0 to 113672
Aug 5 22:07:16.841084 kernel: loop4: detected capacity change from 0 to 59688
Aug 5 22:07:16.847088 kernel: loop5: detected capacity change from 0 to 193208
Aug 5 22:07:16.852089 (sd-merge)[1303]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 5 22:07:16.852511 (sd-merge)[1303]: Merged extensions into '/usr'.
Aug 5 22:07:16.856057 systemd[1]: Reloading requested from client PID 1291 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 5 22:07:16.856081 systemd[1]: Reloading...
Aug 5 22:07:16.893086 zram_generator::config[1329]: No configuration found.
Aug 5 22:07:16.939103 ldconfig[1287]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 5 22:07:16.996740 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:07:17.041792 systemd[1]: Reloading finished in 185 ms.
Aug 5 22:07:17.053875 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 5 22:07:17.055379 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 5 22:07:17.066222 systemd[1]: Starting ensure-sysext.service...
Aug 5 22:07:17.068098 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 22:07:17.073239 systemd[1]: Reloading requested from client PID 1370 ('systemctl') (unit ensure-sysext.service)...
Aug 5 22:07:17.073257 systemd[1]: Reloading...
Aug 5 22:07:17.084766 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 5 22:07:17.085034 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 5 22:07:17.085731 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 5 22:07:17.085952 systemd-tmpfiles[1377]: ACLs are not supported, ignoring.
Aug 5 22:07:17.086011 systemd-tmpfiles[1377]: ACLs are not supported, ignoring.
Aug 5 22:07:17.088140 systemd-tmpfiles[1377]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 22:07:17.088154 systemd-tmpfiles[1377]: Skipping /boot
Aug 5 22:07:17.094741 systemd-tmpfiles[1377]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 22:07:17.094758 systemd-tmpfiles[1377]: Skipping /boot
Aug 5 22:07:17.117212 zram_generator::config[1401]: No configuration found.
Aug 5 22:07:17.208633 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:07:17.254502 systemd[1]: Reloading finished in 180 ms.
Aug 5 22:07:17.271273 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:07:17.294637 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 22:07:17.297436 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 5 22:07:17.300380 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 5 22:07:17.307246 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 22:07:17.310635 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 5 22:07:17.316457 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:07:17.321820 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:07:17.324458 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:07:17.327324 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:07:17.330255 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:07:17.334948 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:07:17.335812 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:07:17.337615 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:07:17.337768 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:07:17.339621 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 5 22:07:17.347238 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:07:17.347451 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:07:17.354521 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 5 22:07:17.356937 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 5 22:07:17.363010 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:07:17.368327 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:07:17.373367 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 22:07:17.376311 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:07:17.381092 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:07:17.382307 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:07:17.386326 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 5 22:07:17.387510 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 5 22:07:17.388514 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:07:17.388681 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:07:17.393099 augenrules[1490]: No rules
Aug 5 22:07:17.390336 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 22:07:17.390490 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 22:07:17.393570 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:07:17.393728 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:07:17.396128 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 22:07:17.397910 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:07:17.398246 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:07:17.399809 systemd-resolved[1451]: Positive Trust Anchors:
Aug 5 22:07:17.399835 systemd-resolved[1451]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 22:07:17.399868 systemd-resolved[1451]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 22:07:17.401816 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 5 22:07:17.403576 systemd[1]: Finished ensure-sysext.service.
Aug 5 22:07:17.408616 systemd-resolved[1451]: Defaulting to hostname 'linux'.
Aug 5 22:07:17.409837 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 22:07:17.409913 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 22:07:17.425286 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 5 22:07:17.426477 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 22:07:17.427801 systemd[1]: Reached target network.target - Network.
Aug 5 22:07:17.428774 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:07:17.474900 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 5 22:07:17.476211 systemd-timesyncd[1508]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 5 22:07:17.476268 systemd-timesyncd[1508]: Initial clock synchronization to Mon 2024-08-05 22:07:17.554890 UTC.
Aug 5 22:07:17.476569 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 22:07:17.477797 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 5 22:07:17.479097 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 5 22:07:17.480370 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 5 22:07:17.481628 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 5 22:07:17.481671 systemd[1]: Reached target paths.target - Path Units.
Aug 5 22:07:17.482702 systemd[1]: Reached target time-set.target - System Time Set.
Aug 5 22:07:17.483929 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 5 22:07:17.485185 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 5 22:07:17.486406 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 22:07:17.488131 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 5 22:07:17.490742 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 5 22:07:17.493052 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 5 22:07:17.498147 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 5 22:07:17.499207 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 22:07:17.500225 systemd[1]: Reached target basic.target - Basic System.
Aug 5 22:07:17.501370 systemd[1]: System is tainted: cgroupsv1
Aug 5 22:07:17.501424 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 5 22:07:17.501444 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 5 22:07:17.502720 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 5 22:07:17.505019 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 5 22:07:17.507021 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 5 22:07:17.512249 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 5 22:07:17.513207 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 5 22:07:17.514342 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 5 22:07:17.518206 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 5 22:07:17.521655 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 5 22:07:17.532186 jq[1514]: false
Aug 5 22:07:17.534353 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 5 22:07:17.538252 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 5 22:07:17.539878 extend-filesystems[1516]: Found loop3
Aug 5 22:07:17.539878 extend-filesystems[1516]: Found loop4
Aug 5 22:07:17.539878 extend-filesystems[1516]: Found loop5
Aug 5 22:07:17.539878 extend-filesystems[1516]: Found vda
Aug 5 22:07:17.539878 extend-filesystems[1516]: Found vda1
Aug 5 22:07:17.539878 extend-filesystems[1516]: Found vda2
Aug 5 22:07:17.539878 extend-filesystems[1516]: Found vda3
Aug 5 22:07:17.539878 extend-filesystems[1516]: Found usr
Aug 5 22:07:17.539878 extend-filesystems[1516]: Found vda4
Aug 5 22:07:17.539878 extend-filesystems[1516]: Found vda6
Aug 5 22:07:17.539878 extend-filesystems[1516]: Found vda7
Aug 5 22:07:17.539878 extend-filesystems[1516]: Found vda9
Aug 5 22:07:17.539878 extend-filesystems[1516]: Checking size of /dev/vda9
Aug 5 22:07:17.549567 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 5 22:07:17.553212 systemd[1]: Starting update-engine.service - Update Engine...
Aug 5 22:07:17.558150 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 5 22:07:17.565170 extend-filesystems[1516]: Resized partition /dev/vda9
Aug 5 22:07:17.566736 dbus-daemon[1513]: [system] SELinux support is enabled
Aug 5 22:07:17.577601 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 5 22:07:17.571164 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 5 22:07:17.577720 extend-filesystems[1540]: resize2fs 1.47.0 (5-Feb-2023)
Aug 5 22:07:17.597252 jq[1537]: true
Aug 5 22:07:17.580395 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 5 22:07:17.580845 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 5 22:07:17.581494 systemd[1]: motdgen.service: Deactivated successfully.
Aug 5 22:07:17.581756 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 5 22:07:17.584350 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 5 22:07:17.585223 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 5 22:07:17.610169 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1251)
Aug 5 22:07:17.610241 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 5 22:07:17.625730 tar[1545]: linux-arm64/helm
Aug 5 22:07:17.615398 (ntainerd)[1548]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 5 22:07:17.626360 update_engine[1534]: I0805 22:07:17.619791  1534 main.cc:92] Flatcar Update Engine starting
Aug 5 22:07:17.626510 jq[1546]: true
Aug 5 22:07:17.626662 extend-filesystems[1540]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 5 22:07:17.626662 extend-filesystems[1540]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 5 22:07:17.626662 extend-filesystems[1540]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 5 22:07:17.623333 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 5 22:07:17.648801 extend-filesystems[1516]: Resized filesystem in /dev/vda9
Aug 5 22:07:17.653331 update_engine[1534]: I0805 22:07:17.637020  1534 update_check_scheduler.cc:74] Next update check in 8m19s
Aug 5 22:07:17.623359 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 5 22:07:17.626900 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 5 22:07:17.626923 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 5 22:07:17.628511 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 5 22:07:17.628755 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 5 22:07:17.634238 systemd[1]: Started update-engine.service - Update Engine.
Aug 5 22:07:17.636509 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 5 22:07:17.646312 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 5 22:07:17.686979 systemd-logind[1525]: Watching system buttons on /dev/input/event0 (Power Button)
Aug 5 22:07:17.687928 systemd-logind[1525]: New seat seat0.
Aug 5 22:07:17.695611 bash[1583]: Updated "/home/core/.ssh/authorized_keys"
Aug 5 22:07:17.697421 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 5 22:07:17.699038 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Aug 5 22:07:17.699402 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 5 22:07:17.733299 locksmithd[1568]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 5 22:07:17.917853 containerd[1548]: time="2024-08-05T22:07:17.917705040Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Aug 5 22:07:17.950153 containerd[1548]: time="2024-08-05T22:07:17.949745600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 5 22:07:17.950153 containerd[1548]: time="2024-08-05T22:07:17.949797600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:07:17.951614 containerd[1548]: time="2024-08-05T22:07:17.951575720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:07:17.951693 containerd[1548]: time="2024-08-05T22:07:17.951680040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:07:17.952054 containerd[1548]: time="2024-08-05T22:07:17.952027120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:07:17.952140 containerd[1548]: time="2024-08-05T22:07:17.952126440Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 5 22:07:17.952264 containerd[1548]: time="2024-08-05T22:07:17.952246000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 5 22:07:17.952383 containerd[1548]: time="2024-08-05T22:07:17.952364160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:07:17.952436 containerd[1548]: time="2024-08-05T22:07:17.952423480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 5 22:07:17.952551 containerd[1548]: time="2024-08-05T22:07:17.952533520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:07:17.952796 containerd[1548]: time="2024-08-05T22:07:17.952775320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 5 22:07:17.952863 containerd[1548]: time="2024-08-05T22:07:17.952848000Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 5 22:07:17.953215 containerd[1548]: time="2024-08-05T22:07:17.952911840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:07:17.953215 containerd[1548]: time="2024-08-05T22:07:17.953091120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:07:17.953215 containerd[1548]: time="2024-08-05T22:07:17.953107360Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 5 22:07:17.953215 containerd[1548]: time="2024-08-05T22:07:17.953168880Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 5 22:07:17.953215 containerd[1548]: time="2024-08-05T22:07:17.953181720Z" level=info msg="metadata content store policy set" policy=shared
Aug 5 22:07:17.956605 containerd[1548]: time="2024-08-05T22:07:17.956579080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 5 22:07:17.956687 containerd[1548]: time="2024-08-05T22:07:17.956674320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 5 22:07:17.956740 containerd[1548]: time="2024-08-05T22:07:17.956728560Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 5 22:07:17.956812 containerd[1548]: time="2024-08-05T22:07:17.956798760Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 5 22:07:17.956878 containerd[1548]: time="2024-08-05T22:07:17.956866680Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 5 22:07:17.956925 containerd[1548]: time="2024-08-05T22:07:17.956914760Z" level=info msg="NRI interface is disabled by configuration."
Aug 5 22:07:17.957000 containerd[1548]: time="2024-08-05T22:07:17.956979080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 5 22:07:17.957178 containerd[1548]: time="2024-08-05T22:07:17.957159440Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 5 22:07:17.957245 containerd[1548]: time="2024-08-05T22:07:17.957231640Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 5 22:07:17.957647 containerd[1548]: time="2024-08-05T22:07:17.957286080Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 5 22:07:17.957647 containerd[1548]: time="2024-08-05T22:07:17.957305880Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 5 22:07:17.957647 containerd[1548]: time="2024-08-05T22:07:17.957333720Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 5 22:07:17.957647 containerd[1548]: time="2024-08-05T22:07:17.957351880Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 5 22:07:17.957647 containerd[1548]: time="2024-08-05T22:07:17.957364760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 5 22:07:17.957647 containerd[1548]: time="2024-08-05T22:07:17.957377040Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 5 22:07:17.957647 containerd[1548]: time="2024-08-05T22:07:17.957390720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 5 22:07:17.957647 containerd[1548]: time="2024-08-05T22:07:17.957403680Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 5 22:07:17.957647 containerd[1548]: time="2024-08-05T22:07:17.957439520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 5 22:07:17.957647 containerd[1548]: time="2024-08-05T22:07:17.957451400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 5 22:07:17.957647 containerd[1548]: time="2024-08-05T22:07:17.957554880Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 5 22:07:17.958227 containerd[1548]: time="2024-08-05T22:07:17.958197400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 5 22:07:17.958279 containerd[1548]: time="2024-08-05T22:07:17.958243080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 5 22:07:17.958279 containerd[1548]: time="2024-08-05T22:07:17.958260200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 5 22:07:17.958327 containerd[1548]: time="2024-08-05T22:07:17.958284040Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 5 22:07:17.958532 containerd[1548]: time="2024-08-05T22:07:17.958518960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 5 22:07:17.958557 containerd[1548]: time="2024-08-05T22:07:17.958536720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 5 22:07:17.958557 containerd[1548]: time="2024-08-05T22:07:17.958551600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 5 22:07:17.958601 containerd[1548]: time="2024-08-05T22:07:17.958563600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 5 22:07:17.958601 containerd[1548]: time="2024-08-05T22:07:17.958576520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 5 22:07:17.958601 containerd[1548]: time="2024-08-05T22:07:17.958588840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 5 22:07:17.958662 containerd[1548]: time="2024-08-05T22:07:17.958600800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 5 22:07:17.958662 containerd[1548]: time="2024-08-05T22:07:17.958613040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 5 22:07:17.958662 containerd[1548]: time="2024-08-05T22:07:17.958635040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 5 22:07:17.958789 containerd[1548]: time="2024-08-05T22:07:17.958771080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 5 22:07:17.958817 containerd[1548]: time="2024-08-05T22:07:17.958795760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 5 22:07:17.958817 containerd[1548]: time="2024-08-05T22:07:17.958810760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 5 22:07:17.958860 containerd[1548]: time="2024-08-05T22:07:17.958823040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 5 22:07:17.958860 containerd[1548]: time="2024-08-05T22:07:17.958837760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 5 22:07:17.958860 containerd[1548]: time="2024-08-05T22:07:17.958852320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 5 22:07:17.958909 containerd[1548]: time="2024-08-05T22:07:17.958865320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 5 22:07:17.958909 containerd[1548]: time="2024-08-05T22:07:17.958877000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 5 22:07:17.959260 containerd[1548]: time="2024-08-05T22:07:17.959205800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 5 22:07:17.959363 containerd[1548]: time="2024-08-05T22:07:17.959268320Z" level=info msg="Connect containerd service"
Aug 5 22:07:17.959363 containerd[1548]: time="2024-08-05T22:07:17.959297960Z" level=info msg="using legacy CRI server"
Aug 5 22:07:17.959363 containerd[1548]: time="2024-08-05T22:07:17.959304520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 5 22:07:17.959477 containerd[1548]: time="2024-08-05T22:07:17.959461920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 5 22:07:17.960058 containerd[1548]: time="2024-08-05T22:07:17.960031200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 5 22:07:17.960120 containerd[1548]: time="2024-08-05T22:07:17.960099640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 5 22:07:17.960146 containerd[1548]: time="2024-08-05T22:07:17.960122000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 5 22:07:17.960146 containerd[1548]: time="2024-08-05T22:07:17.960133720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 5 22:07:17.960200 containerd[1548]: time="2024-08-05T22:07:17.960145760Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 5 22:07:17.960622 containerd[1548]: time="2024-08-05T22:07:17.960394720Z" level=info msg="Start subscribing containerd event"
Aug 5 22:07:17.960622 containerd[1548]: time="2024-08-05T22:07:17.960574160Z" level=info msg="Start recovering state"
Aug 5 22:07:17.960740 containerd[1548]: time="2024-08-05T22:07:17.960725560Z" level=info msg="Start event monitor"
Aug 5 22:07:17.960930 containerd[1548]: time="2024-08-05T22:07:17.960808800Z" level=info msg="Start snapshots syncer"
Aug 5 22:07:17.960930 containerd[1548]: time="2024-08-05T22:07:17.960824760Z" level=info msg="Start cni network conf syncer for default"
Aug 5 22:07:17.960930 containerd[1548]: time="2024-08-05T22:07:17.960832320Z" level=info msg="Start streaming server"
Aug 5 22:07:17.960989 containerd[1548]: time="2024-08-05T22:07:17.960911680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 5 22:07:17.962750 containerd[1548]: time="2024-08-05T22:07:17.961037120Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 5 22:07:17.964307 systemd[1]: Started containerd.service - containerd container runtime.
Aug 5 22:07:17.966018 containerd[1548]: time="2024-08-05T22:07:17.965689760Z" level=info msg="containerd successfully booted in 0.050188s"
Aug 5 22:07:18.002405 tar[1545]: linux-arm64/LICENSE
Aug 5 22:07:18.002489 tar[1545]: linux-arm64/README.md
Aug 5 22:07:18.011693 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 5 22:07:18.109270 sshd_keygen[1532]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 5 22:07:18.127934 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 5 22:07:18.139384 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 5 22:07:18.144647 systemd[1]: issuegen.service: Deactivated successfully.
Aug 5 22:07:18.144851 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 5 22:07:18.147416 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 5 22:07:18.159399 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 5 22:07:18.168361 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 5 22:07:18.170316 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Aug 5 22:07:18.171603 systemd[1]: Reached target getty.target - Login Prompts.
Aug 5 22:07:18.475848 systemd-networkd[1244]: eth0: Gained IPv6LL
Aug 5 22:07:18.478398 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 5 22:07:18.480216 systemd[1]: Reached target network-online.target - Network is Online.
Aug 5 22:07:18.491303 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Aug 5 22:07:18.493883 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:07:18.496066 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 5 22:07:18.513263 systemd[1]: coreos-metadata.service: Deactivated successfully.
Aug 5 22:07:18.513504 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Aug 5 22:07:18.515274 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 5 22:07:18.516552 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 5 22:07:18.977979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:07:18.979422 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 5 22:07:18.982847 (kubelet)[1657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:07:18.985537 systemd[1]: Startup finished in 5.105s (kernel) + 3.755s (userspace) = 8.861s.
Aug 5 22:07:19.471840 kubelet[1657]: E0805 22:07:19.471642    1657 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:07:19.474861 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:07:19.475048 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:07:23.607587 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 5 22:07:23.622340 systemd[1]: Started sshd@0-10.0.0.66:22-10.0.0.1:47764.service - OpenSSH per-connection server daemon (10.0.0.1:47764).
Aug 5 22:07:23.677486 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 47764 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:07:23.679264 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:07:23.690609 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 5 22:07:23.707383 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 5 22:07:23.708983 systemd-logind[1525]: New session 1 of user core.
Aug 5 22:07:23.717567 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 5 22:07:23.719940 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 5 22:07:23.726779 (systemd)[1677]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:07:23.811605 systemd[1677]: Queued start job for default target default.target.
Aug 5 22:07:23.812003 systemd[1677]: Created slice app.slice - User Application Slice.
Aug 5 22:07:23.812040 systemd[1677]: Reached target paths.target - Paths.
Aug 5 22:07:23.812051 systemd[1677]: Reached target timers.target - Timers.
Aug 5 22:07:23.823189 systemd[1677]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 5 22:07:23.829740 systemd[1677]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 5 22:07:23.829803 systemd[1677]: Reached target sockets.target - Sockets.
Aug 5 22:07:23.829816 systemd[1677]: Reached target basic.target - Basic System.
Aug 5 22:07:23.829858 systemd[1677]: Reached target default.target - Main User Target.
Aug 5 22:07:23.829884 systemd[1677]: Startup finished in 97ms.
Aug 5 22:07:23.830290 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 5 22:07:23.832211 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 5 22:07:23.892652 systemd[1]: Started sshd@1-10.0.0.66:22-10.0.0.1:47780.service - OpenSSH per-connection server daemon (10.0.0.1:47780).
Aug 5 22:07:23.928456 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 47780 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:07:23.929866 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:07:23.934146 systemd-logind[1525]: New session 2 of user core.
Aug 5 22:07:23.947352 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 5 22:07:24.000385 sshd[1689]: pam_unix(sshd:session): session closed for user core
Aug 5 22:07:24.008374 systemd[1]: Started sshd@2-10.0.0.66:22-10.0.0.1:47790.service - OpenSSH per-connection server daemon (10.0.0.1:47790).
Aug 5 22:07:24.008759 systemd[1]: sshd@1-10.0.0.66:22-10.0.0.1:47780.service: Deactivated successfully.
Aug 5 22:07:24.010824 systemd-logind[1525]: Session 2 logged out. Waiting for processes to exit.
Aug 5 22:07:24.011454 systemd[1]: session-2.scope: Deactivated successfully.
Aug 5 22:07:24.012455 systemd-logind[1525]: Removed session 2.
Aug 5 22:07:24.041657 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 47790 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:07:24.042886 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:07:24.047082 systemd-logind[1525]: New session 3 of user core.
Aug 5 22:07:24.056348 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 5 22:07:24.105276 sshd[1694]: pam_unix(sshd:session): session closed for user core
Aug 5 22:07:24.121402 systemd[1]: Started sshd@3-10.0.0.66:22-10.0.0.1:47796.service - OpenSSH per-connection server daemon (10.0.0.1:47796).
Aug 5 22:07:24.121798 systemd[1]: sshd@2-10.0.0.66:22-10.0.0.1:47790.service: Deactivated successfully.
Aug 5 22:07:24.124130 systemd-logind[1525]: Session 3 logged out. Waiting for processes to exit.
Aug 5 22:07:24.124293 systemd[1]: session-3.scope: Deactivated successfully.
Aug 5 22:07:24.126180 systemd-logind[1525]: Removed session 3.
Aug 5 22:07:24.155594 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 47796 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:07:24.157336 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:07:24.161682 systemd-logind[1525]: New session 4 of user core.
Aug 5 22:07:24.174356 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 5 22:07:24.226429 sshd[1702]: pam_unix(sshd:session): session closed for user core
Aug 5 22:07:24.235375 systemd[1]: Started sshd@4-10.0.0.66:22-10.0.0.1:47804.service - OpenSSH per-connection server daemon (10.0.0.1:47804).
Aug 5 22:07:24.235780 systemd[1]: sshd@3-10.0.0.66:22-10.0.0.1:47796.service: Deactivated successfully.
Aug 5 22:07:24.237627 systemd-logind[1525]: Session 4 logged out. Waiting for processes to exit.
Aug 5 22:07:24.238290 systemd[1]: session-4.scope: Deactivated successfully.
Aug 5 22:07:24.240045 systemd-logind[1525]: Removed session 4.
Aug 5 22:07:24.268704 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 47804 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:07:24.269943 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:07:24.274131 systemd-logind[1525]: New session 5 of user core.
Aug 5 22:07:24.284405 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 5 22:07:24.352129 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 5 22:07:24.352392 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:07:24.372053 sudo[1717]: pam_unix(sudo:session): session closed for user root
Aug 5 22:07:24.373958 sshd[1710]: pam_unix(sshd:session): session closed for user core
Aug 5 22:07:24.385388 systemd[1]: Started sshd@5-10.0.0.66:22-10.0.0.1:47814.service - OpenSSH per-connection server daemon (10.0.0.1:47814).
Aug 5 22:07:24.385789 systemd[1]: sshd@4-10.0.0.66:22-10.0.0.1:47804.service: Deactivated successfully.
Aug 5 22:07:24.387721 systemd-logind[1525]: Session 5 logged out. Waiting for processes to exit.
Aug 5 22:07:24.388343 systemd[1]: session-5.scope: Deactivated successfully.
Aug 5 22:07:24.390164 systemd-logind[1525]: Removed session 5.
Aug 5 22:07:24.422390 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 47814 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:07:24.423694 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:07:24.428154 systemd-logind[1525]: New session 6 of user core.
Aug 5 22:07:24.439437 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 5 22:07:24.490477 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 5 22:07:24.490721 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:07:24.494439 sudo[1727]: pam_unix(sudo:session): session closed for user root
Aug 5 22:07:24.499610 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 5 22:07:24.499866 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:07:24.523331 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 5 22:07:24.524900 auditctl[1730]: No rules
Aug 5 22:07:24.525369 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 5 22:07:24.525621 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 5 22:07:24.528134 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 22:07:24.553028 augenrules[1749]: No rules
Aug 5 22:07:24.554378 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 22:07:24.555421 sudo[1726]: pam_unix(sudo:session): session closed for user root
Aug 5 22:07:24.560205 sshd[1719]: pam_unix(sshd:session): session closed for user core
Aug 5 22:07:24.568358 systemd[1]: Started sshd@6-10.0.0.66:22-10.0.0.1:47824.service - OpenSSH per-connection server daemon (10.0.0.1:47824).
Aug 5 22:07:24.568838 systemd[1]: sshd@5-10.0.0.66:22-10.0.0.1:47814.service: Deactivated successfully.
Aug 5 22:07:24.570779 systemd-logind[1525]: Session 6 logged out. Waiting for processes to exit.
Aug 5 22:07:24.571345 systemd[1]: session-6.scope: Deactivated successfully.
Aug 5 22:07:24.572601 systemd-logind[1525]: Removed session 6.
Aug 5 22:07:24.602106 sshd[1755]: Accepted publickey for core from 10.0.0.1 port 47824 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:07:24.603138 sshd[1755]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:07:24.607132 systemd-logind[1525]: New session 7 of user core.
Aug 5 22:07:24.619346 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 5 22:07:24.670944 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 5 22:07:24.671223 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:07:24.771399 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 5 22:07:24.771606 (dockerd)[1773]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 5 22:07:25.014144 dockerd[1773]: time="2024-08-05T22:07:25.014051118Z" level=info msg="Starting up"
Aug 5 22:07:25.194594 dockerd[1773]: time="2024-08-05T22:07:25.194477235Z" level=info msg="Loading containers: start."
Aug 5 22:07:25.300102 kernel: Initializing XFRM netlink socket
Aug 5 22:07:25.371038 systemd-networkd[1244]: docker0: Link UP
Aug 5 22:07:25.409696 dockerd[1773]: time="2024-08-05T22:07:25.409655047Z" level=info msg="Loading containers: done."
Aug 5 22:07:25.469518 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1295398674-merged.mount: Deactivated successfully.
Aug 5 22:07:25.489867 dockerd[1773]: time="2024-08-05T22:07:25.489792678Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 5 22:07:25.490109 dockerd[1773]: time="2024-08-05T22:07:25.490016219Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Aug 5 22:07:25.490245 dockerd[1773]: time="2024-08-05T22:07:25.490215868Z" level=info msg="Daemon has completed initialization"
Aug 5 22:07:25.528020 dockerd[1773]: time="2024-08-05T22:07:25.527796018Z" level=info msg="API listen on /run/docker.sock"
Aug 5 22:07:25.528019 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 5 22:07:26.138411 containerd[1548]: time="2024-08-05T22:07:26.138316981Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\""
Aug 5 22:07:26.783895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3820139.mount: Deactivated successfully.
Aug 5 22:07:28.569669 containerd[1548]: time="2024-08-05T22:07:28.569620186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:28.570683 containerd[1548]: time="2024-08-05T22:07:28.570593501Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.12: active requests=0, bytes read=31601518"
Aug 5 22:07:28.571797 containerd[1548]: time="2024-08-05T22:07:28.571733086Z" level=info msg="ImageCreate event name:\"sha256:57305d93b5cb5db7c2dd71c2936b30c6c300a568c571d915f30e2677e4472260\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:28.574778 containerd[1548]: time="2024-08-05T22:07:28.574737728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:28.576089 containerd[1548]: time="2024-08-05T22:07:28.576043823Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.12\" with image id \"sha256:57305d93b5cb5db7c2dd71c2936b30c6c300a568c571d915f30e2677e4472260\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\", size \"31598316\" in 2.43768547s"
Aug 5 22:07:28.576159 containerd[1548]: time="2024-08-05T22:07:28.576098564Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\" returns image reference \"sha256:57305d93b5cb5db7c2dd71c2936b30c6c300a568c571d915f30e2677e4472260\""
Aug 5 22:07:28.597599 containerd[1548]: time="2024-08-05T22:07:28.597347424Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\""
Aug 5 22:07:29.725331 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 5 22:07:29.735392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:07:29.834390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:07:29.839495 (kubelet)[1990]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:07:29.890575 kubelet[1990]: E0805 22:07:29.890462 1990 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:07:29.896633 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:07:29.896811 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:07:30.341077 containerd[1548]: time="2024-08-05T22:07:30.341000251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:30.342156 containerd[1548]: time="2024-08-05T22:07:30.341884681Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.12: active requests=0, bytes read=29018272"
Aug 5 22:07:30.342933 containerd[1548]: time="2024-08-05T22:07:30.342870192Z" level=info msg="ImageCreate event name:\"sha256:fc5c912cb9569e3e61d6507db0c88360a3e23d7e0cfc589aefe633e02aed582a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:30.345812 containerd[1548]: time="2024-08-05T22:07:30.345773218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:30.347030 containerd[1548]: time="2024-08-05T22:07:30.346987542Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.12\" with image id \"sha256:fc5c912cb9569e3e61d6507db0c88360a3e23d7e0cfc589aefe633e02aed582a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\", size \"30505537\" in 1.749597936s"
Aug 5 22:07:30.347030 containerd[1548]: time="2024-08-05T22:07:30.347027100Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\" returns image reference \"sha256:fc5c912cb9569e3e61d6507db0c88360a3e23d7e0cfc589aefe633e02aed582a\""
Aug 5 22:07:30.367532 containerd[1548]: time="2024-08-05T22:07:30.367476375Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\""
Aug 5 22:07:31.397623 containerd[1548]: time="2024-08-05T22:07:31.397571887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:31.398658 containerd[1548]: time="2024-08-05T22:07:31.398464674Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.12: active requests=0, bytes read=15534522"
Aug 5 22:07:31.399312 containerd[1548]: time="2024-08-05T22:07:31.399279405Z" level=info msg="ImageCreate event name:\"sha256:662db3bc8add7dd68943303fde6906c5c4b372a71ed52107b4272181f3041869\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:31.402125 containerd[1548]: time="2024-08-05T22:07:31.402088070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:31.403412 containerd[1548]: time="2024-08-05T22:07:31.403291074Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.12\" with image id \"sha256:662db3bc8add7dd68943303fde6906c5c4b372a71ed52107b4272181f3041869\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\", size \"17021805\" in 1.035748931s"
Aug 5 22:07:31.403412 containerd[1548]: time="2024-08-05T22:07:31.403328059Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\" returns image reference \"sha256:662db3bc8add7dd68943303fde6906c5c4b372a71ed52107b4272181f3041869\""
Aug 5 22:07:31.422179 containerd[1548]: time="2024-08-05T22:07:31.422146779Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\""
Aug 5 22:07:32.394243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1924753105.mount: Deactivated successfully.
Aug 5 22:07:32.706116 containerd[1548]: time="2024-08-05T22:07:32.705973037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:32.706888 containerd[1548]: time="2024-08-05T22:07:32.706741963Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.12: active requests=0, bytes read=24977921"
Aug 5 22:07:32.707531 containerd[1548]: time="2024-08-05T22:07:32.707491099Z" level=info msg="ImageCreate event name:\"sha256:d3c27a9ad523d0e17d8e5f3f587a49f9c4b611f30f1851fe0bc1240e53a2084b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:32.710091 containerd[1548]: time="2024-08-05T22:07:32.709704134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:32.710717 containerd[1548]: time="2024-08-05T22:07:32.710271034Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.12\" with image id \"sha256:d3c27a9ad523d0e17d8e5f3f587a49f9c4b611f30f1851fe0bc1240e53a2084b\", repo tag \"registry.k8s.io/kube-proxy:v1.28.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\", size \"24976938\" in 1.287953721s"
Aug 5 22:07:32.710717 containerd[1548]: time="2024-08-05T22:07:32.710311695Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\" returns image reference \"sha256:d3c27a9ad523d0e17d8e5f3f587a49f9c4b611f30f1851fe0bc1240e53a2084b\""
Aug 5 22:07:32.730336 containerd[1548]: time="2024-08-05T22:07:32.730287100Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Aug 5 22:07:33.138188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2051523085.mount: Deactivated successfully.
Aug 5 22:07:33.142943 containerd[1548]: time="2024-08-05T22:07:33.142731514Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:33.143617 containerd[1548]: time="2024-08-05T22:07:33.143580560Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Aug 5 22:07:33.144406 containerd[1548]: time="2024-08-05T22:07:33.144338245Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:33.146521 containerd[1548]: time="2024-08-05T22:07:33.146478285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:33.147499 containerd[1548]: time="2024-08-05T22:07:33.147463432Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 417.133629ms"
Aug 5 22:07:33.147499 containerd[1548]: time="2024-08-05T22:07:33.147497838Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Aug 5 22:07:33.168280 containerd[1548]: time="2024-08-05T22:07:33.168238037Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Aug 5 22:07:33.720632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount906738543.mount: Deactivated successfully.
Aug 5 22:07:35.643585 containerd[1548]: time="2024-08-05T22:07:35.643505945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:35.644127 containerd[1548]: time="2024-08-05T22:07:35.644097706Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Aug 5 22:07:35.645180 containerd[1548]: time="2024-08-05T22:07:35.645138284Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:35.648244 containerd[1548]: time="2024-08-05T22:07:35.648201837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:35.649638 containerd[1548]: time="2024-08-05T22:07:35.649553250Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.481274202s"
Aug 5 22:07:35.649638 containerd[1548]: time="2024-08-05T22:07:35.649592650Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Aug 5 22:07:35.669784 containerd[1548]: time="2024-08-05T22:07:35.669742767Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Aug 5 22:07:36.272933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1490375714.mount: Deactivated successfully.
Aug 5 22:07:36.594910 containerd[1548]: time="2024-08-05T22:07:36.594787574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:36.595828 containerd[1548]: time="2024-08-05T22:07:36.595789505Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558464"
Aug 5 22:07:36.596491 containerd[1548]: time="2024-08-05T22:07:36.596459221Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:36.599649 containerd[1548]: time="2024-08-05T22:07:36.599412487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:07:36.600318 containerd[1548]: time="2024-08-05T22:07:36.600199787Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 930.414899ms"
Aug 5 22:07:36.600318 containerd[1548]: time="2024-08-05T22:07:36.600238582Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\""
Aug 5 22:07:40.141738 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 5 22:07:40.155496 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:07:40.243708 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:07:40.247489 (kubelet)[2186]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:07:40.287991 kubelet[2186]: E0805 22:07:40.287897 2186 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:07:40.290740 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:07:40.290926 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:07:41.903150 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:07:41.917289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:07:41.938445 systemd[1]: Reloading requested from client PID 2203 ('systemctl') (unit session-7.scope)...
Aug 5 22:07:41.938460 systemd[1]: Reloading...
Aug 5 22:07:42.027093 zram_generator::config[2238]: No configuration found.
Aug 5 22:07:42.122709 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:07:42.174673 systemd[1]: Reloading finished in 235 ms.
Aug 5 22:07:42.209290 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 5 22:07:42.209358 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 5 22:07:42.209613 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:07:42.211737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:07:42.303083 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:07:42.307255 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 22:07:42.349799 kubelet[2298]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:07:42.349799 kubelet[2298]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 22:07:42.349799 kubelet[2298]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:07:42.350182 kubelet[2298]: I0805 22:07:42.349842 2298 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 22:07:43.029102 kubelet[2298]: I0805 22:07:43.028561 2298 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Aug 5 22:07:43.029102 kubelet[2298]: I0805 22:07:43.028589 2298 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 22:07:43.029102 kubelet[2298]: I0805 22:07:43.028804 2298 server.go:895] "Client rotation is on, will bootstrap in background"
Aug 5 22:07:43.072589 kubelet[2298]: I0805 22:07:43.072438 2298 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 22:07:43.076162 kubelet[2298]: E0805 22:07:43.076123 2298 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.66:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.66:6443: connect: connection refused
Aug 5 22:07:43.080825 kubelet[2298]: W0805 22:07:43.080790 2298 machine.go:65] Cannot read vendor id correctly, set empty.
Aug 5 22:07:43.081544 kubelet[2298]: I0805 22:07:43.081523 2298 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 5 22:07:43.081867 kubelet[2298]: I0805 22:07:43.081845 2298 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 22:07:43.082030 kubelet[2298]: I0805 22:07:43.082016 2298 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 22:07:43.082126 kubelet[2298]: I0805 22:07:43.082040 2298 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 22:07:43.082126 kubelet[2298]: I0805 22:07:43.082049 2298 container_manager_linux.go:301] "Creating device plugin manager"
Aug 5 22:07:43.082253 kubelet[2298]: I0805 22:07:43.082239 2298 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:07:43.085204 kubelet[2298]: I0805 22:07:43.085179 2298 kubelet.go:393] "Attempting to sync node with API server"
Aug 5 22:07:43.085246 kubelet[2298]: I0805 22:07:43.085208 2298 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 5 22:07:43.086822 kubelet[2298]: I0805 22:07:43.085361 2298 kubelet.go:309] "Adding apiserver pod source"
Aug 5 22:07:43.086822 kubelet[2298]: I0805 22:07:43.085382 2298 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 5 22:07:43.086822 kubelet[2298]: W0805 22:07:43.085555 2298 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused
Aug 5 22:07:43.086822 kubelet[2298]: E0805 22:07:43.085602 2298 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused
Aug 5 22:07:43.087508 kubelet[2298]: I0805 22:07:43.087233 2298 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Aug 5 22:07:43.089208 kubelet[2298]: W0805 22:07:43.089189 2298 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 5 22:07:43.089953 kubelet[2298]: W0805 22:07:43.089911 2298 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused
Aug 5 22:07:43.089953 kubelet[2298]: E0805 22:07:43.089952 2298 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused
Aug 5 22:07:43.090253 kubelet[2298]: I0805 22:07:43.090117 2298 server.go:1232] "Started kubelet"
Aug 5 22:07:43.090608 kubelet[2298]: I0805 22:07:43.090583 2298 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 5 22:07:43.093407 kubelet[2298]: I0805 22:07:43.090596 2298 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Aug 5 22:07:43.094092 kubelet[2298]: I0805 22:07:43.093610 2298 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 5 22:07:43.094092 kubelet[2298]: E0805 22:07:43.093729 2298 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Aug 5 22:07:43.094092 kubelet[2298]: E0805 22:07:43.093765 2298 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 5 22:07:43.094512 kubelet[2298]: E0805 22:07:43.094395 2298 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17e8f47a2f8bbde2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.August, 5, 22, 7, 43, 90089442, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 22, 7, 43, 90089442, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.66:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.66:6443: connect: connection refused'(may retry after sleeping)
Aug 5 22:07:43.094682 kubelet[2298]: I0805 22:07:43.094665 2298 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 5 22:07:43.095527 kubelet[2298]: I0805 22:07:43.094915 2298 server.go:462] "Adding debug handlers to kubelet server"
Aug 5 22:07:43.095527 kubelet[2298]: I0805 22:07:43.095102 2298 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 22:07:43.096259 kubelet[2298]: I0805 22:07:43.096236 2298 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 5 22:07:43.096325 kubelet[2298]: I0805 22:07:43.096316 2298 reconciler_new.go:29] "Reconciler: start to sync state"
Aug 5 22:07:43.096960 kubelet[2298]: W0805 22:07:43.096647 2298 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused
Aug 5 22:07:43.096960 kubelet[2298]: E0805 22:07:43.096689 2298 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused
Aug 5 22:07:43.096960 kubelet[2298]: E0805 22:07:43.096725 2298 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 5 22:07:43.097228 kubelet[2298]: E0805 22:07:43.097161 2298 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.66:6443: connect: connection refused" interval="200ms"
Aug 5 22:07:43.108981 kubelet[2298]: I0805 22:07:43.108947 2298 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 5 22:07:43.110088 kubelet[2298]: I0805 22:07:43.109888 2298 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 5 22:07:43.110088 kubelet[2298]: I0805 22:07:43.109906 2298 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 5 22:07:43.110088 kubelet[2298]: I0805 22:07:43.109922 2298 kubelet.go:2303] "Starting kubelet main sync loop"
Aug 5 22:07:43.110088 kubelet[2298]: E0805 22:07:43.109971 2298 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 5 22:07:43.114934 kubelet[2298]: W0805 22:07:43.114875 2298 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused
Aug 5 22:07:43.115017 kubelet[2298]: E0805 22:07:43.114940 2298 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused
Aug 5 22:07:43.130963 kubelet[2298]: I0805 22:07:43.130936 2298 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 5 22:07:43.130963 kubelet[2298]: I0805 22:07:43.130960 2298 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 5 22:07:43.131118 kubelet[2298]: I0805 22:07:43.130980 2298 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:07:43.198709 kubelet[2298]: I0805 22:07:43.198681 2298 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Aug 5 22:07:43.199132 kubelet[2298]: E0805 22:07:43.199111 2298 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.66:6443/api/v1/nodes\": dial tcp 10.0.0.66:6443: connect: connection refused" node="localhost"
Aug 5 22:07:43.210236 kubelet[2298]: E0805 22:07:43.210201 2298 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 5 22:07:43.290734 kubelet[2298]: I0805 22:07:43.290624 2298 policy_none.go:49] "None policy: Start"
Aug 5 22:07:43.291692 kubelet[2298]: I0805 22:07:43.291552 2298 memory_manager.go:169] "Starting memorymanager" policy="None"
Aug 5 22:07:43.291692 kubelet[2298]: I0805 22:07:43.291584 2298 state_mem.go:35] "Initializing new in-memory state store"
Aug 5 22:07:43.296419 kubelet[2298]: I0805 22:07:43.296390 2298 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 5 22:07:43.297534 kubelet[2298]: I0805 22:07:43.297277 2298 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 5 22:07:43.298004 kubelet[2298]: E0805 22:07:43.297909 2298 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.66:6443: connect: connection refused" interval="400ms"
Aug 5 22:07:43.298004 kubelet[2298]: E0805 22:07:43.298008 2298 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Aug 5 22:07:43.400503 kubelet[2298]: I0805 22:07:43.400470 2298 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Aug 5 22:07:43.400892 kubelet[2298]: E0805 22:07:43.400851 2298 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.66:6443/api/v1/nodes\": dial tcp 10.0.0.66:6443: connect: connection refused" node="localhost"
Aug 5 22:07:43.411170 kubelet[2298]: I0805 22:07:43.411131 2298 topology_manager.go:215] "Topology Admit Handler" podUID="26fdf085cb8f5c4eb5de14618e480d58" podNamespace="kube-system" podName="kube-apiserver-localhost"
Aug 5 22:07:43.412177 kubelet[2298]: I0805 22:07:43.412080 2298 topology_manager.go:215] "Topology Admit Handler" podUID="09d96cdeded1d5a51a9712d8a1a0b54a" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Aug 5 22:07:43.413004 kubelet[2298]: I0805 22:07:43.412841 2298 topology_manager.go:215] "Topology Admit Handler" podUID="0cc03c154af91f38c5530287ae9cc549" podNamespace="kube-system" podName="kube-scheduler-localhost"
Aug 5 22:07:43.497864 kubelet[2298]: I0805 22:07:43.497828 2298 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:07:43.497864 kubelet[2298]: I0805 22:07:43.497872 2298 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:07:43.498032 kubelet[2298]: I0805 22:07:43.497903 2298 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost"
Aug 5 22:07:43.498032 kubelet[2298]: I0805 22:07:43.497928 2298 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/26fdf085cb8f5c4eb5de14618e480d58-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"26fdf085cb8f5c4eb5de14618e480d58\") " pod="kube-system/kube-apiserver-localhost"
Aug 5 22:07:43.498032 kubelet[2298]: I0805
22:07:43.497949 2298 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/26fdf085cb8f5c4eb5de14618e480d58-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"26fdf085cb8f5c4eb5de14618e480d58\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:07:43.498032 kubelet[2298]: I0805 22:07:43.497967 2298 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:07:43.498032 kubelet[2298]: I0805 22:07:43.497985 2298 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/26fdf085cb8f5c4eb5de14618e480d58-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"26fdf085cb8f5c4eb5de14618e480d58\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:07:43.498161 kubelet[2298]: I0805 22:07:43.498004 2298 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:07:43.498161 kubelet[2298]: I0805 22:07:43.498024 2298 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0cc03c154af91f38c5530287ae9cc549-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0cc03c154af91f38c5530287ae9cc549\") " pod="kube-system/kube-scheduler-localhost" Aug 5 22:07:43.698860 kubelet[2298]: E0805 22:07:43.698829 2298 
controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.66:6443: connect: connection refused" interval="800ms" Aug 5 22:07:43.716717 kubelet[2298]: E0805 22:07:43.716689 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:43.716815 kubelet[2298]: E0805 22:07:43.716687 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:43.717375 containerd[1548]: time="2024-08-05T22:07:43.717336285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:09d96cdeded1d5a51a9712d8a1a0b54a,Namespace:kube-system,Attempt:0,}" Aug 5 22:07:43.718031 containerd[1548]: time="2024-08-05T22:07:43.717363254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:26fdf085cb8f5c4eb5de14618e480d58,Namespace:kube-system,Attempt:0,}" Aug 5 22:07:43.719006 kubelet[2298]: E0805 22:07:43.718938 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:43.719284 containerd[1548]: time="2024-08-05T22:07:43.719257756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0cc03c154af91f38c5530287ae9cc549,Namespace:kube-system,Attempt:0,}" Aug 5 22:07:43.805467 kubelet[2298]: I0805 22:07:43.805179 2298 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Aug 5 22:07:43.805563 kubelet[2298]: E0805 22:07:43.805480 2298 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.66:6443/api/v1/nodes\": 
dial tcp 10.0.0.66:6443: connect: connection refused" node="localhost" Aug 5 22:07:44.140585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3927568785.mount: Deactivated successfully. Aug 5 22:07:44.144941 containerd[1548]: time="2024-08-05T22:07:44.144864386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:07:44.146486 containerd[1548]: time="2024-08-05T22:07:44.146452712Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:07:44.147015 containerd[1548]: time="2024-08-05T22:07:44.146984354Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:07:44.148301 containerd[1548]: time="2024-08-05T22:07:44.148261345Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:07:44.148714 containerd[1548]: time="2024-08-05T22:07:44.148606490Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Aug 5 22:07:44.149193 containerd[1548]: time="2024-08-05T22:07:44.149169903Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:07:44.150077 containerd[1548]: time="2024-08-05T22:07:44.150040289Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:07:44.152403 containerd[1548]: time="2024-08-05T22:07:44.152333590Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 434.442472ms" Aug 5 22:07:44.154122 containerd[1548]: time="2024-08-05T22:07:44.153612301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:07:44.154557 containerd[1548]: time="2024-08-05T22:07:44.154516577Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 435.187557ms" Aug 5 22:07:44.156259 containerd[1548]: time="2024-08-05T22:07:44.156210735Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 438.379638ms" Aug 5 22:07:44.164109 kubelet[2298]: W0805 22:07:44.164040 2298 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Aug 5 22:07:44.164186 kubelet[2298]: E0805 22:07:44.164121 2298 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: 
connect: connection refused Aug 5 22:07:44.306281 kubelet[2298]: W0805 22:07:44.306217 2298 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Aug 5 22:07:44.306281 kubelet[2298]: E0805 22:07:44.306276 2298 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Aug 5 22:07:44.336833 containerd[1548]: time="2024-08-05T22:07:44.336718604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:07:44.336833 containerd[1548]: time="2024-08-05T22:07:44.336776182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:07:44.337128 containerd[1548]: time="2024-08-05T22:07:44.336807191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:07:44.337128 containerd[1548]: time="2024-08-05T22:07:44.336829958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:07:44.337824 containerd[1548]: time="2024-08-05T22:07:44.337755801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:07:44.337824 containerd[1548]: time="2024-08-05T22:07:44.337802856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:07:44.337824 containerd[1548]: time="2024-08-05T22:07:44.337820501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:07:44.337824 containerd[1548]: time="2024-08-05T22:07:44.337836386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:07:44.338079 containerd[1548]: time="2024-08-05T22:07:44.337886161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:07:44.338079 containerd[1548]: time="2024-08-05T22:07:44.338032086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:07:44.338223 containerd[1548]: time="2024-08-05T22:07:44.338056413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:07:44.338223 containerd[1548]: time="2024-08-05T22:07:44.338081341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:07:44.384713 containerd[1548]: time="2024-08-05T22:07:44.384614088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:09d96cdeded1d5a51a9712d8a1a0b54a,Namespace:kube-system,Attempt:0,} returns sandbox id \"56a4ed9dd8865ecd55354e4c4121e742b9b7fa6fed2270f623769fb466a70866\"" Aug 5 22:07:44.386010 kubelet[2298]: E0805 22:07:44.385915 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:44.389407 containerd[1548]: time="2024-08-05T22:07:44.389120906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0cc03c154af91f38c5530287ae9cc549,Namespace:kube-system,Attempt:0,} returns sandbox id \"cde64b14b29dab6c3fbeb2c62c00eb1d9fe5d954ff418fe80e5031394f4a06dc\"" Aug 5 22:07:44.389485 containerd[1548]: time="2024-08-05T22:07:44.389418517Z" level=info msg="CreateContainer within sandbox \"56a4ed9dd8865ecd55354e4c4121e742b9b7fa6fed2270f623769fb466a70866\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 5 22:07:44.390068 kubelet[2298]: E0805 22:07:44.390047 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:44.392433 containerd[1548]: time="2024-08-05T22:07:44.392399388Z" level=info msg="CreateContainer within sandbox \"cde64b14b29dab6c3fbeb2c62c00eb1d9fe5d954ff418fe80e5031394f4a06dc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 5 22:07:44.395254 containerd[1548]: time="2024-08-05T22:07:44.395207687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:26fdf085cb8f5c4eb5de14618e480d58,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"cf68ee1bc62a314a8f5dd7a678f85fe1fccea122301bc91e23fca2d8f2425af8\"" Aug 5 22:07:44.395817 kubelet[2298]: E0805 22:07:44.395706 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:44.398397 containerd[1548]: time="2024-08-05T22:07:44.398365812Z" level=info msg="CreateContainer within sandbox \"cf68ee1bc62a314a8f5dd7a678f85fe1fccea122301bc91e23fca2d8f2425af8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 5 22:07:44.405915 containerd[1548]: time="2024-08-05T22:07:44.405878789Z" level=info msg="CreateContainer within sandbox \"56a4ed9dd8865ecd55354e4c4121e742b9b7fa6fed2270f623769fb466a70866\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b93337f34d3d23b6d1022d51a60c3dae4eec400cbbf2b22d03f10969008a81c4\"" Aug 5 22:07:44.406566 containerd[1548]: time="2024-08-05T22:07:44.406543352Z" level=info msg="StartContainer for \"b93337f34d3d23b6d1022d51a60c3dae4eec400cbbf2b22d03f10969008a81c4\"" Aug 5 22:07:44.409880 containerd[1548]: time="2024-08-05T22:07:44.409844442Z" level=info msg="CreateContainer within sandbox \"cde64b14b29dab6c3fbeb2c62c00eb1d9fe5d954ff418fe80e5031394f4a06dc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"807f2a467f3133a0dfdda121b42f9e8a44a46a2fffb2ebd10fe1145c55e6a971\"" Aug 5 22:07:44.410684 containerd[1548]: time="2024-08-05T22:07:44.410614597Z" level=info msg="StartContainer for \"807f2a467f3133a0dfdda121b42f9e8a44a46a2fffb2ebd10fe1145c55e6a971\"" Aug 5 22:07:44.416754 containerd[1548]: time="2024-08-05T22:07:44.416703899Z" level=info msg="CreateContainer within sandbox \"cf68ee1bc62a314a8f5dd7a678f85fe1fccea122301bc91e23fca2d8f2425af8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7428413d55488e6c362a753b1ee19865be676623fdd77ab7d4c776bf6d97823b\"" Aug 5 22:07:44.418165 
containerd[1548]: time="2024-08-05T22:07:44.417146914Z" level=info msg="StartContainer for \"7428413d55488e6c362a753b1ee19865be676623fdd77ab7d4c776bf6d97823b\"" Aug 5 22:07:44.471027 containerd[1548]: time="2024-08-05T22:07:44.470956686Z" level=info msg="StartContainer for \"807f2a467f3133a0dfdda121b42f9e8a44a46a2fffb2ebd10fe1145c55e6a971\" returns successfully" Aug 5 22:07:44.471141 containerd[1548]: time="2024-08-05T22:07:44.471044633Z" level=info msg="StartContainer for \"b93337f34d3d23b6d1022d51a60c3dae4eec400cbbf2b22d03f10969008a81c4\" returns successfully" Aug 5 22:07:44.483856 kubelet[2298]: W0805 22:07:44.483692 2298 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Aug 5 22:07:44.483856 kubelet[2298]: E0805 22:07:44.483766 2298 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Aug 5 22:07:44.489191 containerd[1548]: time="2024-08-05T22:07:44.489135004Z" level=info msg="StartContainer for \"7428413d55488e6c362a753b1ee19865be676623fdd77ab7d4c776bf6d97823b\" returns successfully" Aug 5 22:07:44.492021 kubelet[2298]: W0805 22:07:44.491940 2298 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Aug 5 22:07:44.492021 kubelet[2298]: E0805 22:07:44.491992 2298 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://10.0.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.66:6443: connect: connection refused Aug 5 22:07:44.499604 kubelet[2298]: E0805 22:07:44.499556 2298 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.66:6443: connect: connection refused" interval="1.6s" Aug 5 22:07:44.607686 kubelet[2298]: I0805 22:07:44.607358 2298 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Aug 5 22:07:44.607686 kubelet[2298]: E0805 22:07:44.607651 2298 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.66:6443/api/v1/nodes\": dial tcp 10.0.0.66:6443: connect: connection refused" node="localhost" Aug 5 22:07:45.122815 kubelet[2298]: E0805 22:07:45.122713 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:45.129374 kubelet[2298]: E0805 22:07:45.129339 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:45.131310 kubelet[2298]: E0805 22:07:45.131287 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:46.134077 kubelet[2298]: E0805 22:07:46.134022 2298 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:46.210396 kubelet[2298]: I0805 22:07:46.209280 2298 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Aug 5 22:07:46.386949 kubelet[2298]: I0805 22:07:46.386568 2298 
kubelet_node_status.go:73] "Successfully registered node" node="localhost" Aug 5 22:07:47.089340 kubelet[2298]: I0805 22:07:47.089241 2298 apiserver.go:52] "Watching apiserver" Aug 5 22:07:47.096703 kubelet[2298]: I0805 22:07:47.096664 2298 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 22:07:48.928673 systemd[1]: Reloading requested from client PID 2580 ('systemctl') (unit session-7.scope)... Aug 5 22:07:48.928688 systemd[1]: Reloading... Aug 5 22:07:48.990150 zram_generator::config[2620]: No configuration found. Aug 5 22:07:49.072599 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:07:49.130610 systemd[1]: Reloading finished in 201 ms. Aug 5 22:07:49.159581 kubelet[2298]: I0805 22:07:49.159501 2298 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:07:49.159545 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:07:49.175349 systemd[1]: kubelet.service: Deactivated successfully. Aug 5 22:07:49.175683 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:07:49.186521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:07:49.325182 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:07:49.329815 (kubelet)[2669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 22:07:49.370054 kubelet[2669]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 5 22:07:49.370054 kubelet[2669]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 22:07:49.370054 kubelet[2669]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:07:49.370418 kubelet[2669]: I0805 22:07:49.370125 2669 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 22:07:49.375233 kubelet[2669]: I0805 22:07:49.374935 2669 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 5 22:07:49.375233 kubelet[2669]: I0805 22:07:49.374958 2669 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 22:07:49.375233 kubelet[2669]: I0805 22:07:49.375136 2669 server.go:895] "Client rotation is on, will bootstrap in background" Aug 5 22:07:49.376729 kubelet[2669]: I0805 22:07:49.376706 2669 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 5 22:07:49.377746 kubelet[2669]: I0805 22:07:49.377710 2669 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:07:49.383237 kubelet[2669]: W0805 22:07:49.383208 2669 machine.go:65] Cannot read vendor id correctly, set empty. Aug 5 22:07:49.385597 kubelet[2669]: I0805 22:07:49.384219 2669 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 5 22:07:49.385597 kubelet[2669]: I0805 22:07:49.384708 2669 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 22:07:49.385597 kubelet[2669]: I0805 22:07:49.384861 2669 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 22:07:49.385597 kubelet[2669]: I0805 22:07:49.384880 2669 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 22:07:49.385597 kubelet[2669]: I0805 22:07:49.384889 2669 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 22:07:49.385597 kubelet[2669]: I0805 
22:07:49.384918 2669 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:07:49.387618 kubelet[2669]: I0805 22:07:49.384997 2669 kubelet.go:393] "Attempting to sync node with API server" Aug 5 22:07:49.387618 kubelet[2669]: I0805 22:07:49.385010 2669 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 22:07:49.387618 kubelet[2669]: I0805 22:07:49.385032 2669 kubelet.go:309] "Adding apiserver pod source" Aug 5 22:07:49.387618 kubelet[2669]: I0805 22:07:49.385042 2669 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 22:07:49.387618 kubelet[2669]: I0805 22:07:49.385869 2669 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Aug 5 22:07:49.387618 kubelet[2669]: I0805 22:07:49.386382 2669 server.go:1232] "Started kubelet" Aug 5 22:07:49.387618 kubelet[2669]: I0805 22:07:49.386953 2669 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 5 22:07:49.387618 kubelet[2669]: I0805 22:07:49.387187 2669 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 22:07:49.387618 kubelet[2669]: I0805 22:07:49.387226 2669 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 22:07:49.387618 kubelet[2669]: I0805 22:07:49.387593 2669 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 22:07:49.390694 kubelet[2669]: I0805 22:07:49.387909 2669 server.go:462] "Adding debug handlers to kubelet server" Aug 5 22:07:49.390694 kubelet[2669]: I0805 22:07:49.388783 2669 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 22:07:49.390694 kubelet[2669]: I0805 22:07:49.390250 2669 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 22:07:49.390694 kubelet[2669]: I0805 22:07:49.390285 2669 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 5 22:07:49.394352 kubelet[2669]: 
E0805 22:07:49.394146 2669 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Aug 5 22:07:49.399596 kubelet[2669]: E0805 22:07:49.398121 2669 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 22:07:49.420177 kubelet[2669]: I0805 22:07:49.420151 2669 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 22:07:49.421315 kubelet[2669]: I0805 22:07:49.421288 2669 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 5 22:07:49.421403 kubelet[2669]: I0805 22:07:49.421388 2669 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 22:07:49.421475 kubelet[2669]: I0805 22:07:49.421466 2669 kubelet.go:2303] "Starting kubelet main sync loop" Aug 5 22:07:49.421591 kubelet[2669]: E0805 22:07:49.421580 2669 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 22:07:49.483903 kubelet[2669]: I0805 22:07:49.483129 2669 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 22:07:49.483903 kubelet[2669]: I0805 22:07:49.483154 2669 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 22:07:49.483903 kubelet[2669]: I0805 22:07:49.483173 2669 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:07:49.483903 kubelet[2669]: I0805 22:07:49.483334 2669 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 5 22:07:49.483903 kubelet[2669]: I0805 22:07:49.483356 2669 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 5 22:07:49.483903 kubelet[2669]: I0805 22:07:49.483362 2669 policy_none.go:49] "None policy: Start" Aug 5 22:07:49.484437 kubelet[2669]: I0805 22:07:49.484411 2669 memory_manager.go:169] "Starting 
memorymanager" policy="None" Aug 5 22:07:49.484437 kubelet[2669]: I0805 22:07:49.484441 2669 state_mem.go:35] "Initializing new in-memory state store" Aug 5 22:07:49.484648 kubelet[2669]: I0805 22:07:49.484626 2669 state_mem.go:75] "Updated machine memory state" Aug 5 22:07:49.486511 kubelet[2669]: I0805 22:07:49.486477 2669 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 22:07:49.486723 kubelet[2669]: I0805 22:07:49.486706 2669 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 22:07:49.492335 kubelet[2669]: I0805 22:07:49.492247 2669 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Aug 5 22:07:49.498546 kubelet[2669]: I0805 22:07:49.498517 2669 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Aug 5 22:07:49.498626 kubelet[2669]: I0805 22:07:49.498598 2669 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Aug 5 22:07:49.522300 kubelet[2669]: I0805 22:07:49.522262 2669 topology_manager.go:215] "Topology Admit Handler" podUID="26fdf085cb8f5c4eb5de14618e480d58" podNamespace="kube-system" podName="kube-apiserver-localhost" Aug 5 22:07:49.522451 kubelet[2669]: I0805 22:07:49.522374 2669 topology_manager.go:215] "Topology Admit Handler" podUID="09d96cdeded1d5a51a9712d8a1a0b54a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Aug 5 22:07:49.522451 kubelet[2669]: I0805 22:07:49.522418 2669 topology_manager.go:215] "Topology Admit Handler" podUID="0cc03c154af91f38c5530287ae9cc549" podNamespace="kube-system" podName="kube-scheduler-localhost" Aug 5 22:07:49.691944 kubelet[2669]: I0805 22:07:49.691898 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/26fdf085cb8f5c4eb5de14618e480d58-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"26fdf085cb8f5c4eb5de14618e480d58\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:07:49.691944 kubelet[2669]: I0805 22:07:49.691942 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:07:49.692121 kubelet[2669]: I0805 22:07:49.691965 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:07:49.692121 kubelet[2669]: I0805 22:07:49.691989 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:07:49.692121 kubelet[2669]: I0805 22:07:49.692011 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0cc03c154af91f38c5530287ae9cc549-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0cc03c154af91f38c5530287ae9cc549\") " pod="kube-system/kube-scheduler-localhost" Aug 5 22:07:49.692121 kubelet[2669]: I0805 22:07:49.692034 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/26fdf085cb8f5c4eb5de14618e480d58-ca-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"26fdf085cb8f5c4eb5de14618e480d58\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:07:49.692121 kubelet[2669]: I0805 22:07:49.692099 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/26fdf085cb8f5c4eb5de14618e480d58-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"26fdf085cb8f5c4eb5de14618e480d58\") " pod="kube-system/kube-apiserver-localhost" Aug 5 22:07:49.692240 kubelet[2669]: I0805 22:07:49.692139 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:07:49.692240 kubelet[2669]: I0805 22:07:49.692168 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09d96cdeded1d5a51a9712d8a1a0b54a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"09d96cdeded1d5a51a9712d8a1a0b54a\") " pod="kube-system/kube-controller-manager-localhost" Aug 5 22:07:49.829988 kubelet[2669]: E0805 22:07:49.829874 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:49.842407 kubelet[2669]: E0805 22:07:49.842360 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:50.053355 kubelet[2669]: E0805 22:07:50.053278 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:50.388269 
kubelet[2669]: I0805 22:07:50.388161 2669 apiserver.go:52] "Watching apiserver" Aug 5 22:07:50.390716 kubelet[2669]: I0805 22:07:50.390682 2669 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 22:07:50.433052 kubelet[2669]: E0805 22:07:50.433026 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:50.433275 kubelet[2669]: E0805 22:07:50.433258 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:50.437242 kubelet[2669]: E0805 22:07:50.437162 2669 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 5 22:07:50.437656 kubelet[2669]: E0805 22:07:50.437641 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:50.451244 kubelet[2669]: I0805 22:07:50.451109 2669 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.4510449 podCreationTimestamp="2024-08-05 22:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:07:50.451024459 +0000 UTC m=+1.117793464" watchObservedRunningTime="2024-08-05 22:07:50.4510449 +0000 UTC m=+1.117813905" Aug 5 22:07:50.456894 kubelet[2669]: I0805 22:07:50.456760 2669 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.456732519 podCreationTimestamp="2024-08-05 22:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:07:50.456704997 +0000 UTC m=+1.123474002" watchObservedRunningTime="2024-08-05 22:07:50.456732519 +0000 UTC m=+1.123501484" Aug 5 22:07:50.462299 kubelet[2669]: I0805 22:07:50.462267 2669 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.462238523 podCreationTimestamp="2024-08-05 22:07:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:07:50.462000784 +0000 UTC m=+1.128769789" watchObservedRunningTime="2024-08-05 22:07:50.462238523 +0000 UTC m=+1.129007528" Aug 5 22:07:51.437189 kubelet[2669]: E0805 22:07:51.437148 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:54.655387 sudo[1762]: pam_unix(sudo:session): session closed for user root Aug 5 22:07:54.660762 sshd[1755]: pam_unix(sshd:session): session closed for user core Aug 5 22:07:54.664636 systemd[1]: sshd@6-10.0.0.66:22-10.0.0.1:47824.service: Deactivated successfully. Aug 5 22:07:54.666800 systemd-logind[1525]: Session 7 logged out. Waiting for processes to exit. Aug 5 22:07:54.666842 systemd[1]: session-7.scope: Deactivated successfully. Aug 5 22:07:54.668497 systemd-logind[1525]: Removed session 7. 
Aug 5 22:07:55.174653 kubelet[2669]: E0805 22:07:55.174484 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:55.442811 kubelet[2669]: E0805 22:07:55.442418 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:57.572511 kubelet[2669]: E0805 22:07:57.572438 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:58.026699 kubelet[2669]: E0805 22:07:58.026668 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:58.447035 kubelet[2669]: E0805 22:07:58.447006 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:07:58.447035 kubelet[2669]: E0805 22:07:58.447019 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:03.127300 update_engine[1534]: I0805 22:08:03.127244 1534 update_attempter.cc:509] Updating boot flags... 
Aug 5 22:08:03.147122 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2764) Aug 5 22:08:03.174119 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2768) Aug 5 22:08:04.732886 kubelet[2669]: I0805 22:08:04.732842 2669 topology_manager.go:215] "Topology Admit Handler" podUID="424413b0-3636-41ec-8417-7a7a5292611c" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-5tb4h" Aug 5 22:08:04.791683 kubelet[2669]: I0805 22:08:04.791628 2669 topology_manager.go:215] "Topology Admit Handler" podUID="02fb344c-90a6-4f9f-9eb2-68ca1f27b63d" podNamespace="kube-system" podName="kube-proxy-f5k24" Aug 5 22:08:04.795361 kubelet[2669]: I0805 22:08:04.793839 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/424413b0-3636-41ec-8417-7a7a5292611c-var-lib-calico\") pod \"tigera-operator-76c4974c85-5tb4h\" (UID: \"424413b0-3636-41ec-8417-7a7a5292611c\") " pod="tigera-operator/tigera-operator-76c4974c85-5tb4h" Aug 5 22:08:04.795361 kubelet[2669]: I0805 22:08:04.793881 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6qpr\" (UniqueName: \"kubernetes.io/projected/424413b0-3636-41ec-8417-7a7a5292611c-kube-api-access-g6qpr\") pod \"tigera-operator-76c4974c85-5tb4h\" (UID: \"424413b0-3636-41ec-8417-7a7a5292611c\") " pod="tigera-operator/tigera-operator-76c4974c85-5tb4h" Aug 5 22:08:04.864943 kubelet[2669]: I0805 22:08:04.864912 2669 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 5 22:08:04.865636 containerd[1548]: time="2024-08-05T22:08:04.865255039Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 5 22:08:04.865962 kubelet[2669]: I0805 22:08:04.865452 2669 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 5 22:08:04.894743 kubelet[2669]: I0805 22:08:04.894707 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02fb344c-90a6-4f9f-9eb2-68ca1f27b63d-lib-modules\") pod \"kube-proxy-f5k24\" (UID: \"02fb344c-90a6-4f9f-9eb2-68ca1f27b63d\") " pod="kube-system/kube-proxy-f5k24" Aug 5 22:08:04.894825 kubelet[2669]: I0805 22:08:04.894773 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llgft\" (UniqueName: \"kubernetes.io/projected/02fb344c-90a6-4f9f-9eb2-68ca1f27b63d-kube-api-access-llgft\") pod \"kube-proxy-f5k24\" (UID: \"02fb344c-90a6-4f9f-9eb2-68ca1f27b63d\") " pod="kube-system/kube-proxy-f5k24" Aug 5 22:08:04.894825 kubelet[2669]: I0805 22:08:04.894798 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/02fb344c-90a6-4f9f-9eb2-68ca1f27b63d-kube-proxy\") pod \"kube-proxy-f5k24\" (UID: \"02fb344c-90a6-4f9f-9eb2-68ca1f27b63d\") " pod="kube-system/kube-proxy-f5k24" Aug 5 22:08:04.894825 kubelet[2669]: I0805 22:08:04.894818 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02fb344c-90a6-4f9f-9eb2-68ca1f27b63d-xtables-lock\") pod \"kube-proxy-f5k24\" (UID: \"02fb344c-90a6-4f9f-9eb2-68ca1f27b63d\") " pod="kube-system/kube-proxy-f5k24" Aug 5 22:08:05.035556 containerd[1548]: time="2024-08-05T22:08:05.035414526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-5tb4h,Uid:424413b0-3636-41ec-8417-7a7a5292611c,Namespace:tigera-operator,Attempt:0,}" Aug 5 22:08:05.055575 containerd[1548]: 
time="2024-08-05T22:08:05.055138640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:08:05.055575 containerd[1548]: time="2024-08-05T22:08:05.055535894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:08:05.056133 containerd[1548]: time="2024-08-05T22:08:05.055561615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:08:05.056133 containerd[1548]: time="2024-08-05T22:08:05.055575775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:08:05.094498 kubelet[2669]: E0805 22:08:05.094269 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:05.095882 containerd[1548]: time="2024-08-05T22:08:05.095847753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-5tb4h,Uid:424413b0-3636-41ec-8417-7a7a5292611c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bdff9c550036c047c19080de63d2416334ebe852e4466c90bcc3007ee83199a3\"" Aug 5 22:08:05.096388 containerd[1548]: time="2024-08-05T22:08:05.096359812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f5k24,Uid:02fb344c-90a6-4f9f-9eb2-68ca1f27b63d,Namespace:kube-system,Attempt:0,}" Aug 5 22:08:05.099409 containerd[1548]: time="2024-08-05T22:08:05.099366441Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Aug 5 22:08:05.117784 containerd[1548]: time="2024-08-05T22:08:05.117678504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:08:05.117897 containerd[1548]: time="2024-08-05T22:08:05.117853190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:08:05.118259 containerd[1548]: time="2024-08-05T22:08:05.118217883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:08:05.118306 containerd[1548]: time="2024-08-05T22:08:05.118272165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:08:05.147911 containerd[1548]: time="2024-08-05T22:08:05.147876917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f5k24,Uid:02fb344c-90a6-4f9f-9eb2-68ca1f27b63d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e82dcd3c7217adf41ca0a11292769fc3c6159b65fdab88d845156fe5f27ea32\"" Aug 5 22:08:05.148540 kubelet[2669]: E0805 22:08:05.148516 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:05.150882 containerd[1548]: time="2024-08-05T22:08:05.150756901Z" level=info msg="CreateContainer within sandbox \"8e82dcd3c7217adf41ca0a11292769fc3c6159b65fdab88d845156fe5f27ea32\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 5 22:08:05.164043 containerd[1548]: time="2024-08-05T22:08:05.163994821Z" level=info msg="CreateContainer within sandbox \"8e82dcd3c7217adf41ca0a11292769fc3c6159b65fdab88d845156fe5f27ea32\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"82af1d3ceeb752d5849911e9f5c874f33133d3d4ad673b7221cf92748bc0620f\"" Aug 5 22:08:05.165085 containerd[1548]: time="2024-08-05T22:08:05.164685846Z" level=info msg="StartContainer for 
\"82af1d3ceeb752d5849911e9f5c874f33133d3d4ad673b7221cf92748bc0620f\"" Aug 5 22:08:05.219952 containerd[1548]: time="2024-08-05T22:08:05.218524475Z" level=info msg="StartContainer for \"82af1d3ceeb752d5849911e9f5c874f33133d3d4ad673b7221cf92748bc0620f\" returns successfully" Aug 5 22:08:05.462702 kubelet[2669]: E0805 22:08:05.462267 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:05.474411 kubelet[2669]: I0805 22:08:05.474193 2669 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-f5k24" podStartSLOduration=1.4741574100000001 podCreationTimestamp="2024-08-05 22:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:08:05.473998284 +0000 UTC m=+16.140767329" watchObservedRunningTime="2024-08-05 22:08:05.47415741 +0000 UTC m=+16.140926415" Aug 5 22:08:05.944630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2798056593.mount: Deactivated successfully. 
Aug 5 22:08:06.219457 containerd[1548]: time="2024-08-05T22:08:06.219335976Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:06.220178 containerd[1548]: time="2024-08-05T22:08:06.220146204Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473630" Aug 5 22:08:06.221326 containerd[1548]: time="2024-08-05T22:08:06.221287363Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:06.224008 containerd[1548]: time="2024-08-05T22:08:06.223972736Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:06.224763 containerd[1548]: time="2024-08-05T22:08:06.224725482Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 1.12530888s" Aug 5 22:08:06.224803 containerd[1548]: time="2024-08-05T22:08:06.224764203Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Aug 5 22:08:06.226987 containerd[1548]: time="2024-08-05T22:08:06.226945479Z" level=info msg="CreateContainer within sandbox \"bdff9c550036c047c19080de63d2416334ebe852e4466c90bcc3007ee83199a3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 5 22:08:06.237389 containerd[1548]: time="2024-08-05T22:08:06.237330597Z" level=info msg="CreateContainer within sandbox 
\"bdff9c550036c047c19080de63d2416334ebe852e4466c90bcc3007ee83199a3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fafa45d50264bd89c7f9c22805ba25364e594914f3516499bd8c1489f3b370a4\"" Aug 5 22:08:06.238133 containerd[1548]: time="2024-08-05T22:08:06.238097743Z" level=info msg="StartContainer for \"fafa45d50264bd89c7f9c22805ba25364e594914f3516499bd8c1489f3b370a4\"" Aug 5 22:08:06.331072 containerd[1548]: time="2024-08-05T22:08:06.331022189Z" level=info msg="StartContainer for \"fafa45d50264bd89c7f9c22805ba25364e594914f3516499bd8c1489f3b370a4\" returns successfully" Aug 5 22:08:06.475610 kubelet[2669]: I0805 22:08:06.475316 2669 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-5tb4h" podStartSLOduration=1.3477594480000001 podCreationTimestamp="2024-08-05 22:08:04 +0000 UTC" firstStartedPulling="2024-08-05 22:08:05.097570536 +0000 UTC m=+15.764339541" lastFinishedPulling="2024-08-05 22:08:06.225043453 +0000 UTC m=+16.891812458" observedRunningTime="2024-08-05 22:08:06.474979396 +0000 UTC m=+17.141748361" watchObservedRunningTime="2024-08-05 22:08:06.475232365 +0000 UTC m=+17.142001370" Aug 5 22:08:09.986009 kubelet[2669]: I0805 22:08:09.985720 2669 topology_manager.go:215] "Topology Admit Handler" podUID="b119d758-a862-4b4d-bf22-3cd780b3cf90" podNamespace="calico-system" podName="calico-typha-6dbb57fdd5-zlhz2" Aug 5 22:08:10.022387 kubelet[2669]: I0805 22:08:10.022335 2669 topology_manager.go:215] "Topology Admit Handler" podUID="5aa9c697-fc4d-4c19-a18c-449312b7dd98" podNamespace="calico-system" podName="calico-node-jz2fc" Aug 5 22:08:10.028096 kubelet[2669]: I0805 22:08:10.027416 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b119d758-a862-4b4d-bf22-3cd780b3cf90-typha-certs\") pod \"calico-typha-6dbb57fdd5-zlhz2\" (UID: \"b119d758-a862-4b4d-bf22-3cd780b3cf90\") " 
pod="calico-system/calico-typha-6dbb57fdd5-zlhz2" Aug 5 22:08:10.028096 kubelet[2669]: I0805 22:08:10.027474 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlzdm\" (UniqueName: \"kubernetes.io/projected/b119d758-a862-4b4d-bf22-3cd780b3cf90-kube-api-access-wlzdm\") pod \"calico-typha-6dbb57fdd5-zlhz2\" (UID: \"b119d758-a862-4b4d-bf22-3cd780b3cf90\") " pod="calico-system/calico-typha-6dbb57fdd5-zlhz2" Aug 5 22:08:10.028096 kubelet[2669]: I0805 22:08:10.027538 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b119d758-a862-4b4d-bf22-3cd780b3cf90-tigera-ca-bundle\") pod \"calico-typha-6dbb57fdd5-zlhz2\" (UID: \"b119d758-a862-4b4d-bf22-3cd780b3cf90\") " pod="calico-system/calico-typha-6dbb57fdd5-zlhz2" Aug 5 22:08:10.128644 kubelet[2669]: I0805 22:08:10.128139 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5aa9c697-fc4d-4c19-a18c-449312b7dd98-cni-log-dir\") pod \"calico-node-jz2fc\" (UID: \"5aa9c697-fc4d-4c19-a18c-449312b7dd98\") " pod="calico-system/calico-node-jz2fc" Aug 5 22:08:10.128644 kubelet[2669]: I0805 22:08:10.128205 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5aa9c697-fc4d-4c19-a18c-449312b7dd98-cni-bin-dir\") pod \"calico-node-jz2fc\" (UID: \"5aa9c697-fc4d-4c19-a18c-449312b7dd98\") " pod="calico-system/calico-node-jz2fc" Aug 5 22:08:10.128644 kubelet[2669]: I0805 22:08:10.128226 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5aa9c697-fc4d-4c19-a18c-449312b7dd98-tigera-ca-bundle\") pod \"calico-node-jz2fc\" (UID: \"5aa9c697-fc4d-4c19-a18c-449312b7dd98\") " 
pod="calico-system/calico-node-jz2fc" Aug 5 22:08:10.128644 kubelet[2669]: I0805 22:08:10.128244 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5aa9c697-fc4d-4c19-a18c-449312b7dd98-node-certs\") pod \"calico-node-jz2fc\" (UID: \"5aa9c697-fc4d-4c19-a18c-449312b7dd98\") " pod="calico-system/calico-node-jz2fc" Aug 5 22:08:10.128644 kubelet[2669]: I0805 22:08:10.128264 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5aa9c697-fc4d-4c19-a18c-449312b7dd98-lib-modules\") pod \"calico-node-jz2fc\" (UID: \"5aa9c697-fc4d-4c19-a18c-449312b7dd98\") " pod="calico-system/calico-node-jz2fc" Aug 5 22:08:10.128907 kubelet[2669]: I0805 22:08:10.128283 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5aa9c697-fc4d-4c19-a18c-449312b7dd98-xtables-lock\") pod \"calico-node-jz2fc\" (UID: \"5aa9c697-fc4d-4c19-a18c-449312b7dd98\") " pod="calico-system/calico-node-jz2fc" Aug 5 22:08:10.128907 kubelet[2669]: I0805 22:08:10.128300 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5aa9c697-fc4d-4c19-a18c-449312b7dd98-policysync\") pod \"calico-node-jz2fc\" (UID: \"5aa9c697-fc4d-4c19-a18c-449312b7dd98\") " pod="calico-system/calico-node-jz2fc" Aug 5 22:08:10.128907 kubelet[2669]: I0805 22:08:10.128318 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5aa9c697-fc4d-4c19-a18c-449312b7dd98-cni-net-dir\") pod \"calico-node-jz2fc\" (UID: \"5aa9c697-fc4d-4c19-a18c-449312b7dd98\") " pod="calico-system/calico-node-jz2fc" Aug 5 22:08:10.128907 kubelet[2669]: I0805 22:08:10.128339 2669 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5aa9c697-fc4d-4c19-a18c-449312b7dd98-flexvol-driver-host\") pod \"calico-node-jz2fc\" (UID: \"5aa9c697-fc4d-4c19-a18c-449312b7dd98\") " pod="calico-system/calico-node-jz2fc" Aug 5 22:08:10.128907 kubelet[2669]: I0805 22:08:10.128366 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5aa9c697-fc4d-4c19-a18c-449312b7dd98-var-run-calico\") pod \"calico-node-jz2fc\" (UID: \"5aa9c697-fc4d-4c19-a18c-449312b7dd98\") " pod="calico-system/calico-node-jz2fc" Aug 5 22:08:10.129185 kubelet[2669]: I0805 22:08:10.128398 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5aa9c697-fc4d-4c19-a18c-449312b7dd98-var-lib-calico\") pod \"calico-node-jz2fc\" (UID: \"5aa9c697-fc4d-4c19-a18c-449312b7dd98\") " pod="calico-system/calico-node-jz2fc" Aug 5 22:08:10.129185 kubelet[2669]: I0805 22:08:10.128418 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhcm4\" (UniqueName: \"kubernetes.io/projected/5aa9c697-fc4d-4c19-a18c-449312b7dd98-kube-api-access-hhcm4\") pod \"calico-node-jz2fc\" (UID: \"5aa9c697-fc4d-4c19-a18c-449312b7dd98\") " pod="calico-system/calico-node-jz2fc" Aug 5 22:08:10.149024 kubelet[2669]: I0805 22:08:10.148982 2669 topology_manager.go:215] "Topology Admit Handler" podUID="a2005850-2566-453e-9840-897b314819a1" podNamespace="calico-system" podName="csi-node-driver-8gh6b" Aug 5 22:08:10.149335 kubelet[2669]: E0805 22:08:10.149281 2669 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni 
plugin not initialized" pod="calico-system/csi-node-driver-8gh6b" podUID="a2005850-2566-453e-9840-897b314819a1" Aug 5 22:08:10.229967 kubelet[2669]: I0805 22:08:10.229858 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a2005850-2566-453e-9840-897b314819a1-registration-dir\") pod \"csi-node-driver-8gh6b\" (UID: \"a2005850-2566-453e-9840-897b314819a1\") " pod="calico-system/csi-node-driver-8gh6b" Aug 5 22:08:10.229967 kubelet[2669]: I0805 22:08:10.229971 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chmkx\" (UniqueName: \"kubernetes.io/projected/a2005850-2566-453e-9840-897b314819a1-kube-api-access-chmkx\") pod \"csi-node-driver-8gh6b\" (UID: \"a2005850-2566-453e-9840-897b314819a1\") " pod="calico-system/csi-node-driver-8gh6b" Aug 5 22:08:10.230142 kubelet[2669]: I0805 22:08:10.230104 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a2005850-2566-453e-9840-897b314819a1-varrun\") pod \"csi-node-driver-8gh6b\" (UID: \"a2005850-2566-453e-9840-897b314819a1\") " pod="calico-system/csi-node-driver-8gh6b" Aug 5 22:08:10.230169 kubelet[2669]: I0805 22:08:10.230141 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a2005850-2566-453e-9840-897b314819a1-socket-dir\") pod \"csi-node-driver-8gh6b\" (UID: \"a2005850-2566-453e-9840-897b314819a1\") " pod="calico-system/csi-node-driver-8gh6b" Aug 5 22:08:10.231733 kubelet[2669]: I0805 22:08:10.230210 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a2005850-2566-453e-9840-897b314819a1-kubelet-dir\") pod \"csi-node-driver-8gh6b\" (UID: 
\"a2005850-2566-453e-9840-897b314819a1\") " pod="calico-system/csi-node-driver-8gh6b" Aug 5 22:08:10.233757 kubelet[2669]: E0805 22:08:10.233653 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.233827 kubelet[2669]: W0805 22:08:10.233766 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.233827 kubelet[2669]: E0805 22:08:10.233793 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:10.237130 kubelet[2669]: E0805 22:08:10.234139 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.237130 kubelet[2669]: W0805 22:08:10.234166 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.237130 kubelet[2669]: E0805 22:08:10.234188 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:10.237130 kubelet[2669]: E0805 22:08:10.234375 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.237130 kubelet[2669]: W0805 22:08:10.234383 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.237130 kubelet[2669]: E0805 22:08:10.234420 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:10.237130 kubelet[2669]: E0805 22:08:10.234858 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.237130 kubelet[2669]: W0805 22:08:10.234867 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.237130 kubelet[2669]: E0805 22:08:10.234980 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:10.237130 kubelet[2669]: E0805 22:08:10.235272 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.238096 kubelet[2669]: W0805 22:08:10.235281 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.238096 kubelet[2669]: E0805 22:08:10.235295 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:10.238096 kubelet[2669]: E0805 22:08:10.235651 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.238096 kubelet[2669]: W0805 22:08:10.235661 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.238096 kubelet[2669]: E0805 22:08:10.235679 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:10.238096 kubelet[2669]: E0805 22:08:10.235975 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.238096 kubelet[2669]: W0805 22:08:10.235983 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.238096 kubelet[2669]: E0805 22:08:10.236001 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:10.238096 kubelet[2669]: E0805 22:08:10.236246 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.238096 kubelet[2669]: W0805 22:08:10.236254 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.238297 kubelet[2669]: E0805 22:08:10.236271 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:10.238297 kubelet[2669]: E0805 22:08:10.236450 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.238297 kubelet[2669]: W0805 22:08:10.236469 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.238297 kubelet[2669]: E0805 22:08:10.236484 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:10.264209 kubelet[2669]: E0805 22:08:10.264176 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.264209 kubelet[2669]: W0805 22:08:10.264198 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.264209 kubelet[2669]: E0805 22:08:10.264217 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:10.291518 kubelet[2669]: E0805 22:08:10.291488 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:10.292588 containerd[1548]: time="2024-08-05T22:08:10.292362264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dbb57fdd5-zlhz2,Uid:b119d758-a862-4b4d-bf22-3cd780b3cf90,Namespace:calico-system,Attempt:0,}" Aug 5 22:08:10.320057 containerd[1548]: time="2024-08-05T22:08:10.319912415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:08:10.320057 containerd[1548]: time="2024-08-05T22:08:10.319985497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:08:10.320057 containerd[1548]: time="2024-08-05T22:08:10.319999337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:08:10.320333 containerd[1548]: time="2024-08-05T22:08:10.320109180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:08:10.329495 kubelet[2669]: E0805 22:08:10.329438 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:10.331171 containerd[1548]: time="2024-08-05T22:08:10.330295873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jz2fc,Uid:5aa9c697-fc4d-4c19-a18c-449312b7dd98,Namespace:calico-system,Attempt:0,}" Aug 5 22:08:10.331694 kubelet[2669]: E0805 22:08:10.331671 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.332091 kubelet[2669]: W0805 22:08:10.332049 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.332331 kubelet[2669]: E0805 22:08:10.332313 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:10.333257 kubelet[2669]: E0805 22:08:10.333241 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.333391 kubelet[2669]: W0805 22:08:10.333375 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.333476 kubelet[2669]: E0805 22:08:10.333466 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:10.334058 kubelet[2669]: E0805 22:08:10.334016 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.334368 kubelet[2669]: W0805 22:08:10.334030 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.334368 kubelet[2669]: E0805 22:08:10.334343 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:10.335508 kubelet[2669]: E0805 22:08:10.335383 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.335508 kubelet[2669]: W0805 22:08:10.335451 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.335508 kubelet[2669]: E0805 22:08:10.335497 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:10.336191 kubelet[2669]: E0805 22:08:10.336026 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.336191 kubelet[2669]: W0805 22:08:10.336039 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.336191 kubelet[2669]: E0805 22:08:10.336165 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:10.336612 kubelet[2669]: E0805 22:08:10.336599 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.336711 kubelet[2669]: W0805 22:08:10.336685 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.336840 kubelet[2669]: E0805 22:08:10.336821 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:10.337141 kubelet[2669]: E0805 22:08:10.337051 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.337306 kubelet[2669]: W0805 22:08:10.337240 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.338151 kubelet[2669]: E0805 22:08:10.338040 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:10.338561 kubelet[2669]: E0805 22:08:10.338388 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.338561 kubelet[2669]: W0805 22:08:10.338490 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.338752 kubelet[2669]: E0805 22:08:10.338713 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:10.339122 kubelet[2669]: E0805 22:08:10.339052 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.339122 kubelet[2669]: W0805 22:08:10.339087 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.339309 kubelet[2669]: E0805 22:08:10.339266 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:10.340018 kubelet[2669]: E0805 22:08:10.339798 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.340018 kubelet[2669]: W0805 22:08:10.339810 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.340303 kubelet[2669]: E0805 22:08:10.340219 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:10.340788 kubelet[2669]: E0805 22:08:10.340758 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.340918 kubelet[2669]: W0805 22:08:10.340870 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.341012 kubelet[2669]: E0805 22:08:10.340970 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:10.341282 kubelet[2669]: E0805 22:08:10.341262 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.341558 kubelet[2669]: W0805 22:08:10.341424 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.341558 kubelet[2669]: E0805 22:08:10.341487 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:10.341726 kubelet[2669]: E0805 22:08:10.341715 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.341901 kubelet[2669]: W0805 22:08:10.341774 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.342309 kubelet[2669]: E0805 22:08:10.342166 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:10.344284 kubelet[2669]: E0805 22:08:10.344267 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.344467 kubelet[2669]: W0805 22:08:10.344405 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.344572 kubelet[2669]: E0805 22:08:10.344544 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:10.344961 kubelet[2669]: E0805 22:08:10.344851 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.344961 kubelet[2669]: W0805 22:08:10.344936 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.345294 kubelet[2669]: E0805 22:08:10.345241 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:10.345627 kubelet[2669]: E0805 22:08:10.345615 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.346274 kubelet[2669]: W0805 22:08:10.346251 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.346541 kubelet[2669]: E0805 22:08:10.346482 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:10.347202 kubelet[2669]: E0805 22:08:10.347105 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.347202 kubelet[2669]: W0805 22:08:10.347118 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.347386 kubelet[2669]: E0805 22:08:10.347338 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:10.347494 kubelet[2669]: E0805 22:08:10.347485 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.347683 kubelet[2669]: W0805 22:08:10.347633 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.347905 kubelet[2669]: E0805 22:08:10.347746 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:10.348360 kubelet[2669]: E0805 22:08:10.348336 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.348566 kubelet[2669]: W0805 22:08:10.348436 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.348684 kubelet[2669]: E0805 22:08:10.348672 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:10.348973 kubelet[2669]: E0805 22:08:10.348959 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.349049 kubelet[2669]: W0805 22:08:10.349037 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.349450 kubelet[2669]: E0805 22:08:10.349412 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:10.349745 kubelet[2669]: E0805 22:08:10.349673 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.349745 kubelet[2669]: W0805 22:08:10.349685 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.349949 kubelet[2669]: E0805 22:08:10.349853 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:10.350312 kubelet[2669]: E0805 22:08:10.350289 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.350433 kubelet[2669]: W0805 22:08:10.350385 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.350609 kubelet[2669]: E0805 22:08:10.350521 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:10.350734 kubelet[2669]: E0805 22:08:10.350724 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.350866 kubelet[2669]: W0805 22:08:10.350797 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.350976 kubelet[2669]: E0805 22:08:10.350963 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:10.351546 kubelet[2669]: E0805 22:08:10.351405 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.351546 kubelet[2669]: W0805 22:08:10.351420 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.351546 kubelet[2669]: E0805 22:08:10.351437 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:10.352611 kubelet[2669]: E0805 22:08:10.352298 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.352611 kubelet[2669]: W0805 22:08:10.352314 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.352611 kubelet[2669]: E0805 22:08:10.352328 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:10.363045 containerd[1548]: time="2024-08-05T22:08:10.362835526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:08:10.363045 containerd[1548]: time="2024-08-05T22:08:10.363005051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:08:10.363045 containerd[1548]: time="2024-08-05T22:08:10.363026532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:08:10.363045 containerd[1548]: time="2024-08-05T22:08:10.363043052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:08:10.371560 kubelet[2669]: E0805 22:08:10.371536 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:10.372005 kubelet[2669]: W0805 22:08:10.371986 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:10.372128 kubelet[2669]: E0805 22:08:10.372113 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:10.435938 containerd[1548]: time="2024-08-05T22:08:10.435902183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jz2fc,Uid:5aa9c697-fc4d-4c19-a18c-449312b7dd98,Namespace:calico-system,Attempt:0,} returns sandbox id \"52c8e0de2e7575aa15686ecd3e1b18634bf0b68a95c2349ab183dcc5d9c67884\"" Aug 5 22:08:10.439312 containerd[1548]: time="2024-08-05T22:08:10.436329675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dbb57fdd5-zlhz2,Uid:b119d758-a862-4b4d-bf22-3cd780b3cf90,Namespace:calico-system,Attempt:0,} returns sandbox id \"3b26f469e81b2585297fd285bdea416c01ffc23e8b9461f2e19957132867c8cc\"" Aug 5 22:08:10.441141 kubelet[2669]: E0805 22:08:10.441121 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:10.444563 kubelet[2669]: E0805 22:08:10.441424 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:10.453408 containerd[1548]: time="2024-08-05T22:08:10.453349083Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/typha:v3.28.0\"" Aug 5 22:08:11.705214 containerd[1548]: time="2024-08-05T22:08:11.705159691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:11.705736 containerd[1548]: time="2024-08-05T22:08:11.705692066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Aug 5 22:08:11.706567 containerd[1548]: time="2024-08-05T22:08:11.706531409Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:11.708764 containerd[1548]: time="2024-08-05T22:08:11.708718909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:11.709480 containerd[1548]: time="2024-08-05T22:08:11.709324925Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 1.255771316s" Aug 5 22:08:11.709480 containerd[1548]: time="2024-08-05T22:08:11.709356766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Aug 5 22:08:11.710476 containerd[1548]: time="2024-08-05T22:08:11.710445876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Aug 5 22:08:11.720206 containerd[1548]: time="2024-08-05T22:08:11.720165303Z" level=info msg="CreateContainer within sandbox 
\"3b26f469e81b2585297fd285bdea416c01ffc23e8b9461f2e19957132867c8cc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 5 22:08:11.752989 containerd[1548]: time="2024-08-05T22:08:11.752829440Z" level=info msg="CreateContainer within sandbox \"3b26f469e81b2585297fd285bdea416c01ffc23e8b9461f2e19957132867c8cc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"17d2861538e242faecf56b0965b90ebb278c4a9ee90385fbb62a8db6499c40ae\"" Aug 5 22:08:11.754168 containerd[1548]: time="2024-08-05T22:08:11.754137596Z" level=info msg="StartContainer for \"17d2861538e242faecf56b0965b90ebb278c4a9ee90385fbb62a8db6499c40ae\"" Aug 5 22:08:11.812363 containerd[1548]: time="2024-08-05T22:08:11.812315113Z" level=info msg="StartContainer for \"17d2861538e242faecf56b0965b90ebb278c4a9ee90385fbb62a8db6499c40ae\" returns successfully" Aug 5 22:08:12.422041 kubelet[2669]: E0805 22:08:12.421976 2669 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8gh6b" podUID="a2005850-2566-453e-9840-897b314819a1" Aug 5 22:08:12.488305 kubelet[2669]: E0805 22:08:12.488261 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:12.500562 kubelet[2669]: I0805 22:08:12.500527 2669 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6dbb57fdd5-zlhz2" podStartSLOduration=2.243084149 podCreationTimestamp="2024-08-05 22:08:09 +0000 UTC" firstStartedPulling="2024-08-05 22:08:10.452255852 +0000 UTC m=+21.119024857" lastFinishedPulling="2024-08-05 22:08:11.709662415 +0000 UTC m=+22.376431420" observedRunningTime="2024-08-05 22:08:12.499821414 +0000 UTC m=+23.166590419" 
watchObservedRunningTime="2024-08-05 22:08:12.500490712 +0000 UTC m=+23.167259717" Aug 5 22:08:12.546050 kubelet[2669]: E0805 22:08:12.545920 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.546050 kubelet[2669]: W0805 22:08:12.545943 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.546050 kubelet[2669]: E0805 22:08:12.545969 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:12.546535 kubelet[2669]: E0805 22:08:12.546423 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.546535 kubelet[2669]: W0805 22:08:12.546437 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.546535 kubelet[2669]: E0805 22:08:12.546451 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:12.546768 kubelet[2669]: E0805 22:08:12.546699 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.546898 kubelet[2669]: W0805 22:08:12.546802 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.546898 kubelet[2669]: E0805 22:08:12.546822 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:12.547035 kubelet[2669]: E0805 22:08:12.547024 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.547120 kubelet[2669]: W0805 22:08:12.547108 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.547181 kubelet[2669]: E0805 22:08:12.547172 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:12.548055 kubelet[2669]: E0805 22:08:12.547936 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.548055 kubelet[2669]: W0805 22:08:12.547953 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.548055 kubelet[2669]: E0805 22:08:12.547968 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:12.548743 kubelet[2669]: E0805 22:08:12.548357 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.548743 kubelet[2669]: W0805 22:08:12.548370 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.548743 kubelet[2669]: E0805 22:08:12.548384 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:12.550788 kubelet[2669]: E0805 22:08:12.549817 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.550788 kubelet[2669]: W0805 22:08:12.550777 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.550904 kubelet[2669]: E0805 22:08:12.550796 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:12.551168 kubelet[2669]: E0805 22:08:12.551039 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.551168 kubelet[2669]: W0805 22:08:12.551054 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.551168 kubelet[2669]: E0805 22:08:12.551080 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:12.551503 kubelet[2669]: E0805 22:08:12.551283 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.551503 kubelet[2669]: W0805 22:08:12.551294 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.551503 kubelet[2669]: E0805 22:08:12.551305 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:12.551993 kubelet[2669]: E0805 22:08:12.551869 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.551993 kubelet[2669]: W0805 22:08:12.551882 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.551993 kubelet[2669]: E0805 22:08:12.551901 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:12.552266 kubelet[2669]: E0805 22:08:12.552254 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.552375 kubelet[2669]: W0805 22:08:12.552319 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.552375 kubelet[2669]: E0805 22:08:12.552336 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:12.552700 kubelet[2669]: E0805 22:08:12.552666 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.553040 kubelet[2669]: W0805 22:08:12.552958 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.553040 kubelet[2669]: E0805 22:08:12.552983 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:12.553449 kubelet[2669]: E0805 22:08:12.553435 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.553580 kubelet[2669]: W0805 22:08:12.553512 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.553580 kubelet[2669]: E0805 22:08:12.553532 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:12.553902 kubelet[2669]: E0805 22:08:12.553833 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.553902 kubelet[2669]: W0805 22:08:12.553845 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.553902 kubelet[2669]: E0805 22:08:12.553860 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:12.554253 kubelet[2669]: E0805 22:08:12.554180 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.554253 kubelet[2669]: W0805 22:08:12.554192 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.554253 kubelet[2669]: E0805 22:08:12.554204 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:12.559850 kubelet[2669]: E0805 22:08:12.559817 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.560228 kubelet[2669]: W0805 22:08:12.560023 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.560228 kubelet[2669]: E0805 22:08:12.560054 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:12.560649 kubelet[2669]: E0805 22:08:12.560385 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.560649 kubelet[2669]: W0805 22:08:12.560427 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.560649 kubelet[2669]: E0805 22:08:12.560448 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:12.560753 kubelet[2669]: E0805 22:08:12.560709 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.560753 kubelet[2669]: W0805 22:08:12.560722 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.560753 kubelet[2669]: E0805 22:08:12.560737 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:12.561086 kubelet[2669]: E0805 22:08:12.560908 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.561086 kubelet[2669]: W0805 22:08:12.560919 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.561086 kubelet[2669]: E0805 22:08:12.560929 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:12.561547 kubelet[2669]: E0805 22:08:12.561513 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.561547 kubelet[2669]: W0805 22:08:12.561532 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.561755 kubelet[2669]: E0805 22:08:12.561654 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:12.562128 kubelet[2669]: E0805 22:08:12.562055 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.562387 kubelet[2669]: W0805 22:08:12.562215 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.562387 kubelet[2669]: E0805 22:08:12.562240 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:12.562814 kubelet[2669]: E0805 22:08:12.562785 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.562814 kubelet[2669]: W0805 22:08:12.562800 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.563100 kubelet[2669]: E0805 22:08:12.562946 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:12.563513 kubelet[2669]: E0805 22:08:12.563454 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.563676 kubelet[2669]: W0805 22:08:12.563593 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.563725 kubelet[2669]: E0805 22:08:12.563682 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:12.563999 kubelet[2669]: E0805 22:08:12.563924 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.563999 kubelet[2669]: W0805 22:08:12.563936 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.564136 kubelet[2669]: E0805 22:08:12.564031 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:12.564605 kubelet[2669]: E0805 22:08:12.564471 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.564605 kubelet[2669]: W0805 22:08:12.564490 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.564605 kubelet[2669]: E0805 22:08:12.564508 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:12.565092 kubelet[2669]: E0805 22:08:12.564942 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.565092 kubelet[2669]: W0805 22:08:12.564957 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.565092 kubelet[2669]: E0805 22:08:12.564981 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:12.565412 kubelet[2669]: E0805 22:08:12.565310 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.565412 kubelet[2669]: W0805 22:08:12.565322 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.565412 kubelet[2669]: E0805 22:08:12.565342 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:12.565943 kubelet[2669]: E0805 22:08:12.565885 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.565943 kubelet[2669]: W0805 22:08:12.565899 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.566232 kubelet[2669]: E0805 22:08:12.566100 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:12.566574 kubelet[2669]: E0805 22:08:12.566560 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.566841 kubelet[2669]: W0805 22:08:12.566727 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.566841 kubelet[2669]: E0805 22:08:12.566751 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:12.567115 kubelet[2669]: E0805 22:08:12.567101 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.567354 kubelet[2669]: W0805 22:08:12.567200 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.567447 kubelet[2669]: E0805 22:08:12.567433 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:12.567756 kubelet[2669]: E0805 22:08:12.567723 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.567756 kubelet[2669]: W0805 22:08:12.567737 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.567980 kubelet[2669]: E0805 22:08:12.567827 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:12.568151 kubelet[2669]: E0805 22:08:12.568138 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.568668 kubelet[2669]: W0805 22:08:12.568247 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.568668 kubelet[2669]: E0805 22:08:12.568269 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:08:12.569209 kubelet[2669]: E0805 22:08:12.569190 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:08:12.569367 kubelet[2669]: W0805 22:08:12.569342 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:08:12.569531 kubelet[2669]: E0805 22:08:12.569515 2669 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:08:12.636017 containerd[1548]: time="2024-08-05T22:08:12.635953955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:12.637152 containerd[1548]: time="2024-08-05T22:08:12.637102345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Aug 5 22:08:12.638198 containerd[1548]: time="2024-08-05T22:08:12.638140293Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:12.640227 containerd[1548]: time="2024-08-05T22:08:12.640192347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:12.642587 containerd[1548]: time="2024-08-05T22:08:12.641203813Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 930.714256ms" Aug 5 22:08:12.642587 containerd[1548]: time="2024-08-05T22:08:12.641244334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Aug 5 22:08:12.643261 containerd[1548]: time="2024-08-05T22:08:12.643231026Z" level=info msg="CreateContainer within sandbox \"52c8e0de2e7575aa15686ecd3e1b18634bf0b68a95c2349ab183dcc5d9c67884\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 5 22:08:12.655547 containerd[1548]: time="2024-08-05T22:08:12.655485109Z" level=info msg="CreateContainer within sandbox \"52c8e0de2e7575aa15686ecd3e1b18634bf0b68a95c2349ab183dcc5d9c67884\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"702c9f13c18ad0b7e70f66401d841ae4e6817ec43be687266517e8aac7978d57\"" Aug 5 22:08:12.656227 containerd[1548]: time="2024-08-05T22:08:12.656194447Z" level=info msg="StartContainer for \"702c9f13c18ad0b7e70f66401d841ae4e6817ec43be687266517e8aac7978d57\"" Aug 5 22:08:12.715172 containerd[1548]: time="2024-08-05T22:08:12.715030395Z" level=info msg="StartContainer for \"702c9f13c18ad0b7e70f66401d841ae4e6817ec43be687266517e8aac7978d57\" returns successfully" Aug 5 22:08:12.835952 containerd[1548]: time="2024-08-05T22:08:12.835870813Z" level=info msg="shim disconnected" id=702c9f13c18ad0b7e70f66401d841ae4e6817ec43be687266517e8aac7978d57 namespace=k8s.io Aug 5 22:08:12.835952 containerd[1548]: time="2024-08-05T22:08:12.835944215Z" level=warning msg="cleaning up after shim disconnected" id=702c9f13c18ad0b7e70f66401d841ae4e6817ec43be687266517e8aac7978d57 namespace=k8s.io Aug 5 22:08:12.835952 containerd[1548]: time="2024-08-05T22:08:12.835959336Z" level=info msg="cleaning up dead shim" 
namespace=k8s.io Aug 5 22:08:13.137989 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-702c9f13c18ad0b7e70f66401d841ae4e6817ec43be687266517e8aac7978d57-rootfs.mount: Deactivated successfully. Aug 5 22:08:13.490157 kubelet[2669]: I0805 22:08:13.490107 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:08:13.491414 kubelet[2669]: E0805 22:08:13.490958 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:13.491466 kubelet[2669]: E0805 22:08:13.491434 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:13.492966 containerd[1548]: time="2024-08-05T22:08:13.492700997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Aug 5 22:08:14.422267 kubelet[2669]: E0805 22:08:14.422225 2669 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8gh6b" podUID="a2005850-2566-453e-9840-897b314819a1" Aug 5 22:08:15.191492 kubelet[2669]: I0805 22:08:15.191444 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:08:15.192139 kubelet[2669]: E0805 22:08:15.192118 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:15.498877 kubelet[2669]: E0805 22:08:15.498360 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:15.578861 containerd[1548]: 
time="2024-08-05T22:08:15.578820293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:15.579718 containerd[1548]: time="2024-08-05T22:08:15.579610032Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Aug 5 22:08:15.580597 containerd[1548]: time="2024-08-05T22:08:15.580394810Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:15.582392 containerd[1548]: time="2024-08-05T22:08:15.582348375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:15.583974 containerd[1548]: time="2024-08-05T22:08:15.583091953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 2.090347794s" Aug 5 22:08:15.583974 containerd[1548]: time="2024-08-05T22:08:15.583121033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Aug 5 22:08:15.586207 containerd[1548]: time="2024-08-05T22:08:15.586180744Z" level=info msg="CreateContainer within sandbox \"52c8e0de2e7575aa15686ecd3e1b18634bf0b68a95c2349ab183dcc5d9c67884\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 5 22:08:15.597022 containerd[1548]: time="2024-08-05T22:08:15.596979956Z" level=info msg="CreateContainer within sandbox 
\"52c8e0de2e7575aa15686ecd3e1b18634bf0b68a95c2349ab183dcc5d9c67884\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5361271cdc1b82a2c7ecf5253d1a083441dd9971008a5c7ce8373eb759024e17\"" Aug 5 22:08:15.598177 containerd[1548]: time="2024-08-05T22:08:15.598157463Z" level=info msg="StartContainer for \"5361271cdc1b82a2c7ecf5253d1a083441dd9971008a5c7ce8373eb759024e17\"" Aug 5 22:08:15.620975 systemd[1]: run-containerd-runc-k8s.io-5361271cdc1b82a2c7ecf5253d1a083441dd9971008a5c7ce8373eb759024e17-runc.AB21tX.mount: Deactivated successfully. Aug 5 22:08:15.648870 containerd[1548]: time="2024-08-05T22:08:15.648823121Z" level=info msg="StartContainer for \"5361271cdc1b82a2c7ecf5253d1a083441dd9971008a5c7ce8373eb759024e17\" returns successfully" Aug 5 22:08:16.168949 containerd[1548]: time="2024-08-05T22:08:16.168882663Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 22:08:16.181991 kubelet[2669]: I0805 22:08:16.181801 2669 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Aug 5 22:08:16.192371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5361271cdc1b82a2c7ecf5253d1a083441dd9971008a5c7ce8373eb759024e17-rootfs.mount: Deactivated successfully. 
Aug 5 22:08:16.196406 containerd[1548]: time="2024-08-05T22:08:16.196347237Z" level=info msg="shim disconnected" id=5361271cdc1b82a2c7ecf5253d1a083441dd9971008a5c7ce8373eb759024e17 namespace=k8s.io Aug 5 22:08:16.196406 containerd[1548]: time="2024-08-05T22:08:16.196403878Z" level=warning msg="cleaning up after shim disconnected" id=5361271cdc1b82a2c7ecf5253d1a083441dd9971008a5c7ce8373eb759024e17 namespace=k8s.io Aug 5 22:08:16.196597 containerd[1548]: time="2024-08-05T22:08:16.196413318Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:08:16.206416 kubelet[2669]: I0805 22:08:16.206373 2669 topology_manager.go:215] "Topology Admit Handler" podUID="af2ff405-41a2-41af-be9d-df5d15c0abae" podNamespace="calico-system" podName="calico-kube-controllers-fb8459779-ndctr" Aug 5 22:08:16.212473 kubelet[2669]: I0805 22:08:16.212447 2669 topology_manager.go:215] "Topology Admit Handler" podUID="ed3c9326-5dc5-42ed-b57b-01a76d3c682c" podNamespace="kube-system" podName="coredns-5dd5756b68-swc99" Aug 5 22:08:16.212601 kubelet[2669]: I0805 22:08:16.212584 2669 topology_manager.go:215] "Topology Admit Handler" podUID="d1686794-670e-4b6f-b373-abdfd1581032" podNamespace="kube-system" podName="coredns-5dd5756b68-bg8wn" Aug 5 22:08:16.386230 kubelet[2669]: I0805 22:08:16.386188 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1686794-670e-4b6f-b373-abdfd1581032-config-volume\") pod \"coredns-5dd5756b68-bg8wn\" (UID: \"d1686794-670e-4b6f-b373-abdfd1581032\") " pod="kube-system/coredns-5dd5756b68-bg8wn" Aug 5 22:08:16.386230 kubelet[2669]: I0805 22:08:16.386241 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed3c9326-5dc5-42ed-b57b-01a76d3c682c-config-volume\") pod \"coredns-5dd5756b68-swc99\" (UID: \"ed3c9326-5dc5-42ed-b57b-01a76d3c682c\") " 
pod="kube-system/coredns-5dd5756b68-swc99" Aug 5 22:08:16.386399 kubelet[2669]: I0805 22:08:16.386270 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrmlq\" (UniqueName: \"kubernetes.io/projected/af2ff405-41a2-41af-be9d-df5d15c0abae-kube-api-access-lrmlq\") pod \"calico-kube-controllers-fb8459779-ndctr\" (UID: \"af2ff405-41a2-41af-be9d-df5d15c0abae\") " pod="calico-system/calico-kube-controllers-fb8459779-ndctr" Aug 5 22:08:16.386399 kubelet[2669]: I0805 22:08:16.386307 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpxmn\" (UniqueName: \"kubernetes.io/projected/ed3c9326-5dc5-42ed-b57b-01a76d3c682c-kube-api-access-wpxmn\") pod \"coredns-5dd5756b68-swc99\" (UID: \"ed3c9326-5dc5-42ed-b57b-01a76d3c682c\") " pod="kube-system/coredns-5dd5756b68-swc99" Aug 5 22:08:16.386399 kubelet[2669]: I0805 22:08:16.386330 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjmsv\" (UniqueName: \"kubernetes.io/projected/d1686794-670e-4b6f-b373-abdfd1581032-kube-api-access-xjmsv\") pod \"coredns-5dd5756b68-bg8wn\" (UID: \"d1686794-670e-4b6f-b373-abdfd1581032\") " pod="kube-system/coredns-5dd5756b68-bg8wn" Aug 5 22:08:16.386399 kubelet[2669]: I0805 22:08:16.386352 2669 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af2ff405-41a2-41af-be9d-df5d15c0abae-tigera-ca-bundle\") pod \"calico-kube-controllers-fb8459779-ndctr\" (UID: \"af2ff405-41a2-41af-be9d-df5d15c0abae\") " pod="calico-system/calico-kube-controllers-fb8459779-ndctr" Aug 5 22:08:16.424649 containerd[1548]: time="2024-08-05T22:08:16.424549939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8gh6b,Uid:a2005850-2566-453e-9840-897b314819a1,Namespace:calico-system,Attempt:0,}" Aug 5 
22:08:16.505578 kubelet[2669]: E0805 22:08:16.502675 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:16.506255 containerd[1548]: time="2024-08-05T22:08:16.505600311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Aug 5 22:08:16.513310 containerd[1548]: time="2024-08-05T22:08:16.513270323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fb8459779-ndctr,Uid:af2ff405-41a2-41af-be9d-df5d15c0abae,Namespace:calico-system,Attempt:0,}" Aug 5 22:08:16.517405 kubelet[2669]: E0805 22:08:16.515798 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:16.517515 containerd[1548]: time="2024-08-05T22:08:16.516830242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-bg8wn,Uid:d1686794-670e-4b6f-b373-abdfd1581032,Namespace:kube-system,Attempt:0,}" Aug 5 22:08:16.518737 kubelet[2669]: E0805 22:08:16.518659 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:16.519079 containerd[1548]: time="2024-08-05T22:08:16.518981130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-swc99,Uid:ed3c9326-5dc5-42ed-b57b-01a76d3c682c,Namespace:kube-system,Attempt:0,}" Aug 5 22:08:16.595091 containerd[1548]: time="2024-08-05T22:08:16.594650862Z" level=error msg="Failed to destroy network for sandbox \"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:08:16.595091 
containerd[1548]: time="2024-08-05T22:08:16.594969949Z" level=error msg="encountered an error cleaning up failed sandbox \"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:08:16.595091 containerd[1548]: time="2024-08-05T22:08:16.595013590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8gh6b,Uid:a2005850-2566-453e-9840-897b314819a1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:08:16.595482 kubelet[2669]: E0805 22:08:16.595245 2669 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:08:16.595482 kubelet[2669]: E0805 22:08:16.595297 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8gh6b" Aug 5 22:08:16.595482 kubelet[2669]: E0805 22:08:16.595316 2669 kuberuntime_manager.go:1171] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8gh6b" Aug 5 22:08:16.596219 kubelet[2669]: E0805 22:08:16.595361 2669 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8gh6b_calico-system(a2005850-2566-453e-9840-897b314819a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8gh6b_calico-system(a2005850-2566-453e-9840-897b314819a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8gh6b" podUID="a2005850-2566-453e-9840-897b314819a1" Aug 5 22:08:16.601234 containerd[1548]: time="2024-08-05T22:08:16.601173768Z" level=error msg="Failed to destroy network for sandbox \"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:08:16.604312 containerd[1548]: time="2024-08-05T22:08:16.604276957Z" level=error msg="encountered an error cleaning up failed sandbox \"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Aug 5 22:08:16.604378 containerd[1548]: time="2024-08-05T22:08:16.604333558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fb8459779-ndctr,Uid:af2ff405-41a2-41af-be9d-df5d15c0abae,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:08:16.604724 kubelet[2669]: E0805 22:08:16.604694 2669 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:08:16.604788 kubelet[2669]: E0805 22:08:16.604744 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-fb8459779-ndctr" Aug 5 22:08:16.604788 kubelet[2669]: E0805 22:08:16.604764 2669 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-fb8459779-ndctr" Aug 5 22:08:16.604846 kubelet[2669]: E0805 22:08:16.604806 2669 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-fb8459779-ndctr_calico-system(af2ff405-41a2-41af-be9d-df5d15c0abae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-fb8459779-ndctr_calico-system(af2ff405-41a2-41af-be9d-df5d15c0abae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fb8459779-ndctr" podUID="af2ff405-41a2-41af-be9d-df5d15c0abae" Aug 5 22:08:16.610727 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3-shm.mount: Deactivated successfully. 
Aug 5 22:08:16.613273 containerd[1548]: time="2024-08-05T22:08:16.613231837Z" level=error msg="Failed to destroy network for sandbox \"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:08:16.613750 containerd[1548]: time="2024-08-05T22:08:16.613723648Z" level=error msg="encountered an error cleaning up failed sandbox \"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:08:16.613874 containerd[1548]: time="2024-08-05T22:08:16.613851731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-bg8wn,Uid:d1686794-670e-4b6f-b373-abdfd1581032,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:08:16.615602 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17-shm.mount: Deactivated successfully. 
Aug 5 22:08:16.615978 kubelet[2669]: E0805 22:08:16.615956 2669 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:08:16.616041 kubelet[2669]: E0805 22:08:16.616004 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-bg8wn" Aug 5 22:08:16.616041 kubelet[2669]: E0805 22:08:16.616024 2669 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-bg8wn" Aug 5 22:08:16.616104 kubelet[2669]: E0805 22:08:16.616076 2669 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-bg8wn_kube-system(d1686794-670e-4b6f-b373-abdfd1581032)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-bg8wn_kube-system(d1686794-670e-4b6f-b373-abdfd1581032)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-bg8wn" podUID="d1686794-670e-4b6f-b373-abdfd1581032" Aug 5 22:08:16.631382 containerd[1548]: time="2024-08-05T22:08:16.631338522Z" level=error msg="Failed to destroy network for sandbox \"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:08:16.633258 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885-shm.mount: Deactivated successfully. Aug 5 22:08:16.633613 containerd[1548]: time="2024-08-05T22:08:16.633350767Z" level=error msg="encountered an error cleaning up failed sandbox \"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:08:16.633613 containerd[1548]: time="2024-08-05T22:08:16.633400768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-swc99,Uid:ed3c9326-5dc5-42ed-b57b-01a76d3c682c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:08:16.633679 kubelet[2669]: E0805 22:08:16.633624 2669 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:08:16.633679 kubelet[2669]: E0805 22:08:16.633671 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-swc99" Aug 5 22:08:16.633740 kubelet[2669]: E0805 22:08:16.633690 2669 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-swc99" Aug 5 22:08:16.633762 kubelet[2669]: E0805 22:08:16.633743 2669 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-swc99_kube-system(ed3c9326-5dc5-42ed-b57b-01a76d3c682c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-swc99_kube-system(ed3c9326-5dc5-42ed-b57b-01a76d3c682c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-swc99" 
podUID="ed3c9326-5dc5-42ed-b57b-01a76d3c682c" Aug 5 22:08:17.513016 kubelet[2669]: I0805 22:08:17.512986 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Aug 5 22:08:17.517951 kubelet[2669]: I0805 22:08:17.517181 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Aug 5 22:08:17.520843 containerd[1548]: time="2024-08-05T22:08:17.520058358Z" level=info msg="StopPodSandbox for \"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\"" Aug 5 22:08:17.520967 kubelet[2669]: I0805 22:08:17.520364 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" Aug 5 22:08:17.521365 containerd[1548]: time="2024-08-05T22:08:17.520158241Z" level=info msg="StopPodSandbox for \"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\"" Aug 5 22:08:17.521365 containerd[1548]: time="2024-08-05T22:08:17.521221583Z" level=info msg="StopPodSandbox for \"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\"" Aug 5 22:08:17.522637 kubelet[2669]: I0805 22:08:17.522621 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Aug 5 22:08:17.526097 containerd[1548]: time="2024-08-05T22:08:17.524999745Z" level=info msg="StopPodSandbox for \"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\"" Aug 5 22:08:17.529286 containerd[1548]: time="2024-08-05T22:08:17.529240956Z" level=info msg="Ensure that sandbox 1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17 in task-service has been cleanup successfully" Aug 5 22:08:17.531119 containerd[1548]: time="2024-08-05T22:08:17.530368140Z" level=info msg="Ensure that sandbox 
0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885 in task-service has been cleanup successfully" Aug 5 22:08:17.532077 containerd[1548]: time="2024-08-05T22:08:17.529251716Z" level=info msg="Ensure that sandbox 6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734 in task-service has been cleanup successfully" Aug 5 22:08:17.552072 containerd[1548]: time="2024-08-05T22:08:17.547922718Z" level=info msg="Ensure that sandbox 51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3 in task-service has been cleanup successfully" Aug 5 22:08:17.569685 containerd[1548]: time="2024-08-05T22:08:17.569632425Z" level=error msg="StopPodSandbox for \"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\" failed" error="failed to destroy network for sandbox \"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:08:17.570217 kubelet[2669]: E0805 22:08:17.570186 2669 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Aug 5 22:08:17.570307 kubelet[2669]: E0805 22:08:17.570270 2669 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885"} Aug 5 22:08:17.570307 kubelet[2669]: E0805 22:08:17.570307 2669 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"ed3c9326-5dc5-42ed-b57b-01a76d3c682c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:08:17.570383 kubelet[2669]: E0805 22:08:17.570335 2669 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ed3c9326-5dc5-42ed-b57b-01a76d3c682c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-swc99" podUID="ed3c9326-5dc5-42ed-b57b-01a76d3c682c" Aug 5 22:08:17.571971 containerd[1548]: time="2024-08-05T22:08:17.571935275Z" level=error msg="StopPodSandbox for \"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\" failed" error="failed to destroy network for sandbox \"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:08:17.572298 kubelet[2669]: E0805 22:08:17.572280 2669 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Aug 5 22:08:17.572365 kubelet[2669]: E0805 22:08:17.572304 2669 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17"} Aug 5 22:08:17.572365 kubelet[2669]: E0805 22:08:17.572331 2669 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d1686794-670e-4b6f-b373-abdfd1581032\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:08:17.572365 kubelet[2669]: E0805 22:08:17.572354 2669 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d1686794-670e-4b6f-b373-abdfd1581032\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-bg8wn" podUID="d1686794-670e-4b6f-b373-abdfd1581032" Aug 5 22:08:17.645046 containerd[1548]: time="2024-08-05T22:08:17.644999647Z" level=error msg="StopPodSandbox for \"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\" failed" error="failed to destroy network for sandbox \"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 
5 22:08:17.645737 kubelet[2669]: E0805 22:08:17.645690 2669 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" Aug 5 22:08:17.645737 kubelet[2669]: E0805 22:08:17.645740 2669 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734"} Aug 5 22:08:17.645843 kubelet[2669]: E0805 22:08:17.645775 2669 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a2005850-2566-453e-9840-897b314819a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:08:17.645843 kubelet[2669]: E0805 22:08:17.645807 2669 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a2005850-2566-453e-9840-897b314819a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8gh6b" podUID="a2005850-2566-453e-9840-897b314819a1" Aug 5 22:08:17.662786 
containerd[1548]: time="2024-08-05T22:08:17.662619066Z" level=error msg="StopPodSandbox for \"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\" failed" error="failed to destroy network for sandbox \"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:08:17.662959 kubelet[2669]: E0805 22:08:17.662932 2669 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Aug 5 22:08:17.663006 kubelet[2669]: E0805 22:08:17.662978 2669 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3"} Aug 5 22:08:17.663029 kubelet[2669]: E0805 22:08:17.663017 2669 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"af2ff405-41a2-41af-be9d-df5d15c0abae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:08:17.663105 kubelet[2669]: E0805 22:08:17.663048 2669 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"af2ff405-41a2-41af-be9d-df5d15c0abae\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-fb8459779-ndctr" podUID="af2ff405-41a2-41af-be9d-df5d15c0abae" Aug 5 22:08:19.336520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount875255972.mount: Deactivated successfully. Aug 5 22:08:19.409423 containerd[1548]: time="2024-08-05T22:08:19.409046206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:19.409805 containerd[1548]: time="2024-08-05T22:08:19.409536416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Aug 5 22:08:19.410440 containerd[1548]: time="2024-08-05T22:08:19.410393513Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:19.412648 containerd[1548]: time="2024-08-05T22:08:19.412594357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:19.413563 containerd[1548]: time="2024-08-05T22:08:19.413111687Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 2.907450655s" Aug 5 22:08:19.413563 
containerd[1548]: time="2024-08-05T22:08:19.413144248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Aug 5 22:08:19.420520 containerd[1548]: time="2024-08-05T22:08:19.420477914Z" level=info msg="CreateContainer within sandbox \"52c8e0de2e7575aa15686ecd3e1b18634bf0b68a95c2349ab183dcc5d9c67884\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 5 22:08:19.436511 containerd[1548]: time="2024-08-05T22:08:19.436408593Z" level=info msg="CreateContainer within sandbox \"52c8e0de2e7575aa15686ecd3e1b18634bf0b68a95c2349ab183dcc5d9c67884\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"db46178ac5a246cd927ccd9e1dead375c3580384eff43033ccb0ddbe091db603\"" Aug 5 22:08:19.436988 containerd[1548]: time="2024-08-05T22:08:19.436963204Z" level=info msg="StartContainer for \"db46178ac5a246cd927ccd9e1dead375c3580384eff43033ccb0ddbe091db603\"" Aug 5 22:08:19.504086 containerd[1548]: time="2024-08-05T22:08:19.504022665Z" level=info msg="StartContainer for \"db46178ac5a246cd927ccd9e1dead375c3580384eff43033ccb0ddbe091db603\" returns successfully" Aug 5 22:08:19.535040 kubelet[2669]: E0805 22:08:19.534190 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:19.555821 kubelet[2669]: I0805 22:08:19.555774 2669 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-jz2fc" podStartSLOduration=0.592731664 podCreationTimestamp="2024-08-05 22:08:10 +0000 UTC" firstStartedPulling="2024-08-05 22:08:10.452578541 +0000 UTC m=+21.119347546" lastFinishedPulling="2024-08-05 22:08:19.413321771 +0000 UTC m=+30.080090776" observedRunningTime="2024-08-05 22:08:19.551199089 +0000 UTC m=+30.217968134" watchObservedRunningTime="2024-08-05 22:08:19.553474894 +0000 UTC 
m=+30.220243899" Aug 5 22:08:19.669299 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 5 22:08:19.669430 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 5 22:08:20.536486 kubelet[2669]: I0805 22:08:20.536431 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:08:20.537993 kubelet[2669]: E0805 22:08:20.537864 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:21.187964 systemd-networkd[1244]: vxlan.calico: Link UP Aug 5 22:08:21.187972 systemd-networkd[1244]: vxlan.calico: Gained carrier Aug 5 22:08:22.475217 systemd-networkd[1244]: vxlan.calico: Gained IPv6LL Aug 5 22:08:25.896332 systemd[1]: Started sshd@7-10.0.0.66:22-10.0.0.1:35096.service - OpenSSH per-connection server daemon (10.0.0.1:35096). Aug 5 22:08:25.947498 sshd[3920]: Accepted publickey for core from 10.0.0.1 port 35096 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:08:25.948936 sshd[3920]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:08:25.952509 systemd-logind[1525]: New session 8 of user core. Aug 5 22:08:25.961274 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 5 22:08:26.083397 sshd[3920]: pam_unix(sshd:session): session closed for user core Aug 5 22:08:26.086540 systemd[1]: sshd@7-10.0.0.66:22-10.0.0.1:35096.service: Deactivated successfully. Aug 5 22:08:26.088589 systemd[1]: session-8.scope: Deactivated successfully. Aug 5 22:08:26.088593 systemd-logind[1525]: Session 8 logged out. Waiting for processes to exit. Aug 5 22:08:26.090025 systemd-logind[1525]: Removed session 8. 
Aug 5 22:08:30.422415 containerd[1548]: time="2024-08-05T22:08:30.422317158Z" level=info msg="StopPodSandbox for \"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\"" Aug 5 22:08:30.422415 containerd[1548]: time="2024-08-05T22:08:30.422396000Z" level=info msg="StopPodSandbox for \"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\"" Aug 5 22:08:30.671992 containerd[1548]: 2024-08-05 22:08:30.500 [INFO][3978] k8s.go 608: Cleaning up netns ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Aug 5 22:08:30.671992 containerd[1548]: 2024-08-05 22:08:30.500 [INFO][3978] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" iface="eth0" netns="/var/run/netns/cni-3ccfc632-a1d1-9752-2186-38c2ee62735b" Aug 5 22:08:30.671992 containerd[1548]: 2024-08-05 22:08:30.500 [INFO][3978] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" iface="eth0" netns="/var/run/netns/cni-3ccfc632-a1d1-9752-2186-38c2ee62735b" Aug 5 22:08:30.671992 containerd[1548]: 2024-08-05 22:08:30.501 [INFO][3978] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" iface="eth0" netns="/var/run/netns/cni-3ccfc632-a1d1-9752-2186-38c2ee62735b" Aug 5 22:08:30.671992 containerd[1548]: 2024-08-05 22:08:30.501 [INFO][3978] k8s.go 615: Releasing IP address(es) ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Aug 5 22:08:30.671992 containerd[1548]: 2024-08-05 22:08:30.501 [INFO][3978] utils.go 188: Calico CNI releasing IP address ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Aug 5 22:08:30.671992 containerd[1548]: 2024-08-05 22:08:30.656 [INFO][3992] ipam_plugin.go 411: Releasing address using handleID ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" HandleID="k8s-pod-network.51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Workload="localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0" Aug 5 22:08:30.671992 containerd[1548]: 2024-08-05 22:08:30.656 [INFO][3992] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:08:30.671992 containerd[1548]: 2024-08-05 22:08:30.656 [INFO][3992] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:08:30.671992 containerd[1548]: 2024-08-05 22:08:30.667 [WARNING][3992] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" HandleID="k8s-pod-network.51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Workload="localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0" Aug 5 22:08:30.671992 containerd[1548]: 2024-08-05 22:08:30.667 [INFO][3992] ipam_plugin.go 439: Releasing address using workloadID ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" HandleID="k8s-pod-network.51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Workload="localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0" Aug 5 22:08:30.671992 containerd[1548]: 2024-08-05 22:08:30.668 [INFO][3992] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:08:30.671992 containerd[1548]: 2024-08-05 22:08:30.670 [INFO][3978] k8s.go 621: Teardown processing complete. ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Aug 5 22:08:30.672523 containerd[1548]: time="2024-08-05T22:08:30.672136394Z" level=info msg="TearDown network for sandbox \"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\" successfully" Aug 5 22:08:30.672523 containerd[1548]: time="2024-08-05T22:08:30.672170394Z" level=info msg="StopPodSandbox for \"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\" returns successfully" Aug 5 22:08:30.674331 systemd[1]: run-netns-cni\x2d3ccfc632\x2da1d1\x2d9752\x2d2186\x2d38c2ee62735b.mount: Deactivated successfully. 
Aug 5 22:08:30.676304 containerd[1548]: time="2024-08-05T22:08:30.674927994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fb8459779-ndctr,Uid:af2ff405-41a2-41af-be9d-df5d15c0abae,Namespace:calico-system,Attempt:1,}" Aug 5 22:08:30.686780 containerd[1548]: 2024-08-05 22:08:30.505 [INFO][3977] k8s.go 608: Cleaning up netns ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Aug 5 22:08:30.686780 containerd[1548]: 2024-08-05 22:08:30.505 [INFO][3977] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" iface="eth0" netns="/var/run/netns/cni-428a30ee-ebf5-cd2b-3306-f5962f966c9d" Aug 5 22:08:30.686780 containerd[1548]: 2024-08-05 22:08:30.506 [INFO][3977] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" iface="eth0" netns="/var/run/netns/cni-428a30ee-ebf5-cd2b-3306-f5962f966c9d" Aug 5 22:08:30.686780 containerd[1548]: 2024-08-05 22:08:30.506 [INFO][3977] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" iface="eth0" netns="/var/run/netns/cni-428a30ee-ebf5-cd2b-3306-f5962f966c9d" Aug 5 22:08:30.686780 containerd[1548]: 2024-08-05 22:08:30.506 [INFO][3977] k8s.go 615: Releasing IP address(es) ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Aug 5 22:08:30.686780 containerd[1548]: 2024-08-05 22:08:30.506 [INFO][3977] utils.go 188: Calico CNI releasing IP address ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Aug 5 22:08:30.686780 containerd[1548]: 2024-08-05 22:08:30.656 [INFO][3993] ipam_plugin.go 411: Releasing address using handleID ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" HandleID="k8s-pod-network.1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Workload="localhost-k8s-coredns--5dd5756b68--bg8wn-eth0" Aug 5 22:08:30.686780 containerd[1548]: 2024-08-05 22:08:30.656 [INFO][3993] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:08:30.686780 containerd[1548]: 2024-08-05 22:08:30.668 [INFO][3993] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:08:30.686780 containerd[1548]: 2024-08-05 22:08:30.680 [WARNING][3993] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" HandleID="k8s-pod-network.1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Workload="localhost-k8s-coredns--5dd5756b68--bg8wn-eth0" Aug 5 22:08:30.686780 containerd[1548]: 2024-08-05 22:08:30.680 [INFO][3993] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" HandleID="k8s-pod-network.1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Workload="localhost-k8s-coredns--5dd5756b68--bg8wn-eth0" Aug 5 22:08:30.686780 containerd[1548]: 2024-08-05 22:08:30.682 [INFO][3993] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:08:30.686780 containerd[1548]: 2024-08-05 22:08:30.684 [INFO][3977] k8s.go 621: Teardown processing complete. ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Aug 5 22:08:30.687131 containerd[1548]: time="2024-08-05T22:08:30.687051088Z" level=info msg="TearDown network for sandbox \"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\" successfully" Aug 5 22:08:30.687131 containerd[1548]: time="2024-08-05T22:08:30.687091969Z" level=info msg="StopPodSandbox for \"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\" returns successfully" Aug 5 22:08:30.687387 kubelet[2669]: E0805 22:08:30.687364 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:30.687828 containerd[1548]: time="2024-08-05T22:08:30.687806899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-bg8wn,Uid:d1686794-670e-4b6f-b373-abdfd1581032,Namespace:kube-system,Attempt:1,}" Aug 5 22:08:30.690747 systemd[1]: run-netns-cni\x2d428a30ee\x2debf5\x2dcd2b\x2d3306\x2df5962f966c9d.mount: Deactivated successfully. 
Aug 5 22:08:30.810081 systemd-networkd[1244]: cali67ea92ce7eb: Link UP Aug 5 22:08:30.810283 systemd-networkd[1244]: cali67ea92ce7eb: Gained carrier Aug 5 22:08:30.822584 containerd[1548]: 2024-08-05 22:08:30.731 [INFO][4009] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0 calico-kube-controllers-fb8459779- calico-system af2ff405-41a2-41af-be9d-df5d15c0abae 756 0 2024-08-05 22:08:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:fb8459779 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-fb8459779-ndctr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali67ea92ce7eb [] []}} ContainerID="8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e" Namespace="calico-system" Pod="calico-kube-controllers-fb8459779-ndctr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fb8459779--ndctr-" Aug 5 22:08:30.822584 containerd[1548]: 2024-08-05 22:08:30.731 [INFO][4009] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e" Namespace="calico-system" Pod="calico-kube-controllers-fb8459779-ndctr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0" Aug 5 22:08:30.822584 containerd[1548]: 2024-08-05 22:08:30.758 [INFO][4035] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e" HandleID="k8s-pod-network.8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e" Workload="localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0" Aug 5 22:08:30.822584 containerd[1548]: 2024-08-05 22:08:30.775 [INFO][4035] 
ipam_plugin.go 264: Auto assigning IP ContainerID="8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e" HandleID="k8s-pod-network.8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e" Workload="localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dc60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-fb8459779-ndctr", "timestamp":"2024-08-05 22:08:30.758189192 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:08:30.822584 containerd[1548]: 2024-08-05 22:08:30.775 [INFO][4035] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:08:30.822584 containerd[1548]: 2024-08-05 22:08:30.775 [INFO][4035] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:08:30.822584 containerd[1548]: 2024-08-05 22:08:30.775 [INFO][4035] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:08:30.822584 containerd[1548]: 2024-08-05 22:08:30.778 [INFO][4035] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e" host="localhost" Aug 5 22:08:30.822584 containerd[1548]: 2024-08-05 22:08:30.786 [INFO][4035] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:08:30.822584 containerd[1548]: 2024-08-05 22:08:30.790 [INFO][4035] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:08:30.822584 containerd[1548]: 2024-08-05 22:08:30.791 [INFO][4035] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:08:30.822584 containerd[1548]: 2024-08-05 22:08:30.793 [INFO][4035] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:08:30.822584 containerd[1548]: 2024-08-05 22:08:30.794 [INFO][4035] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e" host="localhost" Aug 5 22:08:30.822584 containerd[1548]: 2024-08-05 22:08:30.795 [INFO][4035] ipam.go 1685: Creating new handle: k8s-pod-network.8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e Aug 5 22:08:30.822584 containerd[1548]: 2024-08-05 22:08:30.798 [INFO][4035] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e" host="localhost" Aug 5 22:08:30.822584 containerd[1548]: 2024-08-05 22:08:30.802 [INFO][4035] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e" host="localhost" Aug 5 
22:08:30.822584 containerd[1548]: 2024-08-05 22:08:30.803 [INFO][4035] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e" host="localhost" Aug 5 22:08:30.822584 containerd[1548]: 2024-08-05 22:08:30.803 [INFO][4035] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:08:30.822584 containerd[1548]: 2024-08-05 22:08:30.803 [INFO][4035] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e" HandleID="k8s-pod-network.8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e" Workload="localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0" Aug 5 22:08:30.823519 containerd[1548]: 2024-08-05 22:08:30.805 [INFO][4009] k8s.go 386: Populated endpoint ContainerID="8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e" Namespace="calico-system" Pod="calico-kube-controllers-fb8459779-ndctr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0", GenerateName:"calico-kube-controllers-fb8459779-", Namespace:"calico-system", SelfLink:"", UID:"af2ff405-41a2-41af-be9d-df5d15c0abae", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 8, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fb8459779", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-fb8459779-ndctr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali67ea92ce7eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:08:30.823519 containerd[1548]: 2024-08-05 22:08:30.805 [INFO][4009] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e" Namespace="calico-system" Pod="calico-kube-controllers-fb8459779-ndctr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0" Aug 5 22:08:30.823519 containerd[1548]: 2024-08-05 22:08:30.805 [INFO][4009] dataplane_linux.go 68: Setting the host side veth name to cali67ea92ce7eb ContainerID="8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e" Namespace="calico-system" Pod="calico-kube-controllers-fb8459779-ndctr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0" Aug 5 22:08:30.823519 containerd[1548]: 2024-08-05 22:08:30.810 [INFO][4009] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e" Namespace="calico-system" Pod="calico-kube-controllers-fb8459779-ndctr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0" Aug 5 22:08:30.823519 containerd[1548]: 2024-08-05 22:08:30.812 [INFO][4009] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e" Namespace="calico-system" 
Pod="calico-kube-controllers-fb8459779-ndctr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0", GenerateName:"calico-kube-controllers-fb8459779-", Namespace:"calico-system", SelfLink:"", UID:"af2ff405-41a2-41af-be9d-df5d15c0abae", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 8, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fb8459779", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e", Pod:"calico-kube-controllers-fb8459779-ndctr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali67ea92ce7eb", MAC:"46:99:e3:84:b8:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:08:30.823519 containerd[1548]: 2024-08-05 22:08:30.821 [INFO][4009] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e" Namespace="calico-system" Pod="calico-kube-controllers-fb8459779-ndctr" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0" Aug 5 22:08:30.845085 systemd-networkd[1244]: cali3faabd16240: Link UP Aug 5 22:08:30.845458 systemd-networkd[1244]: cali3faabd16240: Gained carrier Aug 5 22:08:30.854248 containerd[1548]: time="2024-08-05T22:08:30.854127012Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:08:30.854248 containerd[1548]: time="2024-08-05T22:08:30.854201734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:08:30.854428 containerd[1548]: time="2024-08-05T22:08:30.854221534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:08:30.854428 containerd[1548]: time="2024-08-05T22:08:30.854239134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:08:30.863858 containerd[1548]: 2024-08-05 22:08:30.740 [INFO][4020] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--bg8wn-eth0 coredns-5dd5756b68- kube-system d1686794-670e-4b6f-b373-abdfd1581032 757 0 2024-08-05 22:08:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-bg8wn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3faabd16240 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c" Namespace="kube-system" Pod="coredns-5dd5756b68-bg8wn" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--bg8wn-" Aug 5 22:08:30.863858 containerd[1548]: 2024-08-05 
22:08:30.740 [INFO][4020] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c" Namespace="kube-system" Pod="coredns-5dd5756b68-bg8wn" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--bg8wn-eth0" Aug 5 22:08:30.863858 containerd[1548]: 2024-08-05 22:08:30.767 [INFO][4040] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c" HandleID="k8s-pod-network.4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c" Workload="localhost-k8s-coredns--5dd5756b68--bg8wn-eth0" Aug 5 22:08:30.863858 containerd[1548]: 2024-08-05 22:08:30.778 [INFO][4040] ipam_plugin.go 264: Auto assigning IP ContainerID="4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c" HandleID="k8s-pod-network.4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c" Workload="localhost-k8s-coredns--5dd5756b68--bg8wn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027a380), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-bg8wn", "timestamp":"2024-08-05 22:08:30.767035239 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:08:30.863858 containerd[1548]: 2024-08-05 22:08:30.778 [INFO][4040] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:08:30.863858 containerd[1548]: 2024-08-05 22:08:30.803 [INFO][4040] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:08:30.863858 containerd[1548]: 2024-08-05 22:08:30.803 [INFO][4040] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:08:30.863858 containerd[1548]: 2024-08-05 22:08:30.805 [INFO][4040] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c" host="localhost" Aug 5 22:08:30.863858 containerd[1548]: 2024-08-05 22:08:30.809 [INFO][4040] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:08:30.863858 containerd[1548]: 2024-08-05 22:08:30.816 [INFO][4040] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:08:30.863858 containerd[1548]: 2024-08-05 22:08:30.818 [INFO][4040] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:08:30.863858 containerd[1548]: 2024-08-05 22:08:30.824 [INFO][4040] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:08:30.863858 containerd[1548]: 2024-08-05 22:08:30.824 [INFO][4040] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c" host="localhost" Aug 5 22:08:30.863858 containerd[1548]: 2024-08-05 22:08:30.828 [INFO][4040] ipam.go 1685: Creating new handle: k8s-pod-network.4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c Aug 5 22:08:30.863858 containerd[1548]: 2024-08-05 22:08:30.832 [INFO][4040] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c" host="localhost" Aug 5 22:08:30.863858 containerd[1548]: 2024-08-05 22:08:30.837 [INFO][4040] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c" host="localhost" Aug 5 
22:08:30.863858 containerd[1548]: 2024-08-05 22:08:30.837 [INFO][4040] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c" host="localhost" Aug 5 22:08:30.863858 containerd[1548]: 2024-08-05 22:08:30.837 [INFO][4040] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:08:30.863858 containerd[1548]: 2024-08-05 22:08:30.837 [INFO][4040] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c" HandleID="k8s-pod-network.4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c" Workload="localhost-k8s-coredns--5dd5756b68--bg8wn-eth0" Aug 5 22:08:30.865481 containerd[1548]: 2024-08-05 22:08:30.840 [INFO][4020] k8s.go 386: Populated endpoint ContainerID="4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c" Namespace="kube-system" Pod="coredns-5dd5756b68-bg8wn" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--bg8wn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--bg8wn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d1686794-670e-4b6f-b373-abdfd1581032", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-5dd5756b68-bg8wn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3faabd16240", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:08:30.865481 containerd[1548]: 2024-08-05 22:08:30.840 [INFO][4020] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c" Namespace="kube-system" Pod="coredns-5dd5756b68-bg8wn" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--bg8wn-eth0" Aug 5 22:08:30.865481 containerd[1548]: 2024-08-05 22:08:30.840 [INFO][4020] dataplane_linux.go 68: Setting the host side veth name to cali3faabd16240 ContainerID="4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c" Namespace="kube-system" Pod="coredns-5dd5756b68-bg8wn" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--bg8wn-eth0" Aug 5 22:08:30.865481 containerd[1548]: 2024-08-05 22:08:30.846 [INFO][4020] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c" Namespace="kube-system" Pod="coredns-5dd5756b68-bg8wn" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--bg8wn-eth0" Aug 5 22:08:30.865481 containerd[1548]: 2024-08-05 22:08:30.847 [INFO][4020] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c" Namespace="kube-system" Pod="coredns-5dd5756b68-bg8wn" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--bg8wn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--bg8wn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d1686794-670e-4b6f-b373-abdfd1581032", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c", Pod:"coredns-5dd5756b68-bg8wn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3faabd16240", MAC:"c6:a8:92:52:6b:cf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:08:30.865481 containerd[1548]: 2024-08-05 22:08:30.856 [INFO][4020] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c" Namespace="kube-system" Pod="coredns-5dd5756b68-bg8wn" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--bg8wn-eth0" Aug 5 22:08:30.893106 containerd[1548]: time="2024-08-05T22:08:30.891761514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:08:30.893106 containerd[1548]: time="2024-08-05T22:08:30.891839875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:08:30.893106 containerd[1548]: time="2024-08-05T22:08:30.891854435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:08:30.893106 containerd[1548]: time="2024-08-05T22:08:30.891864396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:08:30.912435 systemd-resolved[1451]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:08:30.916857 systemd-resolved[1451]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:08:30.937712 containerd[1548]: time="2024-08-05T22:08:30.937606014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-fb8459779-ndctr,Uid:af2ff405-41a2-41af-be9d-df5d15c0abae,Namespace:calico-system,Attempt:1,} returns sandbox id \"8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e\"" Aug 5 22:08:30.937712 containerd[1548]: time="2024-08-05T22:08:30.937618814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-bg8wn,Uid:d1686794-670e-4b6f-b373-abdfd1581032,Namespace:kube-system,Attempt:1,} returns sandbox id \"4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c\"" Aug 5 22:08:30.939100 kubelet[2669]: E0805 22:08:30.938393 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:30.940584 containerd[1548]: time="2024-08-05T22:08:30.940533736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Aug 5 22:08:30.943257 containerd[1548]: time="2024-08-05T22:08:30.943199574Z" level=info msg="CreateContainer within sandbox \"4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:08:30.956306 containerd[1548]: time="2024-08-05T22:08:30.956258082Z" level=info msg="CreateContainer within sandbox \"4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"65bb7ee335796e5d6c682337ed91fdf7f6754ddcf621836387bfb522945df196\"" Aug 5 
22:08:30.956853 containerd[1548]: time="2024-08-05T22:08:30.956804250Z" level=info msg="StartContainer for \"65bb7ee335796e5d6c682337ed91fdf7f6754ddcf621836387bfb522945df196\"" Aug 5 22:08:30.996324 containerd[1548]: time="2024-08-05T22:08:30.996273458Z" level=info msg="StartContainer for \"65bb7ee335796e5d6c682337ed91fdf7f6754ddcf621836387bfb522945df196\" returns successfully" Aug 5 22:08:31.093278 systemd[1]: Started sshd@8-10.0.0.66:22-10.0.0.1:35098.service - OpenSSH per-connection server daemon (10.0.0.1:35098). Aug 5 22:08:31.132765 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 35098 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:08:31.134161 sshd[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:08:31.137949 systemd-logind[1525]: New session 9 of user core. Aug 5 22:08:31.144375 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 5 22:08:31.277590 sshd[4198]: pam_unix(sshd:session): session closed for user core Aug 5 22:08:31.281197 systemd[1]: sshd@8-10.0.0.66:22-10.0.0.1:35098.service: Deactivated successfully. Aug 5 22:08:31.283114 systemd-logind[1525]: Session 9 logged out. Waiting for processes to exit. Aug 5 22:08:31.283271 systemd[1]: session-9.scope: Deactivated successfully. Aug 5 22:08:31.284443 systemd-logind[1525]: Removed session 9. Aug 5 22:08:31.422918 containerd[1548]: time="2024-08-05T22:08:31.422873694Z" level=info msg="StopPodSandbox for \"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\"" Aug 5 22:08:31.513262 containerd[1548]: 2024-08-05 22:08:31.472 [INFO][4243] k8s.go 608: Cleaning up netns ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" Aug 5 22:08:31.513262 containerd[1548]: 2024-08-05 22:08:31.472 [INFO][4243] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" iface="eth0" netns="/var/run/netns/cni-58bd2303-0aa8-cf42-249b-81614666796c" Aug 5 22:08:31.513262 containerd[1548]: 2024-08-05 22:08:31.474 [INFO][4243] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" iface="eth0" netns="/var/run/netns/cni-58bd2303-0aa8-cf42-249b-81614666796c" Aug 5 22:08:31.513262 containerd[1548]: 2024-08-05 22:08:31.475 [INFO][4243] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" iface="eth0" netns="/var/run/netns/cni-58bd2303-0aa8-cf42-249b-81614666796c" Aug 5 22:08:31.513262 containerd[1548]: 2024-08-05 22:08:31.475 [INFO][4243] k8s.go 615: Releasing IP address(es) ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" Aug 5 22:08:31.513262 containerd[1548]: 2024-08-05 22:08:31.475 [INFO][4243] utils.go 188: Calico CNI releasing IP address ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" Aug 5 22:08:31.513262 containerd[1548]: 2024-08-05 22:08:31.497 [INFO][4250] ipam_plugin.go 411: Releasing address using handleID ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" HandleID="k8s-pod-network.6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" Workload="localhost-k8s-csi--node--driver--8gh6b-eth0" Aug 5 22:08:31.513262 containerd[1548]: 2024-08-05 22:08:31.497 [INFO][4250] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:08:31.513262 containerd[1548]: 2024-08-05 22:08:31.497 [INFO][4250] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:08:31.513262 containerd[1548]: 2024-08-05 22:08:31.507 [WARNING][4250] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" HandleID="k8s-pod-network.6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" Workload="localhost-k8s-csi--node--driver--8gh6b-eth0" Aug 5 22:08:31.513262 containerd[1548]: 2024-08-05 22:08:31.507 [INFO][4250] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" HandleID="k8s-pod-network.6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" Workload="localhost-k8s-csi--node--driver--8gh6b-eth0" Aug 5 22:08:31.513262 containerd[1548]: 2024-08-05 22:08:31.509 [INFO][4250] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:08:31.513262 containerd[1548]: 2024-08-05 22:08:31.511 [INFO][4243] k8s.go 621: Teardown processing complete. ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" Aug 5 22:08:31.513777 containerd[1548]: time="2024-08-05T22:08:31.513359366Z" level=info msg="TearDown network for sandbox \"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\" successfully" Aug 5 22:08:31.513777 containerd[1548]: time="2024-08-05T22:08:31.513388646Z" level=info msg="StopPodSandbox for \"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\" returns successfully" Aug 5 22:08:31.514684 containerd[1548]: time="2024-08-05T22:08:31.514145857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8gh6b,Uid:a2005850-2566-453e-9840-897b314819a1,Namespace:calico-system,Attempt:1,}" Aug 5 22:08:31.566150 kubelet[2669]: E0805 22:08:31.565683 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:31.578377 kubelet[2669]: I0805 22:08:31.578339 2669 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-bg8wn" 
podStartSLOduration=27.578301678 podCreationTimestamp="2024-08-05 22:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:08:31.577639149 +0000 UTC m=+42.244408154" watchObservedRunningTime="2024-08-05 22:08:31.578301678 +0000 UTC m=+42.245070683" Aug 5 22:08:31.649208 systemd-networkd[1244]: cali0561a77cdcb: Link UP Aug 5 22:08:31.649994 systemd-networkd[1244]: cali0561a77cdcb: Gained carrier Aug 5 22:08:31.669797 containerd[1548]: 2024-08-05 22:08:31.570 [INFO][4259] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8gh6b-eth0 csi-node-driver- calico-system a2005850-2566-453e-9840-897b314819a1 774 0 2024-08-05 22:08:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-8gh6b eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali0561a77cdcb [] []}} ContainerID="03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1" Namespace="calico-system" Pod="csi-node-driver-8gh6b" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gh6b-" Aug 5 22:08:31.669797 containerd[1548]: 2024-08-05 22:08:31.570 [INFO][4259] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1" Namespace="calico-system" Pod="csi-node-driver-8gh6b" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gh6b-eth0" Aug 5 22:08:31.669797 containerd[1548]: 2024-08-05 22:08:31.606 [INFO][4272] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1" 
HandleID="k8s-pod-network.03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1" Workload="localhost-k8s-csi--node--driver--8gh6b-eth0" Aug 5 22:08:31.669797 containerd[1548]: 2024-08-05 22:08:31.618 [INFO][4272] ipam_plugin.go 264: Auto assigning IP ContainerID="03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1" HandleID="k8s-pod-network.03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1" Workload="localhost-k8s-csi--node--driver--8gh6b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e5ac0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8gh6b", "timestamp":"2024-08-05 22:08:31.60687716 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:08:31.669797 containerd[1548]: 2024-08-05 22:08:31.618 [INFO][4272] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:08:31.669797 containerd[1548]: 2024-08-05 22:08:31.619 [INFO][4272] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:08:31.669797 containerd[1548]: 2024-08-05 22:08:31.619 [INFO][4272] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:08:31.669797 containerd[1548]: 2024-08-05 22:08:31.622 [INFO][4272] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1" host="localhost" Aug 5 22:08:31.669797 containerd[1548]: 2024-08-05 22:08:31.626 [INFO][4272] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:08:31.669797 containerd[1548]: 2024-08-05 22:08:31.631 [INFO][4272] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:08:31.669797 containerd[1548]: 2024-08-05 22:08:31.632 [INFO][4272] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:08:31.669797 containerd[1548]: 2024-08-05 22:08:31.634 [INFO][4272] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:08:31.669797 containerd[1548]: 2024-08-05 22:08:31.634 [INFO][4272] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1" host="localhost" Aug 5 22:08:31.669797 containerd[1548]: 2024-08-05 22:08:31.636 [INFO][4272] ipam.go 1685: Creating new handle: k8s-pod-network.03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1 Aug 5 22:08:31.669797 containerd[1548]: 2024-08-05 22:08:31.639 [INFO][4272] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1" host="localhost" Aug 5 22:08:31.669797 containerd[1548]: 2024-08-05 22:08:31.644 [INFO][4272] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1" host="localhost" Aug 5 
22:08:31.669797 containerd[1548]: 2024-08-05 22:08:31.644 [INFO][4272] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1" host="localhost" Aug 5 22:08:31.669797 containerd[1548]: 2024-08-05 22:08:31.644 [INFO][4272] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:08:31.669797 containerd[1548]: 2024-08-05 22:08:31.644 [INFO][4272] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1" HandleID="k8s-pod-network.03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1" Workload="localhost-k8s-csi--node--driver--8gh6b-eth0" Aug 5 22:08:31.670430 containerd[1548]: 2024-08-05 22:08:31.646 [INFO][4259] k8s.go 386: Populated endpoint ContainerID="03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1" Namespace="calico-system" Pod="csi-node-driver-8gh6b" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gh6b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8gh6b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a2005850-2566-453e-9840-897b314819a1", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 8, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8gh6b", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0561a77cdcb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:08:31.670430 containerd[1548]: 2024-08-05 22:08:31.647 [INFO][4259] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1" Namespace="calico-system" Pod="csi-node-driver-8gh6b" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gh6b-eth0" Aug 5 22:08:31.670430 containerd[1548]: 2024-08-05 22:08:31.647 [INFO][4259] dataplane_linux.go 68: Setting the host side veth name to cali0561a77cdcb ContainerID="03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1" Namespace="calico-system" Pod="csi-node-driver-8gh6b" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gh6b-eth0" Aug 5 22:08:31.670430 containerd[1548]: 2024-08-05 22:08:31.649 [INFO][4259] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1" Namespace="calico-system" Pod="csi-node-driver-8gh6b" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gh6b-eth0" Aug 5 22:08:31.670430 containerd[1548]: 2024-08-05 22:08:31.650 [INFO][4259] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1" Namespace="calico-system" Pod="csi-node-driver-8gh6b" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gh6b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8gh6b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a2005850-2566-453e-9840-897b314819a1", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 8, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1", Pod:"csi-node-driver-8gh6b", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0561a77cdcb", MAC:"c6:85:40:b1:ad:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:08:31.670430 containerd[1548]: 2024-08-05 22:08:31.660 [INFO][4259] k8s.go 500: Wrote updated endpoint to datastore ContainerID="03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1" Namespace="calico-system" Pod="csi-node-driver-8gh6b" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gh6b-eth0" Aug 5 22:08:31.678756 systemd[1]: run-netns-cni\x2d58bd2303\x2d0aa8\x2dcf42\x2d249b\x2d81614666796c.mount: Deactivated successfully. Aug 5 22:08:31.689388 containerd[1548]: time="2024-08-05T22:08:31.689288958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:08:31.689388 containerd[1548]: time="2024-08-05T22:08:31.689349919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:08:31.689388 containerd[1548]: time="2024-08-05T22:08:31.689364399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:08:31.689388 containerd[1548]: time="2024-08-05T22:08:31.689373519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:08:31.715019 systemd-resolved[1451]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:08:31.724873 containerd[1548]: time="2024-08-05T22:08:31.724835257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8gh6b,Uid:a2005850-2566-453e-9840-897b314819a1,Namespace:calico-system,Attempt:1,} returns sandbox id \"03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1\"" Aug 5 22:08:32.219382 containerd[1548]: time="2024-08-05T22:08:32.219327136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:32.219866 containerd[1548]: time="2024-08-05T22:08:32.219840583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Aug 5 22:08:32.220930 containerd[1548]: time="2024-08-05T22:08:32.220902918Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:32.223549 containerd[1548]: time="2024-08-05T22:08:32.223511874Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:32.224516 containerd[1548]: time="2024-08-05T22:08:32.224388926Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 1.28381419s" Aug 5 22:08:32.224516 containerd[1548]: time="2024-08-05T22:08:32.224426846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Aug 5 22:08:32.226201 containerd[1548]: time="2024-08-05T22:08:32.225960547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Aug 5 22:08:32.232526 containerd[1548]: time="2024-08-05T22:08:32.232497357Z" level=info msg="CreateContainer within sandbox \"8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 5 22:08:32.243974 containerd[1548]: time="2024-08-05T22:08:32.243929434Z" level=info msg="CreateContainer within sandbox \"8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5df7d018a4a81767fba546551abb161b0331aec88c7de3754cae38ad5086303a\"" Aug 5 22:08:32.245162 containerd[1548]: time="2024-08-05T22:08:32.244980809Z" level=info msg="StartContainer for \"5df7d018a4a81767fba546551abb161b0331aec88c7de3754cae38ad5086303a\"" Aug 5 22:08:32.321110 containerd[1548]: time="2024-08-05T22:08:32.320760169Z" level=info msg="StartContainer for 
\"5df7d018a4a81767fba546551abb161b0331aec88c7de3754cae38ad5086303a\" returns successfully" Aug 5 22:08:32.331181 systemd-networkd[1244]: cali3faabd16240: Gained IPv6LL Aug 5 22:08:32.425632 containerd[1548]: time="2024-08-05T22:08:32.425185523Z" level=info msg="StopPodSandbox for \"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\"" Aug 5 22:08:32.550612 containerd[1548]: 2024-08-05 22:08:32.492 [INFO][4398] k8s.go 608: Cleaning up netns ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Aug 5 22:08:32.550612 containerd[1548]: 2024-08-05 22:08:32.492 [INFO][4398] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" iface="eth0" netns="/var/run/netns/cni-8911701d-ea3e-7a7d-7354-e04b79c5c7c8" Aug 5 22:08:32.550612 containerd[1548]: 2024-08-05 22:08:32.493 [INFO][4398] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" iface="eth0" netns="/var/run/netns/cni-8911701d-ea3e-7a7d-7354-e04b79c5c7c8" Aug 5 22:08:32.550612 containerd[1548]: 2024-08-05 22:08:32.493 [INFO][4398] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" iface="eth0" netns="/var/run/netns/cni-8911701d-ea3e-7a7d-7354-e04b79c5c7c8" Aug 5 22:08:32.550612 containerd[1548]: 2024-08-05 22:08:32.493 [INFO][4398] k8s.go 615: Releasing IP address(es) ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Aug 5 22:08:32.550612 containerd[1548]: 2024-08-05 22:08:32.493 [INFO][4398] utils.go 188: Calico CNI releasing IP address ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Aug 5 22:08:32.550612 containerd[1548]: 2024-08-05 22:08:32.533 [INFO][4406] ipam_plugin.go 411: Releasing address using handleID ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" HandleID="k8s-pod-network.0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Workload="localhost-k8s-coredns--5dd5756b68--swc99-eth0" Aug 5 22:08:32.550612 containerd[1548]: 2024-08-05 22:08:32.533 [INFO][4406] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:08:32.550612 containerd[1548]: 2024-08-05 22:08:32.533 [INFO][4406] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:08:32.550612 containerd[1548]: 2024-08-05 22:08:32.543 [WARNING][4406] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" HandleID="k8s-pod-network.0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Workload="localhost-k8s-coredns--5dd5756b68--swc99-eth0" Aug 5 22:08:32.550612 containerd[1548]: 2024-08-05 22:08:32.543 [INFO][4406] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" HandleID="k8s-pod-network.0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Workload="localhost-k8s-coredns--5dd5756b68--swc99-eth0" Aug 5 22:08:32.550612 containerd[1548]: 2024-08-05 22:08:32.545 [INFO][4406] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:08:32.550612 containerd[1548]: 2024-08-05 22:08:32.548 [INFO][4398] k8s.go 621: Teardown processing complete. ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Aug 5 22:08:32.551569 containerd[1548]: time="2024-08-05T22:08:32.551117093Z" level=info msg="TearDown network for sandbox \"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\" successfully" Aug 5 22:08:32.551569 containerd[1548]: time="2024-08-05T22:08:32.551156454Z" level=info msg="StopPodSandbox for \"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\" returns successfully" Aug 5 22:08:32.552316 kubelet[2669]: E0805 22:08:32.552292 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:32.554009 containerd[1548]: time="2024-08-05T22:08:32.553973412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-swc99,Uid:ed3c9326-5dc5-42ed-b57b-01a76d3c682c,Namespace:kube-system,Attempt:1,}" Aug 5 22:08:32.578258 kubelet[2669]: E0805 22:08:32.576910 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:32.583398 kubelet[2669]: I0805 22:08:32.583373 2669 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-fb8459779-ndctr" podStartSLOduration=21.298751015 podCreationTimestamp="2024-08-05 22:08:10 +0000 UTC" firstStartedPulling="2024-08-05 22:08:30.940247252 +0000 UTC m=+41.607016257" lastFinishedPulling="2024-08-05 22:08:32.224818572 +0000 UTC m=+42.891587577" observedRunningTime="2024-08-05 22:08:32.582728047 +0000 UTC m=+43.249497052" watchObservedRunningTime="2024-08-05 22:08:32.583322335 +0000 UTC m=+43.250091340" Aug 5 22:08:32.589236 systemd-networkd[1244]: cali67ea92ce7eb: Gained IPv6LL Aug 5 22:08:32.677346 systemd[1]: run-netns-cni\x2d8911701d\x2dea3e\x2d7a7d\x2d7354\x2de04b79c5c7c8.mount: Deactivated successfully. Aug 5 22:08:32.728659 systemd-networkd[1244]: cali35125f6cd0a: Link UP Aug 5 22:08:32.729206 systemd-networkd[1244]: cali35125f6cd0a: Gained carrier Aug 5 22:08:32.746168 containerd[1548]: 2024-08-05 22:08:32.640 [INFO][4418] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--swc99-eth0 coredns-5dd5756b68- kube-system ed3c9326-5dc5-42ed-b57b-01a76d3c682c 801 0 2024-08-05 22:08:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-swc99 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali35125f6cd0a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908" Namespace="kube-system" Pod="coredns-5dd5756b68-swc99" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--swc99-" Aug 5 22:08:32.746168 containerd[1548]: 2024-08-05 22:08:32.640 [INFO][4418] k8s.go 77: Extracted identifiers for 
CmdAddK8s ContainerID="124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908" Namespace="kube-system" Pod="coredns-5dd5756b68-swc99" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--swc99-eth0" Aug 5 22:08:32.746168 containerd[1548]: 2024-08-05 22:08:32.679 [INFO][4442] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908" HandleID="k8s-pod-network.124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908" Workload="localhost-k8s-coredns--5dd5756b68--swc99-eth0" Aug 5 22:08:32.746168 containerd[1548]: 2024-08-05 22:08:32.692 [INFO][4442] ipam_plugin.go 264: Auto assigning IP ContainerID="124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908" HandleID="k8s-pod-network.124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908" Workload="localhost-k8s-coredns--5dd5756b68--swc99-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000128580), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-swc99", "timestamp":"2024-08-05 22:08:32.67975834 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:08:32.746168 containerd[1548]: 2024-08-05 22:08:32.692 [INFO][4442] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:08:32.746168 containerd[1548]: 2024-08-05 22:08:32.692 [INFO][4442] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:08:32.746168 containerd[1548]: 2024-08-05 22:08:32.692 [INFO][4442] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 5 22:08:32.746168 containerd[1548]: 2024-08-05 22:08:32.694 [INFO][4442] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908" host="localhost" Aug 5 22:08:32.746168 containerd[1548]: 2024-08-05 22:08:32.700 [INFO][4442] ipam.go 372: Looking up existing affinities for host host="localhost" Aug 5 22:08:32.746168 containerd[1548]: 2024-08-05 22:08:32.705 [INFO][4442] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Aug 5 22:08:32.746168 containerd[1548]: 2024-08-05 22:08:32.707 [INFO][4442] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 5 22:08:32.746168 containerd[1548]: 2024-08-05 22:08:32.709 [INFO][4442] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 5 22:08:32.746168 containerd[1548]: 2024-08-05 22:08:32.709 [INFO][4442] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908" host="localhost" Aug 5 22:08:32.746168 containerd[1548]: 2024-08-05 22:08:32.711 [INFO][4442] ipam.go 1685: Creating new handle: k8s-pod-network.124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908 Aug 5 22:08:32.746168 containerd[1548]: 2024-08-05 22:08:32.715 [INFO][4442] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908" host="localhost" Aug 5 22:08:32.746168 containerd[1548]: 2024-08-05 22:08:32.721 [INFO][4442] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908" host="localhost" Aug 5 
22:08:32.746168 containerd[1548]: 2024-08-05 22:08:32.721 [INFO][4442] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908" host="localhost" Aug 5 22:08:32.746168 containerd[1548]: 2024-08-05 22:08:32.721 [INFO][4442] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:08:32.746168 containerd[1548]: 2024-08-05 22:08:32.721 [INFO][4442] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908" HandleID="k8s-pod-network.124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908" Workload="localhost-k8s-coredns--5dd5756b68--swc99-eth0" Aug 5 22:08:32.746685 containerd[1548]: 2024-08-05 22:08:32.724 [INFO][4418] k8s.go 386: Populated endpoint ContainerID="124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908" Namespace="kube-system" Pod="coredns-5dd5756b68-swc99" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--swc99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--swc99-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ed3c9326-5dc5-42ed-b57b-01a76d3c682c", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-5dd5756b68-swc99", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35125f6cd0a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:08:32.746685 containerd[1548]: 2024-08-05 22:08:32.724 [INFO][4418] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908" Namespace="kube-system" Pod="coredns-5dd5756b68-swc99" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--swc99-eth0" Aug 5 22:08:32.746685 containerd[1548]: 2024-08-05 22:08:32.724 [INFO][4418] dataplane_linux.go 68: Setting the host side veth name to cali35125f6cd0a ContainerID="124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908" Namespace="kube-system" Pod="coredns-5dd5756b68-swc99" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--swc99-eth0" Aug 5 22:08:32.746685 containerd[1548]: 2024-08-05 22:08:32.729 [INFO][4418] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908" Namespace="kube-system" Pod="coredns-5dd5756b68-swc99" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--swc99-eth0" Aug 5 22:08:32.746685 containerd[1548]: 2024-08-05 22:08:32.729 [INFO][4418] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908" Namespace="kube-system" Pod="coredns-5dd5756b68-swc99" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--swc99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--swc99-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ed3c9326-5dc5-42ed-b57b-01a76d3c682c", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908", Pod:"coredns-5dd5756b68-swc99", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35125f6cd0a", MAC:"26:53:ab:4d:9d:a5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:08:32.746685 containerd[1548]: 2024-08-05 22:08:32.742 [INFO][4418] k8s.go 500: Wrote updated endpoint to datastore ContainerID="124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908" Namespace="kube-system" Pod="coredns-5dd5756b68-swc99" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--swc99-eth0" Aug 5 22:08:32.769954 containerd[1548]: time="2024-08-05T22:08:32.769637374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:08:32.769954 containerd[1548]: time="2024-08-05T22:08:32.769695975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:08:32.769954 containerd[1548]: time="2024-08-05T22:08:32.769734175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:08:32.769954 containerd[1548]: time="2024-08-05T22:08:32.769748496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:08:32.796777 systemd-resolved[1451]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 5 22:08:32.818191 containerd[1548]: time="2024-08-05T22:08:32.814988917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-swc99,Uid:ed3c9326-5dc5-42ed-b57b-01a76d3c682c,Namespace:kube-system,Attempt:1,} returns sandbox id \"124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908\"" Aug 5 22:08:32.818317 kubelet[2669]: E0805 22:08:32.815659 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:32.819879 containerd[1548]: time="2024-08-05T22:08:32.819830023Z" level=info msg="CreateContainer within sandbox \"124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:08:32.846462 containerd[1548]: time="2024-08-05T22:08:32.846418549Z" level=info msg="CreateContainer within sandbox \"124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"492aeef8b7d68b5ba37631df38679017e7f99cfb8d25d8c3f0d9c30dd4c09126\"" Aug 5 22:08:32.847568 containerd[1548]: time="2024-08-05T22:08:32.846920875Z" level=info msg="StartContainer for \"492aeef8b7d68b5ba37631df38679017e7f99cfb8d25d8c3f0d9c30dd4c09126\"" Aug 5 22:08:32.889954 containerd[1548]: time="2024-08-05T22:08:32.889912506Z" level=info msg="StartContainer for \"492aeef8b7d68b5ba37631df38679017e7f99cfb8d25d8c3f0d9c30dd4c09126\" returns successfully" Aug 5 22:08:33.230817 containerd[1548]: time="2024-08-05T22:08:33.230766159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:33.231517 containerd[1548]: 
time="2024-08-05T22:08:33.231333966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Aug 5 22:08:33.232476 containerd[1548]: time="2024-08-05T22:08:33.232419141Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:33.234838 containerd[1548]: time="2024-08-05T22:08:33.234537729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:33.235321 containerd[1548]: time="2024-08-05T22:08:33.235288619Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.009294071s" Aug 5 22:08:33.235321 containerd[1548]: time="2024-08-05T22:08:33.235319300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Aug 5 22:08:33.238953 containerd[1548]: time="2024-08-05T22:08:33.238920268Z" level=info msg="CreateContainer within sandbox \"03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 5 22:08:33.248872 containerd[1548]: time="2024-08-05T22:08:33.248797401Z" level=info msg="CreateContainer within sandbox \"03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4c8ed8f816193c7b9544608aa94c94242ab465bbe1c9de849c9152fc5171004c\"" Aug 5 22:08:33.250424 containerd[1548]: 
time="2024-08-05T22:08:33.249341928Z" level=info msg="StartContainer for \"4c8ed8f816193c7b9544608aa94c94242ab465bbe1c9de849c9152fc5171004c\"" Aug 5 22:08:33.299678 containerd[1548]: time="2024-08-05T22:08:33.299639964Z" level=info msg="StartContainer for \"4c8ed8f816193c7b9544608aa94c94242ab465bbe1c9de849c9152fc5171004c\" returns successfully" Aug 5 22:08:33.302250 containerd[1548]: time="2024-08-05T22:08:33.302219759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Aug 5 22:08:33.582706 kubelet[2669]: E0805 22:08:33.582567 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:33.587015 kubelet[2669]: E0805 22:08:33.586981 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:33.594901 kubelet[2669]: I0805 22:08:33.594603 2669 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-swc99" podStartSLOduration=29.594565687 podCreationTimestamp="2024-08-05 22:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:08:33.5926063 +0000 UTC m=+44.259375305" watchObservedRunningTime="2024-08-05 22:08:33.594565687 +0000 UTC m=+44.261334652" Aug 5 22:08:33.677083 systemd-networkd[1244]: cali0561a77cdcb: Gained IPv6LL Aug 5 22:08:33.948259 kubelet[2669]: I0805 22:08:33.948216 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:08:33.949441 kubelet[2669]: E0805 22:08:33.949308 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:33.995514 
systemd-networkd[1244]: cali35125f6cd0a: Gained IPv6LL Aug 5 22:08:34.272493 containerd[1548]: time="2024-08-05T22:08:34.272336837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:34.273968 containerd[1548]: time="2024-08-05T22:08:34.273448932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Aug 5 22:08:34.274341 containerd[1548]: time="2024-08-05T22:08:34.274291303Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:34.276625 containerd[1548]: time="2024-08-05T22:08:34.276376170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:08:34.277242 containerd[1548]: time="2024-08-05T22:08:34.277124900Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 974.744419ms" Aug 5 22:08:34.277242 containerd[1548]: time="2024-08-05T22:08:34.277160381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Aug 5 22:08:34.279889 containerd[1548]: time="2024-08-05T22:08:34.279854096Z" level=info msg="CreateContainer within sandbox \"03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1\" for 
container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 5 22:08:34.292781 containerd[1548]: time="2024-08-05T22:08:34.292645744Z" level=info msg="CreateContainer within sandbox \"03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cb127093fdb349484a386e61e725087a4bf09946f0b1ac23a4cb6d7038b86558\"" Aug 5 22:08:34.293156 containerd[1548]: time="2024-08-05T22:08:34.293131111Z" level=info msg="StartContainer for \"cb127093fdb349484a386e61e725087a4bf09946f0b1ac23a4cb6d7038b86558\"" Aug 5 22:08:34.350688 containerd[1548]: time="2024-08-05T22:08:34.350632107Z" level=info msg="StartContainer for \"cb127093fdb349484a386e61e725087a4bf09946f0b1ac23a4cb6d7038b86558\" returns successfully" Aug 5 22:08:34.519474 kubelet[2669]: I0805 22:08:34.519318 2669 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 5 22:08:34.519474 kubelet[2669]: I0805 22:08:34.519357 2669 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 5 22:08:34.592030 kubelet[2669]: E0805 22:08:34.591923 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:34.592397 kubelet[2669]: E0805 22:08:34.592176 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:34.602247 kubelet[2669]: I0805 22:08:34.601438 2669 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-8gh6b" podStartSLOduration=22.050165658 podCreationTimestamp="2024-08-05 22:08:10 +0000 UTC" 
firstStartedPulling="2024-08-05 22:08:31.726176276 +0000 UTC m=+42.392945281" lastFinishedPulling="2024-08-05 22:08:34.277414904 +0000 UTC m=+44.944183869" observedRunningTime="2024-08-05 22:08:34.601272365 +0000 UTC m=+45.268041490" watchObservedRunningTime="2024-08-05 22:08:34.601404246 +0000 UTC m=+45.268173211" Aug 5 22:08:35.592877 kubelet[2669]: E0805 22:08:35.592799 2669 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 5 22:08:36.293433 systemd[1]: Started sshd@9-10.0.0.66:22-10.0.0.1:35330.service - OpenSSH per-connection server daemon (10.0.0.1:35330). Aug 5 22:08:36.351035 sshd[4680]: Accepted publickey for core from 10.0.0.1 port 35330 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:08:36.352485 sshd[4680]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:08:36.356912 systemd-logind[1525]: New session 10 of user core. Aug 5 22:08:36.361314 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 5 22:08:36.484551 sshd[4680]: pam_unix(sshd:session): session closed for user core Aug 5 22:08:36.492315 systemd[1]: Started sshd@10-10.0.0.66:22-10.0.0.1:35334.service - OpenSSH per-connection server daemon (10.0.0.1:35334). Aug 5 22:08:36.492690 systemd[1]: sshd@9-10.0.0.66:22-10.0.0.1:35330.service: Deactivated successfully. Aug 5 22:08:36.495280 systemd[1]: session-10.scope: Deactivated successfully. Aug 5 22:08:36.496517 systemd-logind[1525]: Session 10 logged out. Waiting for processes to exit. Aug 5 22:08:36.498544 systemd-logind[1525]: Removed session 10. 
Aug 5 22:08:36.529057 sshd[4695]: Accepted publickey for core from 10.0.0.1 port 35334 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:08:36.530309 sshd[4695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:08:36.534676 systemd-logind[1525]: New session 11 of user core. Aug 5 22:08:36.545372 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 5 22:08:36.784382 sshd[4695]: pam_unix(sshd:session): session closed for user core Aug 5 22:08:36.793359 systemd[1]: Started sshd@11-10.0.0.66:22-10.0.0.1:35342.service - OpenSSH per-connection server daemon (10.0.0.1:35342). Aug 5 22:08:36.795610 systemd[1]: sshd@10-10.0.0.66:22-10.0.0.1:35334.service: Deactivated successfully. Aug 5 22:08:36.804800 systemd-logind[1525]: Session 11 logged out. Waiting for processes to exit. Aug 5 22:08:36.804893 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 22:08:36.806787 systemd-logind[1525]: Removed session 11. Aug 5 22:08:36.843161 sshd[4709]: Accepted publickey for core from 10.0.0.1 port 35342 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:08:36.844408 sshd[4709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:08:36.848815 systemd-logind[1525]: New session 12 of user core. Aug 5 22:08:36.860305 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 5 22:08:36.983902 sshd[4709]: pam_unix(sshd:session): session closed for user core Aug 5 22:08:36.987324 systemd[1]: sshd@11-10.0.0.66:22-10.0.0.1:35342.service: Deactivated successfully. Aug 5 22:08:36.989424 systemd-logind[1525]: Session 12 logged out. Waiting for processes to exit. Aug 5 22:08:36.989517 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 22:08:36.990503 systemd-logind[1525]: Removed session 12. Aug 5 22:08:41.996343 systemd[1]: Started sshd@12-10.0.0.66:22-10.0.0.1:58938.service - OpenSSH per-connection server daemon (10.0.0.1:58938). 
Aug 5 22:08:42.035414 sshd[4741]: Accepted publickey for core from 10.0.0.1 port 58938 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:08:42.036088 sshd[4741]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:08:42.040148 systemd-logind[1525]: New session 13 of user core. Aug 5 22:08:42.050443 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 5 22:08:42.157788 sshd[4741]: pam_unix(sshd:session): session closed for user core Aug 5 22:08:42.165364 systemd[1]: Started sshd@13-10.0.0.66:22-10.0.0.1:58950.service - OpenSSH per-connection server daemon (10.0.0.1:58950). Aug 5 22:08:42.165781 systemd[1]: sshd@12-10.0.0.66:22-10.0.0.1:58938.service: Deactivated successfully. Aug 5 22:08:42.170214 systemd[1]: session-13.scope: Deactivated successfully. Aug 5 22:08:42.171665 systemd-logind[1525]: Session 13 logged out. Waiting for processes to exit. Aug 5 22:08:42.174246 systemd-logind[1525]: Removed session 13. Aug 5 22:08:42.211093 sshd[4753]: Accepted publickey for core from 10.0.0.1 port 58950 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:08:42.212945 sshd[4753]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:08:42.219691 systemd-logind[1525]: New session 14 of user core. Aug 5 22:08:42.226364 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 22:08:42.463512 sshd[4753]: pam_unix(sshd:session): session closed for user core Aug 5 22:08:42.471311 systemd[1]: Started sshd@14-10.0.0.66:22-10.0.0.1:58966.service - OpenSSH per-connection server daemon (10.0.0.1:58966). Aug 5 22:08:42.473619 systemd-logind[1525]: Session 14 logged out. Waiting for processes to exit. Aug 5 22:08:42.474267 systemd[1]: sshd@13-10.0.0.66:22-10.0.0.1:58950.service: Deactivated successfully. Aug 5 22:08:42.475999 systemd[1]: session-14.scope: Deactivated successfully. Aug 5 22:08:42.476876 systemd-logind[1525]: Removed session 14. 
Aug 5 22:08:42.509116 sshd[4766]: Accepted publickey for core from 10.0.0.1 port 58966 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:08:42.510266 sshd[4766]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:08:42.520180 systemd-logind[1525]: New session 15 of user core. Aug 5 22:08:42.525363 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 5 22:08:43.274681 sshd[4766]: pam_unix(sshd:session): session closed for user core Aug 5 22:08:43.286549 systemd[1]: Started sshd@15-10.0.0.66:22-10.0.0.1:58976.service - OpenSSH per-connection server daemon (10.0.0.1:58976). Aug 5 22:08:43.290525 systemd[1]: sshd@14-10.0.0.66:22-10.0.0.1:58966.service: Deactivated successfully. Aug 5 22:08:43.294503 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 22:08:43.297117 systemd-logind[1525]: Session 15 logged out. Waiting for processes to exit. Aug 5 22:08:43.298473 systemd-logind[1525]: Removed session 15. Aug 5 22:08:43.335858 sshd[4787]: Accepted publickey for core from 10.0.0.1 port 58976 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:08:43.337171 sshd[4787]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:08:43.341359 systemd-logind[1525]: New session 16 of user core. Aug 5 22:08:43.359351 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 22:08:43.646486 sshd[4787]: pam_unix(sshd:session): session closed for user core Aug 5 22:08:43.651673 systemd[1]: Started sshd@16-10.0.0.66:22-10.0.0.1:58984.service - OpenSSH per-connection server daemon (10.0.0.1:58984). Aug 5 22:08:43.652111 systemd[1]: sshd@15-10.0.0.66:22-10.0.0.1:58976.service: Deactivated successfully. Aug 5 22:08:43.657844 systemd-logind[1525]: Session 16 logged out. Waiting for processes to exit. Aug 5 22:08:43.660317 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 22:08:43.662550 systemd-logind[1525]: Removed session 16. 
Aug 5 22:08:43.697551 sshd[4803]: Accepted publickey for core from 10.0.0.1 port 58984 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:08:43.698920 sshd[4803]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:08:43.702752 systemd-logind[1525]: New session 17 of user core. Aug 5 22:08:43.715302 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 5 22:08:43.836985 sshd[4803]: pam_unix(sshd:session): session closed for user core Aug 5 22:08:43.840751 systemd[1]: sshd@16-10.0.0.66:22-10.0.0.1:58984.service: Deactivated successfully. Aug 5 22:08:43.843121 systemd-logind[1525]: Session 17 logged out. Waiting for processes to exit. Aug 5 22:08:43.843465 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 22:08:43.844930 systemd-logind[1525]: Removed session 17. Aug 5 22:08:48.854354 systemd[1]: Started sshd@17-10.0.0.66:22-10.0.0.1:59000.service - OpenSSH per-connection server daemon (10.0.0.1:59000). Aug 5 22:08:48.887017 sshd[4844]: Accepted publickey for core from 10.0.0.1 port 59000 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE Aug 5 22:08:48.888186 sshd[4844]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:08:48.892266 systemd-logind[1525]: New session 18 of user core. Aug 5 22:08:48.907336 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 5 22:08:49.039326 sshd[4844]: pam_unix(sshd:session): session closed for user core Aug 5 22:08:49.042296 systemd[1]: sshd@17-10.0.0.66:22-10.0.0.1:59000.service: Deactivated successfully. Aug 5 22:08:49.046562 systemd[1]: session-18.scope: Deactivated successfully. Aug 5 22:08:49.048868 systemd-logind[1525]: Session 18 logged out. Waiting for processes to exit. Aug 5 22:08:49.050852 systemd-logind[1525]: Removed session 18. 
Aug 5 22:08:49.409368 containerd[1548]: time="2024-08-05T22:08:49.409314882Z" level=info msg="StopPodSandbox for \"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\"" Aug 5 22:08:49.518718 containerd[1548]: 2024-08-05 22:08:49.458 [WARNING][4874] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--swc99-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ed3c9326-5dc5-42ed-b57b-01a76d3c682c", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908", Pod:"coredns-5dd5756b68-swc99", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35125f6cd0a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:08:49.518718 containerd[1548]: 2024-08-05 22:08:49.458 [INFO][4874] k8s.go 608: Cleaning up netns ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Aug 5 22:08:49.518718 containerd[1548]: 2024-08-05 22:08:49.458 [INFO][4874] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" iface="eth0" netns="" Aug 5 22:08:49.518718 containerd[1548]: 2024-08-05 22:08:49.463 [INFO][4874] k8s.go 615: Releasing IP address(es) ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Aug 5 22:08:49.518718 containerd[1548]: 2024-08-05 22:08:49.463 [INFO][4874] utils.go 188: Calico CNI releasing IP address ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Aug 5 22:08:49.518718 containerd[1548]: 2024-08-05 22:08:49.504 [INFO][4883] ipam_plugin.go 411: Releasing address using handleID ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" HandleID="k8s-pod-network.0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Workload="localhost-k8s-coredns--5dd5756b68--swc99-eth0" Aug 5 22:08:49.518718 containerd[1548]: 2024-08-05 22:08:49.504 [INFO][4883] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:08:49.518718 containerd[1548]: 2024-08-05 22:08:49.504 [INFO][4883] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:08:49.518718 containerd[1548]: 2024-08-05 22:08:49.513 [WARNING][4883] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" HandleID="k8s-pod-network.0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Workload="localhost-k8s-coredns--5dd5756b68--swc99-eth0" Aug 5 22:08:49.518718 containerd[1548]: 2024-08-05 22:08:49.513 [INFO][4883] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" HandleID="k8s-pod-network.0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Workload="localhost-k8s-coredns--5dd5756b68--swc99-eth0" Aug 5 22:08:49.518718 containerd[1548]: 2024-08-05 22:08:49.514 [INFO][4883] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:08:49.518718 containerd[1548]: 2024-08-05 22:08:49.516 [INFO][4874] k8s.go 621: Teardown processing complete. ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Aug 5 22:08:49.519423 containerd[1548]: time="2024-08-05T22:08:49.519269723Z" level=info msg="TearDown network for sandbox \"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\" successfully" Aug 5 22:08:49.519423 containerd[1548]: time="2024-08-05T22:08:49.519302243Z" level=info msg="StopPodSandbox for \"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\" returns successfully" Aug 5 22:08:49.519939 containerd[1548]: time="2024-08-05T22:08:49.519901929Z" level=info msg="RemovePodSandbox for \"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\"" Aug 5 22:08:49.525995 containerd[1548]: time="2024-08-05T22:08:49.521504746Z" level=info msg="Forcibly stopping sandbox \"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\"" Aug 5 22:08:49.604360 containerd[1548]: 2024-08-05 22:08:49.565 [WARNING][4905] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--swc99-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ed3c9326-5dc5-42ed-b57b-01a76d3c682c", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"124a66e5b8b239acc11697e0e93884a64e545eb6d24bf0380ffa5f70eeae4908", Pod:"coredns-5dd5756b68-swc99", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35125f6cd0a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:08:49.604360 containerd[1548]: 2024-08-05 22:08:49.566 [INFO][4905] k8s.go 608: Cleaning up netns 
ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Aug 5 22:08:49.604360 containerd[1548]: 2024-08-05 22:08:49.566 [INFO][4905] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" iface="eth0" netns="" Aug 5 22:08:49.604360 containerd[1548]: 2024-08-05 22:08:49.566 [INFO][4905] k8s.go 615: Releasing IP address(es) ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Aug 5 22:08:49.604360 containerd[1548]: 2024-08-05 22:08:49.566 [INFO][4905] utils.go 188: Calico CNI releasing IP address ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Aug 5 22:08:49.604360 containerd[1548]: 2024-08-05 22:08:49.590 [INFO][4913] ipam_plugin.go 411: Releasing address using handleID ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" HandleID="k8s-pod-network.0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Workload="localhost-k8s-coredns--5dd5756b68--swc99-eth0" Aug 5 22:08:49.604360 containerd[1548]: 2024-08-05 22:08:49.590 [INFO][4913] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:08:49.604360 containerd[1548]: 2024-08-05 22:08:49.590 [INFO][4913] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:08:49.604360 containerd[1548]: 2024-08-05 22:08:49.598 [WARNING][4913] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" HandleID="k8s-pod-network.0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Workload="localhost-k8s-coredns--5dd5756b68--swc99-eth0" Aug 5 22:08:49.604360 containerd[1548]: 2024-08-05 22:08:49.598 [INFO][4913] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" HandleID="k8s-pod-network.0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Workload="localhost-k8s-coredns--5dd5756b68--swc99-eth0" Aug 5 22:08:49.604360 containerd[1548]: 2024-08-05 22:08:49.600 [INFO][4913] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:08:49.604360 containerd[1548]: 2024-08-05 22:08:49.602 [INFO][4905] k8s.go 621: Teardown processing complete. ContainerID="0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885" Aug 5 22:08:49.604743 containerd[1548]: time="2024-08-05T22:08:49.604397381Z" level=info msg="TearDown network for sandbox \"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\" successfully" Aug 5 22:08:49.620807 containerd[1548]: time="2024-08-05T22:08:49.620745954Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:08:49.620935 containerd[1548]: time="2024-08-05T22:08:49.620855155Z" level=info msg="RemovePodSandbox \"0d84846eb88d9554e71031ecba58d21bd4a31a5e546277d48757a88487290885\" returns successfully" Aug 5 22:08:49.622075 containerd[1548]: time="2024-08-05T22:08:49.621546042Z" level=info msg="StopPodSandbox for \"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\"" Aug 5 22:08:49.701972 containerd[1548]: 2024-08-05 22:08:49.661 [WARNING][4936] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0", GenerateName:"calico-kube-controllers-fb8459779-", Namespace:"calico-system", SelfLink:"", UID:"af2ff405-41a2-41af-be9d-df5d15c0abae", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 8, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fb8459779", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e", Pod:"calico-kube-controllers-fb8459779-ndctr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali67ea92ce7eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:08:49.701972 containerd[1548]: 2024-08-05 22:08:49.661 [INFO][4936] k8s.go 608: Cleaning up netns ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Aug 5 22:08:49.701972 containerd[1548]: 2024-08-05 22:08:49.661 [INFO][4936] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" iface="eth0" netns="" Aug 5 22:08:49.701972 containerd[1548]: 2024-08-05 22:08:49.661 [INFO][4936] k8s.go 615: Releasing IP address(es) ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Aug 5 22:08:49.701972 containerd[1548]: 2024-08-05 22:08:49.661 [INFO][4936] utils.go 188: Calico CNI releasing IP address ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Aug 5 22:08:49.701972 containerd[1548]: 2024-08-05 22:08:49.687 [INFO][4944] ipam_plugin.go 411: Releasing address using handleID ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" HandleID="k8s-pod-network.51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Workload="localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0" Aug 5 22:08:49.701972 containerd[1548]: 2024-08-05 22:08:49.687 [INFO][4944] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:08:49.701972 containerd[1548]: 2024-08-05 22:08:49.687 [INFO][4944] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:08:49.701972 containerd[1548]: 2024-08-05 22:08:49.697 [WARNING][4944] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" HandleID="k8s-pod-network.51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Workload="localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0" Aug 5 22:08:49.701972 containerd[1548]: 2024-08-05 22:08:49.697 [INFO][4944] ipam_plugin.go 439: Releasing address using workloadID ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" HandleID="k8s-pod-network.51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Workload="localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0" Aug 5 22:08:49.701972 containerd[1548]: 2024-08-05 22:08:49.698 [INFO][4944] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:08:49.701972 containerd[1548]: 2024-08-05 22:08:49.700 [INFO][4936] k8s.go 621: Teardown processing complete. ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Aug 5 22:08:49.701972 containerd[1548]: time="2024-08-05T22:08:49.701945691Z" level=info msg="TearDown network for sandbox \"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\" successfully" Aug 5 22:08:49.701972 containerd[1548]: time="2024-08-05T22:08:49.701971211Z" level=info msg="StopPodSandbox for \"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\" returns successfully" Aug 5 22:08:49.703185 containerd[1548]: time="2024-08-05T22:08:49.702496417Z" level=info msg="RemovePodSandbox for \"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\"" Aug 5 22:08:49.703185 containerd[1548]: time="2024-08-05T22:08:49.702522977Z" level=info msg="Forcibly stopping sandbox \"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\"" Aug 5 22:08:49.763911 containerd[1548]: 2024-08-05 22:08:49.734 [WARNING][4966] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0", GenerateName:"calico-kube-controllers-fb8459779-", Namespace:"calico-system", SelfLink:"", UID:"af2ff405-41a2-41af-be9d-df5d15c0abae", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 8, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"fb8459779", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8558cc732ee3e219875bf4408faa9d7fac321e4e081868b292080f4ecb37fc4e", Pod:"calico-kube-controllers-fb8459779-ndctr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali67ea92ce7eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:08:49.763911 containerd[1548]: 2024-08-05 22:08:49.734 [INFO][4966] k8s.go 608: Cleaning up netns ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Aug 5 22:08:49.763911 containerd[1548]: 2024-08-05 22:08:49.734 [INFO][4966] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" iface="eth0" netns="" Aug 5 22:08:49.763911 containerd[1548]: 2024-08-05 22:08:49.734 [INFO][4966] k8s.go 615: Releasing IP address(es) ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Aug 5 22:08:49.763911 containerd[1548]: 2024-08-05 22:08:49.734 [INFO][4966] utils.go 188: Calico CNI releasing IP address ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Aug 5 22:08:49.763911 containerd[1548]: 2024-08-05 22:08:49.751 [INFO][4973] ipam_plugin.go 411: Releasing address using handleID ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" HandleID="k8s-pod-network.51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Workload="localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0" Aug 5 22:08:49.763911 containerd[1548]: 2024-08-05 22:08:49.751 [INFO][4973] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:08:49.763911 containerd[1548]: 2024-08-05 22:08:49.751 [INFO][4973] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:08:49.763911 containerd[1548]: 2024-08-05 22:08:49.759 [WARNING][4973] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" HandleID="k8s-pod-network.51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Workload="localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0" Aug 5 22:08:49.763911 containerd[1548]: 2024-08-05 22:08:49.759 [INFO][4973] ipam_plugin.go 439: Releasing address using workloadID ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" HandleID="k8s-pod-network.51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Workload="localhost-k8s-calico--kube--controllers--fb8459779--ndctr-eth0" Aug 5 22:08:49.763911 containerd[1548]: 2024-08-05 22:08:49.760 [INFO][4973] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:08:49.763911 containerd[1548]: 2024-08-05 22:08:49.762 [INFO][4966] k8s.go 621: Teardown processing complete. ContainerID="51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3" Aug 5 22:08:49.764327 containerd[1548]: time="2024-08-05T22:08:49.763940146Z" level=info msg="TearDown network for sandbox \"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\" successfully" Aug 5 22:08:49.766612 containerd[1548]: time="2024-08-05T22:08:49.766545133Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:08:49.766649 containerd[1548]: time="2024-08-05T22:08:49.766636774Z" level=info msg="RemovePodSandbox \"51108d91570c272db6a04a27f08ec4e056899d5991a2f2f888b78530b92343f3\" returns successfully" Aug 5 22:08:49.767131 containerd[1548]: time="2024-08-05T22:08:49.767105059Z" level=info msg="StopPodSandbox for \"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\"" Aug 5 22:08:49.832396 containerd[1548]: 2024-08-05 22:08:49.800 [WARNING][4995] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--bg8wn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d1686794-670e-4b6f-b373-abdfd1581032", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c", Pod:"coredns-5dd5756b68-bg8wn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3faabd16240", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:08:49.832396 containerd[1548]: 2024-08-05 22:08:49.800 [INFO][4995] k8s.go 608: Cleaning up netns ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Aug 5 22:08:49.832396 containerd[1548]: 2024-08-05 22:08:49.800 [INFO][4995] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" iface="eth0" netns="" Aug 5 22:08:49.832396 containerd[1548]: 2024-08-05 22:08:49.800 [INFO][4995] k8s.go 615: Releasing IP address(es) ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Aug 5 22:08:49.832396 containerd[1548]: 2024-08-05 22:08:49.800 [INFO][4995] utils.go 188: Calico CNI releasing IP address ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Aug 5 22:08:49.832396 containerd[1548]: 2024-08-05 22:08:49.819 [INFO][5002] ipam_plugin.go 411: Releasing address using handleID ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" HandleID="k8s-pod-network.1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Workload="localhost-k8s-coredns--5dd5756b68--bg8wn-eth0" Aug 5 22:08:49.832396 containerd[1548]: 2024-08-05 22:08:49.819 [INFO][5002] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:08:49.832396 containerd[1548]: 2024-08-05 22:08:49.819 [INFO][5002] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:08:49.832396 containerd[1548]: 2024-08-05 22:08:49.827 [WARNING][5002] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" HandleID="k8s-pod-network.1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Workload="localhost-k8s-coredns--5dd5756b68--bg8wn-eth0" Aug 5 22:08:49.832396 containerd[1548]: 2024-08-05 22:08:49.827 [INFO][5002] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" HandleID="k8s-pod-network.1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Workload="localhost-k8s-coredns--5dd5756b68--bg8wn-eth0" Aug 5 22:08:49.832396 containerd[1548]: 2024-08-05 22:08:49.829 [INFO][5002] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:08:49.832396 containerd[1548]: 2024-08-05 22:08:49.831 [INFO][4995] k8s.go 621: Teardown processing complete. ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Aug 5 22:08:49.832798 containerd[1548]: time="2024-08-05T22:08:49.832426429Z" level=info msg="TearDown network for sandbox \"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\" successfully" Aug 5 22:08:49.832798 containerd[1548]: time="2024-08-05T22:08:49.832450829Z" level=info msg="StopPodSandbox for \"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\" returns successfully" Aug 5 22:08:49.833260 containerd[1548]: time="2024-08-05T22:08:49.833233237Z" level=info msg="RemovePodSandbox for \"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\"" Aug 5 22:08:49.833328 containerd[1548]: time="2024-08-05T22:08:49.833285398Z" level=info msg="Forcibly stopping sandbox \"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\"" Aug 5 22:08:49.896188 containerd[1548]: 2024-08-05 22:08:49.865 [WARNING][5025] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--bg8wn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"d1686794-670e-4b6f-b373-abdfd1581032", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 8, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4dec22f492d1f460d542f6064262f8d051c4dfb723ede2f18eaae131cfed546c", Pod:"coredns-5dd5756b68-bg8wn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3faabd16240", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:08:49.896188 containerd[1548]: 2024-08-05 22:08:49.866 [INFO][5025] k8s.go 608: 
Cleaning up netns ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Aug 5 22:08:49.896188 containerd[1548]: 2024-08-05 22:08:49.866 [INFO][5025] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" iface="eth0" netns="" Aug 5 22:08:49.896188 containerd[1548]: 2024-08-05 22:08:49.866 [INFO][5025] k8s.go 615: Releasing IP address(es) ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Aug 5 22:08:49.896188 containerd[1548]: 2024-08-05 22:08:49.866 [INFO][5025] utils.go 188: Calico CNI releasing IP address ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Aug 5 22:08:49.896188 containerd[1548]: 2024-08-05 22:08:49.883 [INFO][5033] ipam_plugin.go 411: Releasing address using handleID ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" HandleID="k8s-pod-network.1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Workload="localhost-k8s-coredns--5dd5756b68--bg8wn-eth0" Aug 5 22:08:49.896188 containerd[1548]: 2024-08-05 22:08:49.883 [INFO][5033] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:08:49.896188 containerd[1548]: 2024-08-05 22:08:49.883 [INFO][5033] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:08:49.896188 containerd[1548]: 2024-08-05 22:08:49.891 [WARNING][5033] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" HandleID="k8s-pod-network.1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Workload="localhost-k8s-coredns--5dd5756b68--bg8wn-eth0" Aug 5 22:08:49.896188 containerd[1548]: 2024-08-05 22:08:49.891 [INFO][5033] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" HandleID="k8s-pod-network.1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Workload="localhost-k8s-coredns--5dd5756b68--bg8wn-eth0" Aug 5 22:08:49.896188 containerd[1548]: 2024-08-05 22:08:49.893 [INFO][5033] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:08:49.896188 containerd[1548]: 2024-08-05 22:08:49.894 [INFO][5025] k8s.go 621: Teardown processing complete. ContainerID="1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17" Aug 5 22:08:49.896572 containerd[1548]: time="2024-08-05T22:08:49.896219142Z" level=info msg="TearDown network for sandbox \"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\" successfully" Aug 5 22:08:49.898573 containerd[1548]: time="2024-08-05T22:08:49.898539286Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:08:49.898623 containerd[1548]: time="2024-08-05T22:08:49.898602127Z" level=info msg="RemovePodSandbox \"1f7b2f2fbbedc9544a5090f4f321d760fa0ed7118c9358fcf30b6e54e63c1c17\" returns successfully" Aug 5 22:08:49.899097 containerd[1548]: time="2024-08-05T22:08:49.899072732Z" level=info msg="StopPodSandbox for \"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\"" Aug 5 22:08:49.963819 containerd[1548]: 2024-08-05 22:08:49.933 [WARNING][5056] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8gh6b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a2005850-2566-453e-9840-897b314819a1", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 8, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1", Pod:"csi-node-driver-8gh6b", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali0561a77cdcb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:08:49.963819 containerd[1548]: 2024-08-05 22:08:49.934 [INFO][5056] k8s.go 608: Cleaning up netns ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734"
Aug 5 22:08:49.963819 containerd[1548]: 2024-08-05 22:08:49.934 [INFO][5056] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" iface="eth0" netns=""
Aug 5 22:08:49.963819 containerd[1548]: 2024-08-05 22:08:49.934 [INFO][5056] k8s.go 615: Releasing IP address(es) ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734"
Aug 5 22:08:49.963819 containerd[1548]: 2024-08-05 22:08:49.934 [INFO][5056] utils.go 188: Calico CNI releasing IP address ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734"
Aug 5 22:08:49.963819 containerd[1548]: 2024-08-05 22:08:49.951 [INFO][5065] ipam_plugin.go 411: Releasing address using handleID ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" HandleID="k8s-pod-network.6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" Workload="localhost-k8s-csi--node--driver--8gh6b-eth0"
Aug 5 22:08:49.963819 containerd[1548]: 2024-08-05 22:08:49.951 [INFO][5065] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:08:49.963819 containerd[1548]: 2024-08-05 22:08:49.951 [INFO][5065] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:08:49.963819 containerd[1548]: 2024-08-05 22:08:49.959 [WARNING][5065] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" HandleID="k8s-pod-network.6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" Workload="localhost-k8s-csi--node--driver--8gh6b-eth0"
Aug 5 22:08:49.963819 containerd[1548]: 2024-08-05 22:08:49.959 [INFO][5065] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" HandleID="k8s-pod-network.6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" Workload="localhost-k8s-csi--node--driver--8gh6b-eth0"
Aug 5 22:08:49.963819 containerd[1548]: 2024-08-05 22:08:49.960 [INFO][5065] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:08:49.963819 containerd[1548]: 2024-08-05 22:08:49.962 [INFO][5056] k8s.go 621: Teardown processing complete. ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734"
Aug 5 22:08:49.963819 containerd[1548]: time="2024-08-05T22:08:49.963642414Z" level=info msg="TearDown network for sandbox \"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\" successfully"
Aug 5 22:08:49.963819 containerd[1548]: time="2024-08-05T22:08:49.963666214Z" level=info msg="StopPodSandbox for \"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\" returns successfully"
Aug 5 22:08:49.965814 containerd[1548]: time="2024-08-05T22:08:49.965489113Z" level=info msg="RemovePodSandbox for \"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\""
Aug 5 22:08:49.965814 containerd[1548]: time="2024-08-05T22:08:49.965528874Z" level=info msg="Forcibly stopping sandbox \"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\""
Aug 5 22:08:50.028434 containerd[1548]: 2024-08-05 22:08:49.997 [WARNING][5088] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8gh6b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a2005850-2566-453e-9840-897b314819a1", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 8, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"03fd3db1c0eee5a7a291e0d635944df6b6d46a69accb5110b0c95eeccadb84b1", Pod:"csi-node-driver-8gh6b", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali0561a77cdcb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:08:50.028434 containerd[1548]: 2024-08-05 22:08:49.998 [INFO][5088] k8s.go 608: Cleaning up netns ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734"
Aug 5 22:08:50.028434 containerd[1548]: 2024-08-05 22:08:49.998 [INFO][5088] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" iface="eth0" netns=""
Aug 5 22:08:50.028434 containerd[1548]: 2024-08-05 22:08:49.998 [INFO][5088] k8s.go 615: Releasing IP address(es) ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734"
Aug 5 22:08:50.028434 containerd[1548]: 2024-08-05 22:08:49.998 [INFO][5088] utils.go 188: Calico CNI releasing IP address ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734"
Aug 5 22:08:50.028434 containerd[1548]: 2024-08-05 22:08:50.014 [INFO][5095] ipam_plugin.go 411: Releasing address using handleID ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" HandleID="k8s-pod-network.6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" Workload="localhost-k8s-csi--node--driver--8gh6b-eth0"
Aug 5 22:08:50.028434 containerd[1548]: 2024-08-05 22:08:50.014 [INFO][5095] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:08:50.028434 containerd[1548]: 2024-08-05 22:08:50.014 [INFO][5095] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:08:50.028434 containerd[1548]: 2024-08-05 22:08:50.023 [WARNING][5095] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" HandleID="k8s-pod-network.6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" Workload="localhost-k8s-csi--node--driver--8gh6b-eth0"
Aug 5 22:08:50.028434 containerd[1548]: 2024-08-05 22:08:50.023 [INFO][5095] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" HandleID="k8s-pod-network.6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734" Workload="localhost-k8s-csi--node--driver--8gh6b-eth0"
Aug 5 22:08:50.028434 containerd[1548]: 2024-08-05 22:08:50.025 [INFO][5095] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:08:50.028434 containerd[1548]: 2024-08-05 22:08:50.026 [INFO][5088] k8s.go 621: Teardown processing complete. ContainerID="6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734"
Aug 5 22:08:50.028434 containerd[1548]: time="2024-08-05T22:08:50.027903169Z" level=info msg="TearDown network for sandbox \"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\" successfully"
Aug 5 22:08:50.031041 containerd[1548]: time="2024-08-05T22:08:50.031010282Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 5 22:08:50.031241 containerd[1548]: time="2024-08-05T22:08:50.031192404Z" level=info msg="RemovePodSandbox \"6dae6203c42f5145118cf6395f9509faa2639c9ec9d5493e0b706f30a9e27734\" returns successfully"
Aug 5 22:08:54.049345 systemd[1]: Started sshd@18-10.0.0.66:22-10.0.0.1:43736.service - OpenSSH per-connection server daemon (10.0.0.1:43736).
Aug 5 22:08:54.083114 sshd[5129]: Accepted publickey for core from 10.0.0.1 port 43736 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:08:54.083759 sshd[5129]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:08:54.087645 systemd-logind[1525]: New session 19 of user core.
Aug 5 22:08:54.104317 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 5 22:08:54.214058 sshd[5129]: pam_unix(sshd:session): session closed for user core
Aug 5 22:08:54.217898 systemd[1]: sshd@18-10.0.0.66:22-10.0.0.1:43736.service: Deactivated successfully.
Aug 5 22:08:54.220228 systemd[1]: session-19.scope: Deactivated successfully.
Aug 5 22:08:54.220294 systemd-logind[1525]: Session 19 logged out. Waiting for processes to exit.
Aug 5 22:08:54.222573 systemd-logind[1525]: Removed session 19.
Aug 5 22:08:59.229319 systemd[1]: Started sshd@19-10.0.0.66:22-10.0.0.1:43750.service - OpenSSH per-connection server daemon (10.0.0.1:43750).
Aug 5 22:08:59.262615 sshd[5144]: Accepted publickey for core from 10.0.0.1 port 43750 ssh2: RSA SHA256:m+vSf9MZ8jyHy+Dz2uz+ngzM5NRoRVVH/LZDa5ltoPE
Aug 5 22:08:59.263774 sshd[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:08:59.267169 systemd-logind[1525]: New session 20 of user core.
Aug 5 22:08:59.277330 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 5 22:08:59.383237 sshd[5144]: pam_unix(sshd:session): session closed for user core
Aug 5 22:08:59.386903 systemd[1]: sshd@19-10.0.0.66:22-10.0.0.1:43750.service: Deactivated successfully.
Aug 5 22:08:59.389396 systemd[1]: session-20.scope: Deactivated successfully.
Aug 5 22:08:59.390412 systemd-logind[1525]: Session 20 logged out. Waiting for processes to exit.
Aug 5 22:08:59.391150 systemd-logind[1525]: Removed session 20.