Jul 2 00:02:20.939811 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 2 00:02:20.939834 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024 Jul 2 00:02:20.939844 kernel: KASLR enabled Jul 2 00:02:20.939850 kernel: efi: EFI v2.7 by EDK II Jul 2 00:02:20.939856 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb900018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Jul 2 00:02:20.939862 kernel: random: crng init done Jul 2 00:02:20.939869 kernel: ACPI: Early table checksum verification disabled Jul 2 00:02:20.939875 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Jul 2 00:02:20.939881 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 2 00:02:20.939889 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:02:20.939895 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:02:20.939902 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:02:20.939908 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:02:20.939914 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:02:20.939922 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:02:20.939929 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:02:20.939936 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:02:20.939943 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 00:02:20.939950 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 2 00:02:20.939956 kernel: NUMA: Failed to initialise from firmware Jul 2 00:02:20.939963 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 2 00:02:20.939970 kernel: NUMA: NODE_DATA [mem 0xdc956800-0xdc95bfff] Jul 2 00:02:20.939976 kernel: Zone ranges: Jul 2 00:02:20.939983 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 2 00:02:20.939989 kernel: DMA32 empty Jul 2 00:02:20.939997 kernel: Normal empty Jul 2 00:02:20.940004 kernel: Movable zone start for each node Jul 2 00:02:20.940010 kernel: Early memory node ranges Jul 2 00:02:20.940017 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Jul 2 00:02:20.940024 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jul 2 00:02:20.940030 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jul 2 00:02:20.940037 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jul 2 00:02:20.940044 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jul 2 00:02:20.940050 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jul 2 00:02:20.940056 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jul 2 00:02:20.940063 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 2 00:02:20.940069 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 2 00:02:20.940077 kernel: psci: probing for conduit method from ACPI. Jul 2 00:02:20.940083 kernel: psci: PSCIv1.1 detected in firmware. 
Jul 2 00:02:20.940090 kernel: psci: Using standard PSCI v0.2 function IDs Jul 2 00:02:20.940099 kernel: psci: Trusted OS migration not required Jul 2 00:02:20.940106 kernel: psci: SMC Calling Convention v1.1 Jul 2 00:02:20.940113 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 2 00:02:20.940122 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Jul 2 00:02:20.940129 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Jul 2 00:02:20.940136 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 2 00:02:20.940143 kernel: Detected PIPT I-cache on CPU0 Jul 2 00:02:20.940150 kernel: CPU features: detected: GIC system register CPU interface Jul 2 00:02:20.940157 kernel: CPU features: detected: Hardware dirty bit management Jul 2 00:02:20.940164 kernel: CPU features: detected: Spectre-v4 Jul 2 00:02:20.940171 kernel: CPU features: detected: Spectre-BHB Jul 2 00:02:20.940178 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 2 00:02:20.940185 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 2 00:02:20.940194 kernel: CPU features: detected: ARM erratum 1418040 Jul 2 00:02:20.940201 kernel: alternatives: applying boot alternatives Jul 2 00:02:20.940209 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=894d8ea3debe01ca4faf80384c3adbf31dc72d8c1b6ccdad26befbaf28696295 Jul 2 00:02:20.940216 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 00:02:20.940223 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 00:02:20.940230 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 00:02:20.940237 kernel: Fallback order for Node 0: 0 Jul 2 00:02:20.940244 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jul 2 00:02:20.940251 kernel: Policy zone: DMA Jul 2 00:02:20.940257 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 00:02:20.940264 kernel: software IO TLB: area num 4. Jul 2 00:02:20.940272 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jul 2 00:02:20.940280 kernel: Memory: 2386844K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185444K reserved, 0K cma-reserved) Jul 2 00:02:20.940287 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 2 00:02:20.940294 kernel: trace event string verifier disabled Jul 2 00:02:20.940301 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 00:02:20.940308 kernel: rcu: RCU event tracing is enabled. Jul 2 00:02:20.940326 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 2 00:02:20.940334 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 00:02:20.940341 kernel: Tracing variant of Tasks RCU enabled. Jul 2 00:02:20.940348 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 2 00:02:20.940355 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 2 00:02:20.940362 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 2 00:02:20.940371 kernel: GICv3: 256 SPIs implemented Jul 2 00:02:20.940378 kernel: GICv3: 0 Extended SPIs implemented Jul 2 00:02:20.940385 kernel: Root IRQ handler: gic_handle_irq Jul 2 00:02:20.940392 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 2 00:02:20.940399 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 2 00:02:20.940405 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 2 00:02:20.940412 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1) Jul 2 00:02:20.940419 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1) Jul 2 00:02:20.940426 kernel: GICv3: using LPI property table @0x00000000400f0000 Jul 2 00:02:20.940434 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jul 2 00:02:20.940441 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 2 00:02:20.940449 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 2 00:02:20.940456 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 2 00:02:20.940463 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 2 00:02:20.940470 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 2 00:02:20.940477 kernel: arm-pv: using stolen time PV Jul 2 00:02:20.940485 kernel: Console: colour dummy device 80x25 Jul 2 00:02:20.940510 kernel: ACPI: Core revision 20230628 Jul 2 00:02:20.940518 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 2 00:02:20.940525 kernel: pid_max: default: 32768 minimum: 301 Jul 2 00:02:20.940533 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jul 2 00:02:20.940542 kernel: SELinux: Initializing. Jul 2 00:02:20.940549 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 00:02:20.940556 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 00:02:20.940563 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:02:20.940570 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:02:20.940577 kernel: rcu: Hierarchical SRCU implementation. Jul 2 00:02:20.940585 kernel: rcu: Max phase no-delay instances is 400. Jul 2 00:02:20.940592 kernel: Platform MSI: ITS@0x8080000 domain created Jul 2 00:02:20.940599 kernel: PCI/MSI: ITS@0x8080000 domain created Jul 2 00:02:20.940607 kernel: Remapping and enabling EFI services. Jul 2 00:02:20.940614 kernel: smp: Bringing up secondary CPUs ... 
Jul 2 00:02:20.940621 kernel: Detected PIPT I-cache on CPU1 Jul 2 00:02:20.940628 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 2 00:02:20.940636 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jul 2 00:02:20.940643 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 2 00:02:20.940650 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 2 00:02:20.940657 kernel: Detected PIPT I-cache on CPU2 Jul 2 00:02:20.940664 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 2 00:02:20.940671 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jul 2 00:02:20.940680 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 2 00:02:20.940687 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 2 00:02:20.940699 kernel: Detected PIPT I-cache on CPU3 Jul 2 00:02:20.940708 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 2 00:02:20.940715 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jul 2 00:02:20.940723 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 2 00:02:20.940730 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 2 00:02:20.940737 kernel: smp: Brought up 1 node, 4 CPUs Jul 2 00:02:20.940745 kernel: SMP: Total of 4 processors activated. Jul 2 00:02:20.940754 kernel: CPU features: detected: 32-bit EL0 Support Jul 2 00:02:20.940761 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 2 00:02:20.940768 kernel: CPU features: detected: Common not Private translations Jul 2 00:02:20.940776 kernel: CPU features: detected: CRC32 instructions Jul 2 00:02:20.940783 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 2 00:02:20.940791 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 2 00:02:20.940798 kernel: CPU features: detected: LSE atomic instructions Jul 2 00:02:20.940806 kernel: CPU features: detected: Privileged Access Never Jul 2 00:02:20.940815 kernel: CPU features: detected: RAS Extension Support Jul 2 00:02:20.940823 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 2 00:02:20.940830 kernel: CPU: All CPU(s) started at EL1 Jul 2 00:02:20.940837 kernel: alternatives: applying system-wide alternatives Jul 2 00:02:20.940845 kernel: devtmpfs: initialized Jul 2 00:02:20.940853 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 00:02:20.940860 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 2 00:02:20.940867 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 00:02:20.940875 kernel: SMBIOS 3.0.0 present. 
Jul 2 00:02:20.940884 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Jul 2 00:02:20.940891 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 00:02:20.940899 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 2 00:02:20.940906 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 2 00:02:20.940913 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 2 00:02:20.940921 kernel: audit: initializing netlink subsys (disabled) Jul 2 00:02:20.940928 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Jul 2 00:02:20.940936 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 00:02:20.940943 kernel: cpuidle: using governor menu Jul 2 00:02:20.940952 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 2 00:02:20.940959 kernel: ASID allocator initialised with 32768 entries Jul 2 00:02:20.940967 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 00:02:20.940974 kernel: Serial: AMBA PL011 UART driver Jul 2 00:02:20.940981 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 2 00:02:20.940989 kernel: Modules: 0 pages in range for non-PLT usage Jul 2 00:02:20.940996 kernel: Modules: 509120 pages in range for PLT usage Jul 2 00:02:20.941004 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 00:02:20.941012 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 2 00:02:20.941021 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 2 00:02:20.941029 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 2 00:02:20.941036 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 00:02:20.941044 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 00:02:20.941051 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 2 00:02:20.941059 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 2 00:02:20.941066 kernel: ACPI: Added _OSI(Module Device) Jul 2 00:02:20.941073 kernel: ACPI: Added _OSI(Processor Device) Jul 2 00:02:20.941081 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 00:02:20.941090 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 00:02:20.941097 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 00:02:20.941105 kernel: ACPI: Interpreter enabled Jul 2 00:02:20.941112 kernel: ACPI: Using GIC for interrupt routing Jul 2 00:02:20.941119 kernel: ACPI: MCFG table detected, 1 entries Jul 2 00:02:20.941127 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 2 00:02:20.941134 kernel: printk: console [ttyAMA0] enabled Jul 2 00:02:20.941141 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 00:02:20.941284 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 2 00:02:20.941373 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 2 00:02:20.941442 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 2 00:02:20.941524 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 2 00:02:20.941593 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 2 00:02:20.941603 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 2 00:02:20.941611 kernel: PCI host bridge to bus 
0000:00 Jul 2 00:02:20.941699 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 2 00:02:20.941767 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 2 00:02:20.941829 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 2 00:02:20.941891 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 00:02:20.941974 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jul 2 00:02:20.942053 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jul 2 00:02:20.942123 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jul 2 00:02:20.942195 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jul 2 00:02:20.942266 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jul 2 00:02:20.942346 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jul 2 00:02:20.942457 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jul 2 00:02:20.942571 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jul 2 00:02:20.942638 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 2 00:02:20.942701 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 2 00:02:20.942769 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 2 00:02:20.942779 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 2 00:02:20.942787 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 2 00:02:20.942794 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 2 00:02:20.942802 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 2 00:02:20.942810 kernel: iommu: Default domain type: Translated Jul 2 00:02:20.942817 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 2 00:02:20.942825 kernel: efivars: Registered efivars operations Jul 2 00:02:20.942833 kernel: vgaarb: loaded Jul 2 00:02:20.942842 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 2 00:02:20.942850 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 00:02:20.942857 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 00:02:20.942865 kernel: pnp: PnP ACPI init Jul 2 00:02:20.942957 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 2 00:02:20.942969 kernel: pnp: PnP ACPI: found 1 devices Jul 2 00:02:20.942976 kernel: NET: Registered PF_INET protocol family Jul 2 00:02:20.942984 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 00:02:20.942994 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 2 00:02:20.943002 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 00:02:20.943010 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 00:02:20.943017 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 2 00:02:20.943025 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 2 00:02:20.943033 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 00:02:20.943041 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 00:02:20.943052 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 00:02:20.943059 kernel: PCI: CLS 0 bytes, default 64 Jul 2 00:02:20.943069 kernel: kvm [1]: HYP mode not available Jul 2 00:02:20.943076 kernel: Initialise system trusted keyrings Jul 2 
00:02:20.943084 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 2 00:02:20.943092 kernel: Key type asymmetric registered Jul 2 00:02:20.943101 kernel: Asymmetric key parser 'x509' registered Jul 2 00:02:20.943109 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 2 00:02:20.943119 kernel: io scheduler mq-deadline registered Jul 2 00:02:20.943129 kernel: io scheduler kyber registered Jul 2 00:02:20.943136 kernel: io scheduler bfq registered Jul 2 00:02:20.943146 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 2 00:02:20.943154 kernel: ACPI: button: Power Button [PWRB] Jul 2 00:02:20.943162 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 2 00:02:20.943237 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 2 00:02:20.943247 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 00:02:20.943255 kernel: thunder_xcv, ver 1.0 Jul 2 00:02:20.943263 kernel: thunder_bgx, ver 1.0 Jul 2 00:02:20.943270 kernel: nicpf, ver 1.0 Jul 2 00:02:20.943277 kernel: nicvf, ver 1.0 Jul 2 00:02:20.943404 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 2 00:02:20.943477 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T00:02:20 UTC (1719878540) Jul 2 00:02:20.943500 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 00:02:20.943509 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 2 00:02:20.943517 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 2 00:02:20.943524 kernel: watchdog: Hard watchdog permanently disabled Jul 2 00:02:20.943532 kernel: NET: Registered PF_INET6 protocol family Jul 2 00:02:20.943540 kernel: Segment Routing with IPv6 Jul 2 00:02:20.943552 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 00:02:20.943559 kernel: NET: Registered PF_PACKET protocol family Jul 2 00:02:20.943567 kernel: Key type dns_resolver registered Jul 2 00:02:20.943574 kernel: registered taskstats version 1 Jul 2 00:02:20.943582 kernel: Loading compiled-in X.509 certificates Jul 2 00:02:20.943590 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf' Jul 2 00:02:20.943597 kernel: Key type .fscrypt registered Jul 2 00:02:20.943605 kernel: Key type fscrypt-provisioning registered Jul 2 00:02:20.943612 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 2 00:02:20.943622 kernel: ima: Allocated hash algorithm: sha1 Jul 2 00:02:20.943630 kernel: ima: No architecture policies found Jul 2 00:02:20.943637 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 2 00:02:20.943645 kernel: clk: Disabling unused clocks Jul 2 00:02:20.943652 kernel: Freeing unused kernel memory: 39040K Jul 2 00:02:20.943660 kernel: Run /init as init process Jul 2 00:02:20.943667 kernel: with arguments: Jul 2 00:02:20.943674 kernel: /init Jul 2 00:02:20.943682 kernel: with environment: Jul 2 00:02:20.943690 kernel: HOME=/ Jul 2 00:02:20.943698 kernel: TERM=linux Jul 2 00:02:20.943705 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 00:02:20.943714 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 00:02:20.943724 systemd[1]: Detected virtualization kvm. 
Jul 2 00:02:20.943732 systemd[1]: Detected architecture arm64. Jul 2 00:02:20.943740 systemd[1]: Running in initrd. Jul 2 00:02:20.943749 systemd[1]: No hostname configured, using default hostname. Jul 2 00:02:20.943757 systemd[1]: Hostname set to . Jul 2 00:02:20.943765 systemd[1]: Initializing machine ID from VM UUID. Jul 2 00:02:20.943773 systemd[1]: Queued start job for default target initrd.target. Jul 2 00:02:20.943781 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:02:20.943790 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:02:20.943798 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 2 00:02:20.943807 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 00:02:20.943817 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 2 00:02:20.943825 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 2 00:02:20.943835 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 2 00:02:20.943843 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 2 00:02:20.943852 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:02:20.943860 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:02:20.943868 systemd[1]: Reached target paths.target - Path Units. Jul 2 00:02:20.943878 systemd[1]: Reached target slices.target - Slice Units. Jul 2 00:02:20.943886 systemd[1]: Reached target swap.target - Swaps. Jul 2 00:02:20.943894 systemd[1]: Reached target timers.target - Timer Units. Jul 2 00:02:20.943902 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 00:02:20.943910 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 00:02:20.943918 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 00:02:20.943927 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 00:02:20.943935 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:02:20.943943 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 00:02:20.943953 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:02:20.943960 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 00:02:20.943969 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 2 00:02:20.943977 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 00:02:20.943985 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 00:02:20.943993 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 00:02:20.944001 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 00:02:20.944009 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 00:02:20.944019 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:02:20.944027 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 2 00:02:20.944036 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jul 2 00:02:20.944044 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 00:02:20.944053 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 00:02:20.944063 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:02:20.944091 systemd-journald[237]: Collecting audit messages is disabled. Jul 2 00:02:20.944111 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 00:02:20.944120 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 00:02:20.944131 systemd-journald[237]: Journal started Jul 2 00:02:20.944151 systemd-journald[237]: Runtime Journal (/run/log/journal/ad002bf0835e411792846ff9bc2139d6) is 5.9M, max 47.3M, 41.4M free. Jul 2 00:02:20.954600 kernel: Bridge firewalling registered Jul 2 00:02:20.932119 systemd-modules-load[238]: Inserted module 'overlay' Jul 2 00:02:20.950668 systemd-modules-load[238]: Inserted module 'br_netfilter' Jul 2 00:02:20.957940 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:02:20.960029 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 00:02:20.961803 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 00:02:20.963531 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 00:02:20.969747 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:02:20.971473 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 00:02:20.972718 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:02:20.975698 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:02:20.979106 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 00:02:20.982742 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:02:20.983964 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:02:20.987145 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:02:20.995973 dracut-cmdline[271]: dracut-dracut-053 Jul 2 00:02:20.998586 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=894d8ea3debe01ca4faf80384c3adbf31dc72d8c1b6ccdad26befbaf28696295 Jul 2 00:02:21.013169 systemd-resolved[274]: Positive Trust Anchors: Jul 2 00:02:21.013189 systemd-resolved[274]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:02:21.013220 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:02:21.022868 systemd-resolved[274]: Defaulting to hostname 'linux'. Jul 2 00:02:21.024586 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:02:21.025588 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:02:21.095505 kernel: SCSI subsystem initialized Jul 2 00:02:21.098509 kernel: Loading iSCSI transport class v2.0-870. Jul 2 00:02:21.106541 kernel: iscsi: registered transport (tcp) Jul 2 00:02:21.120212 kernel: iscsi: registered transport (qla4xxx) Jul 2 00:02:21.120267 kernel: QLogic iSCSI HBA Driver Jul 2 00:02:21.162932 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 00:02:21.170680 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 00:02:21.188221 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 00:02:21.188296 kernel: device-mapper: uevent: version 1.0.3 Jul 2 00:02:21.189588 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 2 00:02:21.239528 kernel: raid6: neonx8 gen() 15694 MB/s Jul 2 00:02:21.256513 kernel: raid6: neonx4 gen() 15648 MB/s Jul 2 00:02:21.273513 kernel: raid6: neonx2 gen() 13163 MB/s Jul 2 00:02:21.290508 kernel: raid6: neonx1 gen() 10451 MB/s Jul 2 00:02:21.307510 kernel: raid6: int64x8 gen() 6906 MB/s Jul 2 00:02:21.324509 kernel: raid6: int64x4 gen() 7270 MB/s Jul 2 00:02:21.341510 kernel: raid6: int64x2 gen() 6098 MB/s Jul 2 00:02:21.358509 kernel: raid6: int64x1 gen() 5034 MB/s Jul 2 00:02:21.358522 kernel: raid6: using algorithm neonx8 gen() 15694 MB/s Jul 2 00:02:21.375512 kernel: raid6: .... xor() 11913 MB/s, rmw enabled Jul 2 00:02:21.375524 kernel: raid6: using neon recovery algorithm Jul 2 00:02:21.380791 kernel: xor: measuring software checksum speed Jul 2 00:02:21.380806 kernel: 8regs : 19854 MB/sec Jul 2 00:02:21.381659 kernel: 32regs : 19697 MB/sec Jul 2 00:02:21.382841 kernel: arm64_neon : 27206 MB/sec Jul 2 00:02:21.382855 kernel: xor: using function: arm64_neon (27206 MB/sec) Jul 2 00:02:21.438538 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 2 00:02:21.451175 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:02:21.467731 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:02:21.483260 systemd-udevd[457]: Using default interface naming scheme 'v255'. Jul 2 00:02:21.486562 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:02:21.488956 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 00:02:21.504805 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Jul 2 00:02:21.533737 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 2 00:02:21.543693 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 00:02:21.583858 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:02:21.590701 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 00:02:21.605528 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 00:02:21.606771 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:02:21.607857 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:02:21.610307 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 00:02:21.619863 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 00:02:21.629878 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:02:21.633735 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 2 00:02:21.643433 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 2 00:02:21.643589 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 00:02:21.643601 kernel: GPT:9289727 != 19775487 Jul 2 00:02:21.643610 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 00:02:21.643619 kernel: GPT:9289727 != 19775487 Jul 2 00:02:21.643628 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 00:02:21.643639 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 00:02:21.639503 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:02:21.639613 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:02:21.640825 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:02:21.642150 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:02:21.642338 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:02:21.645823 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:02:21.654708 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:02:21.669547 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (511) Jul 2 00:02:21.672544 kernel: BTRFS: device fsid 2e7aff7f-b51e-4094-8f16-54690a62fb17 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (505) Jul 2 00:02:21.673006 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:02:21.677901 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 2 00:02:21.682367 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 2 00:02:21.687052 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 2 00:02:21.690880 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 2 00:02:21.691865 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 2 00:02:21.711669 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 00:02:21.714142 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:02:21.718766 disk-uuid[549]: Primary Header is updated. 
Jul 2 00:02:21.718766 disk-uuid[549]: Secondary Entries is updated. Jul 2 00:02:21.718766 disk-uuid[549]: Secondary Header is updated. Jul 2 00:02:21.721504 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 00:02:21.747515 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:02:22.736520 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 00:02:22.737943 disk-uuid[551]: The operation has completed successfully. Jul 2 00:02:22.760847 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 00:02:22.760970 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 00:02:22.782723 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 00:02:22.787401 sh[573]: Success Jul 2 00:02:22.802560 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 2 00:02:22.853083 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 00:02:22.855105 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 00:02:22.856923 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 00:02:22.869949 kernel: BTRFS info (device dm-0): first mount of filesystem 2e7aff7f-b51e-4094-8f16-54690a62fb17 Jul 2 00:02:22.870000 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 2 00:02:22.870021 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 00:02:22.870794 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 00:02:22.871879 kernel: BTRFS info (device dm-0): using free space tree Jul 2 00:02:22.876518 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 00:02:22.878015 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 00:02:22.895705 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 00:02:22.897513 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 00:02:22.910013 kernel: BTRFS info (device vda6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1 Jul 2 00:02:22.910063 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 00:02:22.910074 kernel: BTRFS info (device vda6): using free space tree Jul 2 00:02:22.913533 kernel: BTRFS info (device vda6): auto enabling async discard Jul 2 00:02:22.922670 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 00:02:22.924382 kernel: BTRFS info (device vda6): last unmount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1 Jul 2 00:02:22.931277 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 00:02:22.942685 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 00:02:23.019227 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:02:23.030723 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 00:02:23.070748 systemd-networkd[759]: lo: Link UP Jul 2 00:02:23.070760 systemd-networkd[759]: lo: Gained carrier Jul 2 00:02:23.071821 systemd-networkd[759]: Enumeration completed Jul 2 00:02:23.072709 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 2 00:02:23.072713 systemd-networkd[759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:02:23.073231 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:02:23.074521 systemd-networkd[759]: eth0: Link UP Jul 2 00:02:23.074525 systemd-networkd[759]: eth0: Gained carrier Jul 2 00:02:23.074536 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:02:23.074858 systemd[1]: Reached target network.target - Network. Jul 2 00:02:23.095328 ignition[667]: Ignition 2.18.0 Jul 2 00:02:23.095340 ignition[667]: Stage: fetch-offline Jul 2 00:02:23.095378 ignition[667]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:02:23.095387 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:02:23.098584 systemd-networkd[759]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 00:02:23.095482 ignition[667]: parsed url from cmdline: "" Jul 2 00:02:23.095486 ignition[667]: no config URL provided Jul 2 00:02:23.095507 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 00:02:23.095515 ignition[667]: no config at "/usr/lib/ignition/user.ign" Jul 2 00:02:23.095540 ignition[667]: op(1): [started] loading QEMU firmware config module Jul 2 00:02:23.095544 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 2 00:02:23.115645 ignition[667]: op(1): [finished] loading QEMU firmware config module Jul 2 00:02:23.115671 ignition[667]: QEMU firmware config was not found. Ignoring... Jul 2 00:02:23.154573 ignition[667]: parsing config with SHA512: 6d56b483abf5fcb260f8ba65d320b201267f4c7c30f228178766bc58aa7128ec5b6ce500db3602e516d627ffc0b4d8a24a3a257a33f5dfdb686843ff8aadb442 Jul 2 00:02:23.158757 unknown[667]: fetched base config from "system" Jul 2 00:02:23.158768 unknown[667]: fetched user config from "qemu" Jul 2 00:02:23.159255 ignition[667]: fetch-offline: fetch-offline passed Jul 2 00:02:23.159318 ignition[667]: Ignition finished successfully Jul 2 00:02:23.161470 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 00:02:23.163525 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 2 00:02:23.172655 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 2 00:02:23.184955 ignition[774]: Ignition 2.18.0 Jul 2 00:02:23.184967 ignition[774]: Stage: kargs Jul 2 00:02:23.185145 ignition[774]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:02:23.185155 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:02:23.189767 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 00:02:23.186040 ignition[774]: kargs: kargs passed Jul 2 00:02:23.186091 ignition[774]: Ignition finished successfully Jul 2 00:02:23.204785 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 00:02:23.215963 ignition[783]: Ignition 2.18.0 Jul 2 00:02:23.215974 ignition[783]: Stage: disks Jul 2 00:02:23.216149 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:02:23.216160 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:02:23.219998 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 00:02:23.217049 ignition[783]: disks: disks passed Jul 2 00:02:23.221649 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jul 2 00:02:23.217098 ignition[783]: Ignition finished successfully Jul 2 00:02:23.223896 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 00:02:23.225706 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 00:02:23.228166 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:02:23.230101 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:02:23.242673 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 00:02:23.255128 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 2 00:02:23.403169 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 00:02:23.413730 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 2 00:02:23.458502 kernel: EXT4-fs (vda9): mounted filesystem 95038baa-e9f1-4207-86a5-38a4ce3cff7d r/w with ordered data mode. Quota mode: none. Jul 2 00:02:23.458812 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 00:02:23.459915 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 00:02:23.470637 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 00:02:23.473037 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 00:02:23.473940 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 2 00:02:23.473984 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 00:02:23.474006 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 00:02:23.480655 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 00:02:23.484002 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 2 00:02:23.488159 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (803) Jul 2 00:02:23.488200 kernel: BTRFS info (device vda6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1 Jul 2 00:02:23.488212 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 00:02:23.488221 kernel: BTRFS info (device vda6): using free space tree Jul 2 00:02:23.492040 kernel: BTRFS info (device vda6): auto enabling async discard Jul 2 00:02:23.492385 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 00:02:23.549617 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 00:02:23.554974 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory Jul 2 00:02:23.558657 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 00:02:23.563793 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 00:02:23.666406 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 00:02:23.675901 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 00:02:23.678374 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jul 2 00:02:23.683528 kernel: BTRFS info (device vda6): last unmount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1 Jul 2 00:02:23.703512 ignition[916]: INFO : Ignition 2.18.0 Jul 2 00:02:23.703512 ignition[916]: INFO : Stage: mount Jul 2 00:02:23.705318 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:02:23.705318 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:02:23.705318 ignition[916]: INFO : mount: mount passed Jul 2 00:02:23.705318 ignition[916]: INFO : Ignition finished successfully Jul 2 00:02:23.706650 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 00:02:23.709939 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 2 00:02:23.718608 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 00:02:23.868804 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 2 00:02:23.890204 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 00:02:23.901938 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (930) Jul 2 00:02:23.901980 kernel: BTRFS info (device vda6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1 Jul 2 00:02:23.901991 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 00:02:23.902593 kernel: BTRFS info (device vda6): using free space tree Jul 2 00:02:23.905514 kernel: BTRFS info (device vda6): auto enabling async discard Jul 2 00:02:23.906544 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 00:02:23.923966 ignition[947]: INFO : Ignition 2.18.0 Jul 2 00:02:23.923966 ignition[947]: INFO : Stage: files Jul 2 00:02:23.925218 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:02:23.925218 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:02:23.925218 ignition[947]: DEBUG : files: compiled without relabeling support, skipping Jul 2 00:02:23.928051 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 00:02:23.928051 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 00:02:23.930033 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 00:02:23.930033 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 00:02:23.930033 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 00:02:23.928779 unknown[947]: wrote ssh authorized keys file for user: core Jul 2 00:02:23.933905 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 00:02:23.933905 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 2 00:02:23.971008 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 00:02:24.012599 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 00:02:24.012599 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 2 00:02:24.015348 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 00:02:24.015348 ignition[947]: INFO : 
files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 00:02:24.015348 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 00:02:24.015348 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 00:02:24.015348 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 00:02:24.015348 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 00:02:24.015348 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 00:02:24.015348 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:02:24.015348 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:02:24.015348 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 00:02:24.015348 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 00:02:24.015348 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 00:02:24.015348 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Jul 2 00:02:24.329783 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 2 00:02:24.677011 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Jul 2 00:02:24.677011 ignition[947]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 2 00:02:24.679658 ignition[947]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:02:24.679658 ignition[947]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:02:24.679658 ignition[947]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 2 00:02:24.679658 ignition[947]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 2 00:02:24.679658 ignition[947]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 00:02:24.679658 ignition[947]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 00:02:24.679658 ignition[947]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 2 00:02:24.679658 ignition[947]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 2 
00:02:24.706983 ignition[947]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 00:02:24.710895 ignition[947]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 00:02:24.712060 ignition[947]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 2 00:02:24.712060 ignition[947]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 2 00:02:24.712060 ignition[947]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 00:02:24.712060 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:02:24.712060 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:02:24.712060 ignition[947]: INFO : files: files passed Jul 2 00:02:24.712060 ignition[947]: INFO : Ignition finished successfully Jul 2 00:02:24.713550 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 00:02:24.726688 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 00:02:24.728665 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 2 00:02:24.731172 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 00:02:24.731268 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 2 00:02:24.736070 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory Jul 2 00:02:24.739476 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:02:24.739476 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:02:24.743189 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:02:24.745294 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 00:02:24.746790 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 00:02:24.760664 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 00:02:24.783615 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 00:02:24.783752 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 00:02:24.786025 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 00:02:24.787942 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 00:02:24.789749 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 00:02:24.790593 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 00:02:24.806093 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 00:02:24.818668 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 00:02:24.826741 systemd[1]: Stopped target network.target - Network. Jul 2 00:02:24.827524 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:02:24.828792 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 2 00:02:24.830266 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 00:02:24.832050 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 00:02:24.832179 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 00:02:24.834045 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 00:02:24.835597 systemd[1]: Stopped target basic.target - Basic System. Jul 2 00:02:24.836789 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 00:02:24.838032 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 00:02:24.839412 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 00:02:24.842248 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 00:02:24.843601 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:02:24.846167 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 00:02:24.847592 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 00:02:24.849031 systemd[1]: Stopped target swap.target - Swaps. Jul 2 00:02:24.850109 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 00:02:24.850232 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:02:24.851904 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:02:24.853330 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:02:24.854796 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 2 00:02:24.856146 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:02:24.857090 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 00:02:24.857210 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 2 00:02:24.859204 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 00:02:24.859319 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 00:02:24.860864 systemd[1]: Stopped target paths.target - Path Units. Jul 2 00:02:24.861991 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 00:02:24.865574 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:02:24.866525 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 00:02:24.868152 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 00:02:24.869420 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 00:02:24.869515 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 00:02:24.870648 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 00:02:24.870723 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 00:02:24.871834 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 00:02:24.871936 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 00:02:24.873209 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 00:02:24.873314 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 00:02:24.884675 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 00:02:24.885361 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jul 2 00:02:24.885487 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:02:24.890743 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 00:02:24.891578 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 00:02:24.892838 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 00:02:24.896583 ignition[1003]: INFO : Ignition 2.18.0 Jul 2 00:02:24.896583 ignition[1003]: INFO : Stage: umount Jul 2 00:02:24.896583 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:02:24.896583 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 00:02:24.894743 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 00:02:24.904849 ignition[1003]: INFO : umount: umount passed Jul 2 00:02:24.904849 ignition[1003]: INFO : Ignition finished successfully Jul 2 00:02:24.894880 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:02:24.898187 systemd-networkd[759]: eth0: DHCPv6 lease lost Jul 2 00:02:24.899748 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 00:02:24.899852 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 00:02:24.903784 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 00:02:24.903882 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 00:02:24.907720 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 00:02:24.908255 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 00:02:24.908359 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 00:02:24.912575 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 00:02:24.912669 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 00:02:24.920204 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 00:02:24.920329 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 2 00:02:24.923235 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 00:02:24.923271 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:02:24.924572 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 00:02:24.924622 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 00:02:24.925914 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 00:02:24.925958 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 00:02:24.927485 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 00:02:24.927539 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 00:02:24.928897 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 2 00:02:24.928940 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 2 00:02:24.939628 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 00:02:24.940289 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 00:02:24.940355 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:02:24.941839 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:02:24.941883 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:02:24.943188 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jul 2 00:02:24.943227 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 00:02:24.944882 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 00:02:24.944920 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:02:24.946427 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:02:24.963746 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 00:02:24.963911 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:02:24.965812 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 00:02:24.965853 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 00:02:24.967100 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 00:02:24.967133 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:02:24.968423 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 00:02:24.968468 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:02:24.970540 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 00:02:24.970583 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 00:02:24.972868 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:02:24.972915 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:02:24.984692 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 00:02:24.985461 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 00:02:24.985532 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:02:24.987176 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:02:24.987218 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:02:24.988973 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 00:02:24.989058 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 00:02:24.990291 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 00:02:24.990386 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 00:02:24.991629 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 00:02:24.991711 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 00:02:24.994256 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 00:02:24.995729 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 00:02:24.995791 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 00:02:24.997929 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 00:02:25.008848 systemd[1]: Switching root. Jul 2 00:02:25.036425 systemd-journald[237]: Journal stopped Jul 2 00:02:25.773076 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Jul 2 00:02:25.773136 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 00:02:25.773148 kernel: SELinux: policy capability open_perms=1 Jul 2 00:02:25.773158 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 00:02:25.773167 kernel: SELinux: policy capability always_check_network=0 Jul 2 00:02:25.773177 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 00:02:25.773190 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 00:02:25.773204 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 00:02:25.773214 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 00:02:25.773223 kernel: audit: type=1403 audit(1719878545.193:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 00:02:25.773234 systemd[1]: Successfully loaded SELinux policy in 30.772ms. Jul 2 00:02:25.773247 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.595ms. Jul 2 00:02:25.773280 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 00:02:25.773291 systemd[1]: Detected virtualization kvm. Jul 2 00:02:25.773312 systemd[1]: Detected architecture arm64. Jul 2 00:02:25.773328 systemd[1]: Detected first boot. Jul 2 00:02:25.773339 systemd[1]: Initializing machine ID from VM UUID. Jul 2 00:02:25.773353 zram_generator::config[1047]: No configuration found. Jul 2 00:02:25.773365 systemd[1]: Populated /etc with preset unit settings. Jul 2 00:02:25.773375 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 00:02:25.773386 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 2 00:02:25.773396 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 00:02:25.773407 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 00:02:25.773419 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 00:02:25.773430 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 00:02:25.773443 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 00:02:25.773463 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 00:02:25.773477 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 00:02:25.773510 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 00:02:25.773547 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 00:02:25.773559 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:02:25.773570 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:02:25.773584 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 00:02:25.773595 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 00:02:25.773605 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 00:02:25.773619 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jul 2 00:02:25.773631 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 2 00:02:25.773642 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:02:25.773652 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 2 00:02:25.773662 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 2 00:02:25.773673 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 2 00:02:25.773685 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 00:02:25.773698 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:02:25.773708 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 00:02:25.773719 systemd[1]: Reached target slices.target - Slice Units. Jul 2 00:02:25.773730 systemd[1]: Reached target swap.target - Swaps. Jul 2 00:02:25.773741 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 00:02:25.773752 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 00:02:25.773763 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:02:25.773777 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 00:02:25.773788 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:02:25.773800 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 00:02:25.773811 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 00:02:25.773821 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 00:02:25.773832 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 00:02:25.773843 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 00:02:25.773853 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 00:02:25.773863 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 2 00:02:25.773876 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 00:02:25.773887 systemd[1]: Reached target machines.target - Containers. Jul 2 00:02:25.773898 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 00:02:25.773909 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:02:25.773920 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 00:02:25.773931 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 00:02:25.773942 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:02:25.773953 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 00:02:25.773965 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:02:25.773976 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 00:02:25.773986 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:02:25.773999 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jul 2 00:02:25.774009 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 00:02:25.774019 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 2 00:02:25.774035 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 00:02:25.774046 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 00:02:25.774057 kernel: fuse: init (API version 7.39) Jul 2 00:02:25.774067 kernel: loop: module loaded Jul 2 00:02:25.774077 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 00:02:25.774088 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 00:02:25.774098 kernel: ACPI: bus type drm_connector registered Jul 2 00:02:25.774108 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 00:02:25.774118 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 00:02:25.774129 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 00:02:25.774139 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 00:02:25.774149 systemd[1]: Stopped verity-setup.service. Jul 2 00:02:25.774162 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 00:02:25.774172 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 00:02:25.774183 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 00:02:25.774193 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 00:02:25.774205 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 00:02:25.774215 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 00:02:25.774248 systemd-journald[1120]: Collecting audit messages is disabled. Jul 2 00:02:25.774270 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 00:02:25.774282 systemd-journald[1120]: Journal started Jul 2 00:02:25.774312 systemd-journald[1120]: Runtime Journal (/run/log/journal/ad002bf0835e411792846ff9bc2139d6) is 5.9M, max 47.3M, 41.4M free. Jul 2 00:02:25.568096 systemd[1]: Queued start job for default target multi-user.target. Jul 2 00:02:25.584635 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 2 00:02:25.585028 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 00:02:25.776003 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 00:02:25.776870 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:02:25.778232 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 00:02:25.778407 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 00:02:25.779660 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:02:25.779808 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:02:25.781013 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:02:25.781162 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 00:02:25.782335 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:02:25.783540 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:02:25.784799 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 00:02:25.784935 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jul 2 00:02:25.786033 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:02:25.786171 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:02:25.787265 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 00:02:25.789545 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 00:02:25.791151 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 00:02:25.804673 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 00:02:25.813595 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 2 00:02:25.815840 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 00:02:25.817025 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 00:02:25.817062 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 00:02:25.819291 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 2 00:02:25.821329 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 2 00:02:25.823242 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 00:02:25.824156 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:02:25.827224 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 00:02:25.829812 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 00:02:25.831080 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:02:25.834245 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 2 00:02:25.835641 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 00:02:25.839724 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:02:25.846592 systemd-journald[1120]: Time spent on flushing to /var/log/journal/ad002bf0835e411792846ff9bc2139d6 is 13.926ms for 851 entries. Jul 2 00:02:25.846592 systemd-journald[1120]: System Journal (/var/log/journal/ad002bf0835e411792846ff9bc2139d6) is 8.0M, max 195.6M, 187.6M free. Jul 2 00:02:26.210734 systemd-journald[1120]: Received client request to flush runtime journal. Jul 2 00:02:26.210794 kernel: loop0: detected capacity change from 0 to 113672 Jul 2 00:02:26.210815 kernel: block loop0: the capability attribute has been deprecated. Jul 2 00:02:26.210907 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 00:02:26.210924 kernel: loop1: detected capacity change from 0 to 59672 Jul 2 00:02:26.210942 kernel: loop2: detected capacity change from 0 to 194512 Jul 2 00:02:26.210958 kernel: loop3: detected capacity change from 0 to 113672 Jul 2 00:02:26.210974 kernel: loop4: detected capacity change from 0 to 59672 Jul 2 00:02:26.210995 kernel: loop5: detected capacity change from 0 to 194512 Jul 2 00:02:25.844721 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 00:02:25.849416 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jul 2 00:02:25.852119 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:02:25.853325 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 00:02:25.854770 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 00:02:25.856047 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 2 00:02:25.876829 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 00:02:25.886512 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:02:25.890101 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 00:02:25.898151 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 00:02:25.908874 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 00:02:25.926196 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Jul 2 00:02:25.926206 systemd-tmpfiles[1169]: ACLs are not supported, ignoring. Jul 2 00:02:25.930002 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:02:25.936340 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 2 00:02:25.939779 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 00:02:25.950693 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 2 00:02:26.174275 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 2 00:02:26.174767 (sd-merge)[1177]: Merged extensions into '/usr'. Jul 2 00:02:26.180817 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)... Jul 2 00:02:26.180828 systemd[1]: Reloading... Jul 2 00:02:26.225437 zram_generator::config[1198]: No configuration found. Jul 2 00:02:26.340738 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:02:26.364636 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 00:02:26.379582 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 00:02:26.380111 systemd[1]: Reloading finished in 198 ms. Jul 2 00:02:26.415833 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 00:02:26.418696 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 00:02:26.420207 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 00:02:26.421640 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 00:02:26.446937 systemd[1]: Starting ensure-sysext.service... Jul 2 00:02:26.449162 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 00:02:26.455403 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)... Jul 2 00:02:26.455534 systemd[1]: Reloading... Jul 2 00:02:26.477124 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
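The (sd-merge) lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr; the kubernetes image is visible only because Ignition wrote the /etc/extensions/kubernetes.raw symlink earlier. As a rough sketch of where such images are discovered (directory list per systemd-sysext(8); illustrative only, not a reimplementation of sd-merge):

from pathlib import Path

# Hierarchies systemd-sysext scans for *.raw extension images.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions",
               "/var/lib/extensions", "/usr/lib/extensions"]

for d in SEARCH_DIRS:
    p = Path(d)
    if not p.is_dir():
        continue
    for image in sorted(p.glob("*.raw")):
        # On this machine this would list kubernetes.raw under /etc/extensions.
        print(image)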
Jul 2 00:02:26.477555 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 00:02:26.478428 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 00:02:26.479128 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jul 2 00:02:26.479196 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jul 2 00:02:26.481932 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 00:02:26.481946 systemd-tmpfiles[1240]: Skipping /boot Jul 2 00:02:26.491364 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 00:02:26.491379 systemd-tmpfiles[1240]: Skipping /boot Jul 2 00:02:26.506520 zram_generator::config[1265]: No configuration found. Jul 2 00:02:26.595761 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:02:26.633902 systemd[1]: Reloading finished in 177 ms. Jul 2 00:02:26.651528 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 00:02:26.659941 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:02:26.668766 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:02:26.671426 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 00:02:26.674050 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 00:02:26.679737 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:02:26.691900 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:02:26.698874 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 00:02:26.702860 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:02:26.704768 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:02:26.707324 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:02:26.715810 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:02:26.716955 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:02:26.721658 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 00:02:26.724925 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 00:02:26.726291 systemd-udevd[1307]: Using default interface naming scheme 'v255'. Jul 2 00:02:26.727174 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:02:26.727368 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:02:26.738145 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:02:26.745040 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:02:26.746420 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 2 00:02:26.748213 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 00:02:26.751172 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:02:26.753179 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:02:26.753359 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:02:26.754849 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 00:02:26.758249 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 00:02:26.764509 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:02:26.768054 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:02:26.775201 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 00:02:26.786784 systemd[1]: Finished ensure-sysext.service. Jul 2 00:02:26.789766 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:02:26.793446 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:02:26.798508 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1335) Jul 2 00:02:26.799060 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 00:02:26.803291 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:02:26.804320 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:02:26.807754 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 00:02:26.811641 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 2 00:02:26.812601 augenrules[1339]: No rules Jul 2 00:02:26.813132 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:02:26.814049 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:02:26.814228 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:02:26.816051 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:02:26.817917 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 00:02:26.819211 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:02:26.819349 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:02:26.821721 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:02:26.821860 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 00:02:26.823065 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:02:26.823204 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:02:26.844595 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:02:26.844672 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 00:02:26.844935 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Jul 2 00:02:26.861574 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1351) Jul 2 00:02:26.864146 systemd-resolved[1306]: Positive Trust Anchors: Jul 2 00:02:26.867605 systemd-resolved[1306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:02:26.867641 systemd-resolved[1306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:02:26.875878 systemd-resolved[1306]: Defaulting to hostname 'linux'. Jul 2 00:02:26.879826 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:02:26.883032 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 2 00:02:26.884588 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:02:26.890699 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 00:02:26.906155 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 00:02:26.922734 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 2 00:02:26.923847 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 00:02:26.932014 systemd-networkd[1368]: lo: Link UP Jul 2 00:02:26.932301 systemd-networkd[1368]: lo: Gained carrier Jul 2 00:02:26.933144 systemd-networkd[1368]: Enumeration completed Jul 2 00:02:26.933336 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:02:26.934778 systemd[1]: Reached target network.target - Network. Jul 2 00:02:26.936016 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:02:26.936109 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:02:26.937291 systemd-networkd[1368]: eth0: Link UP Jul 2 00:02:26.937382 systemd-networkd[1368]: eth0: Gained carrier Jul 2 00:02:26.937436 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:02:26.943773 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 00:02:26.945855 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:02:26.956771 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 00:02:26.959474 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 00:02:26.986599 lvm[1392]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:02:26.986642 systemd-networkd[1368]: eth0: DHCPv4 address 10.0.0.44/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 00:02:26.989619 systemd-timesyncd[1370]: Network configuration changed, trying to establish connection. Jul 2 00:02:27.450420 systemd-timesyncd[1370]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
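The DHCPv4 lease logged above (10.0.0.44/16 via gateway 10.0.0.1, which systemd-timesyncd also uses as its NTP server on port 123) can be sanity-checked with a few lines of Python; a throwaway sketch:

import ipaddress

# Confirm the gateway/NTP server sits inside the leased network.
iface = ipaddress.ip_interface("10.0.0.44/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)             # 10.0.0.0/16
print(gateway in iface.network)  # True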
Jul 2 00:02:27.450480 systemd-timesyncd[1370]: Initial clock synchronization to Tue 2024-07-02 00:02:27.450190 UTC. Jul 2 00:02:27.450964 systemd-resolved[1306]: Clock change detected. Flushing caches. Jul 2 00:02:27.460856 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:02:27.480716 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 00:02:27.482253 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:02:27.485255 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:02:27.486789 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 00:02:27.488085 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 00:02:27.489579 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 00:02:27.490758 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 00:02:27.492011 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 00:02:27.493257 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 00:02:27.493295 systemd[1]: Reached target paths.target - Path Units. Jul 2 00:02:27.494166 systemd[1]: Reached target timers.target - Timer Units. Jul 2 00:02:27.495916 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 00:02:27.498442 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 00:02:27.508310 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 00:02:27.510879 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 00:02:27.512400 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 00:02:27.513351 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 00:02:27.514068 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:02:27.514854 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:02:27.514891 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:02:27.515885 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 00:02:27.517743 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 00:02:27.519545 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:02:27.521689 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 00:02:27.526387 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 00:02:27.532328 jq[1403]: false Jul 2 00:02:27.530237 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 00:02:27.531314 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 00:02:27.537718 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 00:02:27.540641 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 00:02:27.548005 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jul 2 00:02:27.557517 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 00:02:27.560332 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 00:02:27.560831 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 00:02:27.562391 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 00:02:27.563563 extend-filesystems[1404]: Found loop3 Jul 2 00:02:27.565024 extend-filesystems[1404]: Found loop4 Jul 2 00:02:27.565024 extend-filesystems[1404]: Found loop5 Jul 2 00:02:27.565024 extend-filesystems[1404]: Found vda Jul 2 00:02:27.565024 extend-filesystems[1404]: Found vda1 Jul 2 00:02:27.565024 extend-filesystems[1404]: Found vda2 Jul 2 00:02:27.565024 extend-filesystems[1404]: Found vda3 Jul 2 00:02:27.565024 extend-filesystems[1404]: Found usr Jul 2 00:02:27.565024 extend-filesystems[1404]: Found vda4 Jul 2 00:02:27.565024 extend-filesystems[1404]: Found vda6 Jul 2 00:02:27.565024 extend-filesystems[1404]: Found vda7 Jul 2 00:02:27.565024 extend-filesystems[1404]: Found vda9 Jul 2 00:02:27.565024 extend-filesystems[1404]: Checking size of /dev/vda9 Jul 2 00:02:27.568596 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 00:02:27.570336 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 00:02:27.576413 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 00:02:27.583566 jq[1421]: true Jul 2 00:02:27.576594 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 00:02:27.576855 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 00:02:27.577005 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 00:02:27.582695 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 00:02:27.582916 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 00:02:27.584978 dbus-daemon[1402]: [system] SELinux support is enabled Jul 2 00:02:27.585363 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 00:02:27.601057 (ntainerd)[1426]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 00:02:27.602468 extend-filesystems[1404]: Resized partition /dev/vda9 Jul 2 00:02:27.605082 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 00:02:27.606982 jq[1425]: true Jul 2 00:02:27.605123 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 00:02:27.606819 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 00:02:27.606837 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jul 2 00:02:27.619158 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1351) Jul 2 00:02:27.624372 extend-filesystems[1438]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 00:02:27.627220 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 2 00:02:27.629791 tar[1424]: linux-arm64/helm Jul 2 00:02:27.646767 update_engine[1418]: I0702 00:02:27.645936 1418 main.cc:92] Flatcar Update Engine starting Jul 2 00:02:27.649179 update_engine[1418]: I0702 00:02:27.649123 1418 update_check_scheduler.cc:74] Next update check in 9m5s Jul 2 00:02:27.650304 systemd[1]: Started update-engine.service - Update Engine. Jul 2 00:02:27.659837 systemd-logind[1416]: Watching system buttons on /dev/input/event0 (Power Button) Jul 2 00:02:27.660558 systemd-logind[1416]: New seat seat0. Jul 2 00:02:27.673650 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 00:02:27.675355 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 00:02:27.698540 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 2 00:02:27.724697 extend-filesystems[1438]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 00:02:27.724697 extend-filesystems[1438]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 00:02:27.724697 extend-filesystems[1438]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 2 00:02:27.729360 extend-filesystems[1404]: Resized filesystem in /dev/vda9 Jul 2 00:02:27.730183 bash[1456]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:02:27.725210 locksmithd[1455]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 00:02:27.727790 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 00:02:27.727958 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 00:02:27.731485 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 00:02:27.736861 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 2 00:02:27.835641 containerd[1426]: time="2024-07-02T00:02:27.835549697Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 00:02:27.863162 containerd[1426]: time="2024-07-02T00:02:27.861637297Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 00:02:27.863162 containerd[1426]: time="2024-07-02T00:02:27.861688377Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:02:27.863267 containerd[1426]: time="2024-07-02T00:02:27.863158897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:02:27.863267 containerd[1426]: time="2024-07-02T00:02:27.863190857Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:02:27.863436 containerd[1426]: time="2024-07-02T00:02:27.863400697Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:02:27.863436 containerd[1426]: time="2024-07-02T00:02:27.863433217Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 00:02:27.863526 containerd[1426]: time="2024-07-02T00:02:27.863509017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 00:02:27.863589 containerd[1426]: time="2024-07-02T00:02:27.863571577Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:02:27.863616 containerd[1426]: time="2024-07-02T00:02:27.863588417Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 00:02:27.863661 containerd[1426]: time="2024-07-02T00:02:27.863647177Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:02:27.863850 containerd[1426]: time="2024-07-02T00:02:27.863831777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 00:02:27.863875 containerd[1426]: time="2024-07-02T00:02:27.863854057Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 00:02:27.863875 containerd[1426]: time="2024-07-02T00:02:27.863863977Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:02:27.863987 containerd[1426]: time="2024-07-02T00:02:27.863965937Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:02:27.863987 containerd[1426]: time="2024-07-02T00:02:27.863984177Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 00:02:27.864054 containerd[1426]: time="2024-07-02T00:02:27.864037137Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 00:02:27.864088 containerd[1426]: time="2024-07-02T00:02:27.864053817Z" level=info msg="metadata content store policy set" policy=shared Jul 2 00:02:27.867183 containerd[1426]: time="2024-07-02T00:02:27.867152737Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 00:02:27.867229 containerd[1426]: time="2024-07-02T00:02:27.867190177Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 00:02:27.867229 containerd[1426]: time="2024-07-02T00:02:27.867203737Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 00:02:27.867229 containerd[1426]: time="2024-07-02T00:02:27.867235777Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 00:02:27.867229 containerd[1426]: time="2024-07-02T00:02:27.867251377Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jul 2 00:02:27.867229 containerd[1426]: time="2024-07-02T00:02:27.867267097Z" level=info msg="NRI interface is disabled by configuration." Jul 2 00:02:27.867229 containerd[1426]: time="2024-07-02T00:02:27.867280657Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 00:02:27.867457 containerd[1426]: time="2024-07-02T00:02:27.867417777Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 00:02:27.867457 containerd[1426]: time="2024-07-02T00:02:27.867435417Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 00:02:27.867491 containerd[1426]: time="2024-07-02T00:02:27.867455217Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 00:02:27.867491 containerd[1426]: time="2024-07-02T00:02:27.867470257Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 00:02:27.867491 containerd[1426]: time="2024-07-02T00:02:27.867484337Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 00:02:27.867541 containerd[1426]: time="2024-07-02T00:02:27.867500217Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 00:02:27.867541 containerd[1426]: time="2024-07-02T00:02:27.867513977Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 00:02:27.867541 containerd[1426]: time="2024-07-02T00:02:27.867526897Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 00:02:27.867587 containerd[1426]: time="2024-07-02T00:02:27.867540457Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 00:02:27.867587 containerd[1426]: time="2024-07-02T00:02:27.867553937Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 00:02:27.867587 containerd[1426]: time="2024-07-02T00:02:27.867565457Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 00:02:27.867587 containerd[1426]: time="2024-07-02T00:02:27.867576857Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 00:02:27.868081 containerd[1426]: time="2024-07-02T00:02:27.867671137Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 00:02:27.868081 containerd[1426]: time="2024-07-02T00:02:27.867895537Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 00:02:27.868081 containerd[1426]: time="2024-07-02T00:02:27.867920417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 00:02:27.868081 containerd[1426]: time="2024-07-02T00:02:27.867934017Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 00:02:27.868081 containerd[1426]: time="2024-07-02T00:02:27.867965817Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Jul 2 00:02:27.868081 containerd[1426]: time="2024-07-02T00:02:27.868078937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 00:02:27.868368 containerd[1426]: time="2024-07-02T00:02:27.868092897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:02:27.868368 containerd[1426]: time="2024-07-02T00:02:27.868109297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 00:02:27.868368 containerd[1426]: time="2024-07-02T00:02:27.868120577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 00:02:27.868368 containerd[1426]: time="2024-07-02T00:02:27.868133857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:02:27.868368 containerd[1426]: time="2024-07-02T00:02:27.868160617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 00:02:27.868368 containerd[1426]: time="2024-07-02T00:02:27.868174057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 00:02:27.868368 containerd[1426]: time="2024-07-02T00:02:27.868186457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:02:27.868368 containerd[1426]: time="2024-07-02T00:02:27.868199177Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 00:02:27.868368 containerd[1426]: time="2024-07-02T00:02:27.868330777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 00:02:27.868368 containerd[1426]: time="2024-07-02T00:02:27.868348017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 00:02:27.868368 containerd[1426]: time="2024-07-02T00:02:27.868360257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 00:02:27.868550 containerd[1426]: time="2024-07-02T00:02:27.868375257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 00:02:27.868550 containerd[1426]: time="2024-07-02T00:02:27.868388257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 00:02:27.868550 containerd[1426]: time="2024-07-02T00:02:27.868402737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 00:02:27.868550 containerd[1426]: time="2024-07-02T00:02:27.868415337Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 00:02:27.868550 containerd[1426]: time="2024-07-02T00:02:27.868425857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 00:02:27.869500 containerd[1426]: time="2024-07-02T00:02:27.868801417Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:02:27.869500 containerd[1426]: time="2024-07-02T00:02:27.868869697Z" level=info msg="Connect containerd service" Jul 2 00:02:27.869500 containerd[1426]: time="2024-07-02T00:02:27.868899377Z" level=info msg="using legacy CRI server" Jul 2 00:02:27.869500 containerd[1426]: time="2024-07-02T00:02:27.868906497Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 00:02:27.869500 containerd[1426]: time="2024-07-02T00:02:27.869077337Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:02:27.869791 containerd[1426]: time="2024-07-02T00:02:27.869721377Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:02:27.869791 
containerd[1426]: time="2024-07-02T00:02:27.869769857Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:02:27.869791 containerd[1426]: time="2024-07-02T00:02:27.869787897Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 00:02:27.869860 containerd[1426]: time="2024-07-02T00:02:27.869800057Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:02:27.869860 containerd[1426]: time="2024-07-02T00:02:27.869811537Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 00:02:27.871092 containerd[1426]: time="2024-07-02T00:02:27.870191017Z" level=info msg="Start subscribing containerd event" Jul 2 00:02:27.871092 containerd[1426]: time="2024-07-02T00:02:27.870597777Z" level=info msg="Start recovering state" Jul 2 00:02:27.871092 containerd[1426]: time="2024-07-02T00:02:27.870683577Z" level=info msg="Start event monitor" Jul 2 00:02:27.871092 containerd[1426]: time="2024-07-02T00:02:27.870696457Z" level=info msg="Start snapshots syncer" Jul 2 00:02:27.871092 containerd[1426]: time="2024-07-02T00:02:27.871095857Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:02:27.871254 containerd[1426]: time="2024-07-02T00:02:27.871106057Z" level=info msg="Start streaming server" Jul 2 00:02:27.871916 containerd[1426]: time="2024-07-02T00:02:27.871878697Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:02:27.871972 containerd[1426]: time="2024-07-02T00:02:27.871954577Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 00:02:27.873505 containerd[1426]: time="2024-07-02T00:02:27.872062777Z" level=info msg="containerd successfully booted in 0.038897s" Jul 2 00:02:27.872109 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 00:02:28.015265 tar[1424]: linux-arm64/LICENSE Jul 2 00:02:28.016775 tar[1424]: linux-arm64/README.md Jul 2 00:02:28.029052 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 00:02:28.572297 systemd-networkd[1368]: eth0: Gained IPv6LL Jul 2 00:02:28.575435 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 00:02:28.579621 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 00:02:28.597498 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 2 00:02:28.600534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:02:28.603138 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 00:02:28.622570 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 00:02:28.628487 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 2 00:02:28.630216 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 2 00:02:28.631987 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 00:02:28.995578 sshd_keygen[1419]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 00:02:29.015307 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 00:02:29.028459 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Jul 2 00:02:29.034193 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 00:02:29.035268 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 00:02:29.038081 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 00:02:29.052665 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 00:02:29.055906 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 00:02:29.058032 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 2 00:02:29.059513 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 00:02:29.192755 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:02:29.194227 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 00:02:29.197532 (kubelet)[1515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:02:29.199421 systemd[1]: Startup finished in 613ms (kernel) + 4.463s (initrd) + 3.580s (userspace) = 8.658s. Jul 2 00:02:29.857228 kubelet[1515]: E0702 00:02:29.857108 1515 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:02:29.860065 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:02:29.860229 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:02:34.192006 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 00:02:34.193245 systemd[1]: Started sshd@0-10.0.0.44:22-10.0.0.1:58214.service - OpenSSH per-connection server daemon (10.0.0.1:58214). Jul 2 00:02:34.249854 sshd[1530]: Accepted publickey for core from 10.0.0.1 port 58214 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:02:34.251935 sshd[1530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:34.271617 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 00:02:34.278442 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 00:02:34.280439 systemd-logind[1416]: New session 1 of user core. Jul 2 00:02:34.291503 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 00:02:34.293966 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 00:02:34.301753 (systemd)[1534]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:34.385550 systemd[1534]: Queued start job for default target default.target. Jul 2 00:02:34.397276 systemd[1534]: Created slice app.slice - User Application Slice. Jul 2 00:02:34.397308 systemd[1534]: Reached target paths.target - Paths. Jul 2 00:02:34.397322 systemd[1534]: Reached target timers.target - Timers. Jul 2 00:02:34.398647 systemd[1534]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 00:02:34.410007 systemd[1534]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 00:02:34.410135 systemd[1534]: Reached target sockets.target - Sockets. Jul 2 00:02:34.410173 systemd[1534]: Reached target basic.target - Basic System. Jul 2 00:02:34.410216 systemd[1534]: Reached target default.target - Main User Target. 
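The kubelet exit above (run.go:74, missing /var/lib/kubelet/config.yaml) is the normal pre-bootstrap state: the unit keeps getting restarted (the restart counters appear later in this log) until kubeadm init or kubeadm join writes that config file. A throwaway sketch of the same wait, with the path taken from the error message:

    # Sketch: block until the kubelet config that bootstrap is expected to
    # write shows up. Path is from the error above; the timeout is arbitrary.
    import time
    from pathlib import Path

    config = Path("/var/lib/kubelet/config.yaml")
    deadline = time.monotonic() + 300

    while not config.exists():
        if time.monotonic() > deadline:
            raise SystemExit(f"{config} still missing; bootstrap has not run yet")
        time.sleep(10)
    print(f"{config} present ({config.stat().st_size} bytes)")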
Jul 2 00:02:34.410244 systemd[1534]: Startup finished in 101ms. Jul 2 00:02:34.410500 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 00:02:34.412099 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 00:02:34.478638 systemd[1]: Started sshd@1-10.0.0.44:22-10.0.0.1:58230.service - OpenSSH per-connection server daemon (10.0.0.1:58230). Jul 2 00:02:34.535284 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 58230 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:02:34.538699 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:34.542957 systemd-logind[1416]: New session 2 of user core. Jul 2 00:02:34.560346 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 00:02:34.614163 sshd[1545]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:34.622662 systemd[1]: sshd@1-10.0.0.44:22-10.0.0.1:58230.service: Deactivated successfully. Jul 2 00:02:34.625160 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 00:02:34.626909 systemd-logind[1416]: Session 2 logged out. Waiting for processes to exit. Jul 2 00:02:34.628761 systemd[1]: Started sshd@2-10.0.0.44:22-10.0.0.1:58246.service - OpenSSH per-connection server daemon (10.0.0.1:58246). Jul 2 00:02:34.629547 systemd-logind[1416]: Removed session 2. Jul 2 00:02:34.667018 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 58246 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:02:34.668462 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:34.673127 systemd-logind[1416]: New session 3 of user core. Jul 2 00:02:34.686428 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 00:02:34.737956 sshd[1552]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:34.755503 systemd[1]: sshd@2-10.0.0.44:22-10.0.0.1:58246.service: Deactivated successfully. Jul 2 00:02:34.757790 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 00:02:34.759284 systemd-logind[1416]: Session 3 logged out. Waiting for processes to exit. Jul 2 00:02:34.768679 systemd[1]: Started sshd@3-10.0.0.44:22-10.0.0.1:58250.service - OpenSSH per-connection server daemon (10.0.0.1:58250). Jul 2 00:02:34.770400 systemd-logind[1416]: Removed session 3. Jul 2 00:02:34.802937 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 58250 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:02:34.804276 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:34.808910 systemd-logind[1416]: New session 4 of user core. Jul 2 00:02:34.815409 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 00:02:34.869001 sshd[1559]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:34.881749 systemd[1]: sshd@3-10.0.0.44:22-10.0.0.1:58250.service: Deactivated successfully. Jul 2 00:02:34.884693 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 00:02:34.886029 systemd-logind[1416]: Session 4 logged out. Waiting for processes to exit. Jul 2 00:02:34.887465 systemd[1]: Started sshd@4-10.0.0.44:22-10.0.0.1:58264.service - OpenSSH per-connection server daemon (10.0.0.1:58264). Jul 2 00:02:34.888303 systemd-logind[1416]: Removed session 4. 
Jul 2 00:02:34.937596 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 58264 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:02:34.939225 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:34.943532 systemd-logind[1416]: New session 5 of user core. Jul 2 00:02:34.950351 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 00:02:35.019007 sudo[1569]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 00:02:35.021056 sudo[1569]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:02:35.032979 sudo[1569]: pam_unix(sudo:session): session closed for user root Jul 2 00:02:35.036756 sshd[1566]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:35.045655 systemd[1]: sshd@4-10.0.0.44:22-10.0.0.1:58264.service: Deactivated successfully. Jul 2 00:02:35.047393 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 00:02:35.050226 systemd-logind[1416]: Session 5 logged out. Waiting for processes to exit. Jul 2 00:02:35.050881 systemd[1]: Started sshd@5-10.0.0.44:22-10.0.0.1:58274.service - OpenSSH per-connection server daemon (10.0.0.1:58274). Jul 2 00:02:35.051775 systemd-logind[1416]: Removed session 5. Jul 2 00:02:35.088424 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 58274 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:02:35.089825 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:35.094217 systemd-logind[1416]: New session 6 of user core. Jul 2 00:02:35.108363 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 00:02:35.160056 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 00:02:35.160406 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:02:35.163348 sudo[1578]: pam_unix(sudo:session): session closed for user root Jul 2 00:02:35.167876 sudo[1577]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 00:02:35.168133 sudo[1577]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:02:35.186432 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 00:02:35.187853 auditctl[1581]: No rules Jul 2 00:02:35.188171 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 00:02:35.190170 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 00:02:35.192416 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:02:35.217110 augenrules[1599]: No rules Jul 2 00:02:35.218456 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:02:35.219509 sudo[1577]: pam_unix(sudo:session): session closed for user root Jul 2 00:02:35.222140 sshd[1574]: pam_unix(sshd:session): session closed for user core Jul 2 00:02:35.239586 systemd[1]: sshd@5-10.0.0.44:22-10.0.0.1:58274.service: Deactivated successfully. Jul 2 00:02:35.241278 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 00:02:35.242466 systemd-logind[1416]: Session 6 logged out. Waiting for processes to exit. Jul 2 00:02:35.243584 systemd[1]: Started sshd@6-10.0.0.44:22-10.0.0.1:58286.service - OpenSSH per-connection server daemon (10.0.0.1:58286). Jul 2 00:02:35.244279 systemd-logind[1416]: Removed session 6. 
Jul 2 00:02:35.283573 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 58286 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:02:35.284785 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:02:35.289277 systemd-logind[1416]: New session 7 of user core. Jul 2 00:02:35.300340 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 00:02:35.352819 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 00:02:35.353210 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:02:35.471410 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 00:02:35.471491 (dockerd)[1620]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 00:02:35.719630 dockerd[1620]: time="2024-07-02T00:02:35.719495857Z" level=info msg="Starting up" Jul 2 00:02:35.833855 dockerd[1620]: time="2024-07-02T00:02:35.833804257Z" level=info msg="Loading containers: start." Jul 2 00:02:35.919186 kernel: Initializing XFRM netlink socket Jul 2 00:02:35.995309 systemd-networkd[1368]: docker0: Link UP Jul 2 00:02:36.006129 dockerd[1620]: time="2024-07-02T00:02:36.006085377Z" level=info msg="Loading containers: done." Jul 2 00:02:36.078968 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3021456989-merged.mount: Deactivated successfully. Jul 2 00:02:36.087162 dockerd[1620]: time="2024-07-02T00:02:36.086485257Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 00:02:36.087162 dockerd[1620]: time="2024-07-02T00:02:36.086702177Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 00:02:36.087162 dockerd[1620]: time="2024-07-02T00:02:36.086830937Z" level=info msg="Daemon has completed initialization" Jul 2 00:02:36.115833 dockerd[1620]: time="2024-07-02T00:02:36.115756777Z" level=info msg="API listen on /run/docker.sock" Jul 2 00:02:36.116477 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 00:02:36.766739 containerd[1426]: time="2024-07-02T00:02:36.766691017Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jul 2 00:02:37.408104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1557255908.mount: Deactivated successfully. 
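With "API listen on /run/docker.sock" logged, the Docker Engine API is reachable over that UNIX socket. A stdlib-only sketch that asks the daemon for the same version and commit it printed above (docker version on the host reports the same data):

    # Sketch: query the Docker Engine API over /run/docker.sock with only the
    # standard library. GET /version should echo the 24.0.9 / fca702d... build
    # that dockerd logged above.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")  # host header is irrelevant here
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    print(json.load(conn.getresponse()))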
Jul 2 00:02:38.546715 containerd[1426]: time="2024-07-02T00:02:38.546662217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:38.547243 containerd[1426]: time="2024-07-02T00:02:38.547204657Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=32256349" Jul 2 00:02:38.548068 containerd[1426]: time="2024-07-02T00:02:38.548039057Z" level=info msg="ImageCreate event name:\"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:38.551131 containerd[1426]: time="2024-07-02T00:02:38.551069097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:38.552429 containerd[1426]: time="2024-07-02T00:02:38.552256737Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"32253147\" in 1.785518s" Jul 2 00:02:38.552429 containerd[1426]: time="2024-07-02T00:02:38.552301257Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\"" Jul 2 00:02:38.571876 containerd[1426]: time="2024-07-02T00:02:38.571840697Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jul 2 00:02:40.110511 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 00:02:40.119456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:02:40.210610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:02:40.215215 (kubelet)[1833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:02:40.257607 kubelet[1833]: E0702 00:02:40.257547 1833 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:02:40.262095 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:02:40.262262 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 2 00:02:40.289315 containerd[1426]: time="2024-07-02T00:02:40.289266137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:40.289887 containerd[1426]: time="2024-07-02T00:02:40.289848497Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=29228086" Jul 2 00:02:40.290620 containerd[1426]: time="2024-07-02T00:02:40.290595297Z" level=info msg="ImageCreate event name:\"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:40.293586 containerd[1426]: time="2024-07-02T00:02:40.293528697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:40.294698 containerd[1426]: time="2024-07-02T00:02:40.294668057Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"30685210\" in 1.72258892s" Jul 2 00:02:40.294749 containerd[1426]: time="2024-07-02T00:02:40.294703657Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\"" Jul 2 00:02:40.315741 containerd[1426]: time="2024-07-02T00:02:40.315694297Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jul 2 00:02:41.433359 containerd[1426]: time="2024-07-02T00:02:41.433303497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:41.433825 containerd[1426]: time="2024-07-02T00:02:41.433796617Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=15578350" Jul 2 00:02:41.434647 containerd[1426]: time="2024-07-02T00:02:41.434589137Z" level=info msg="ImageCreate event name:\"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:41.437407 containerd[1426]: time="2024-07-02T00:02:41.437368097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:41.438735 containerd[1426]: time="2024-07-02T00:02:41.438703777Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"17035492\" in 1.12297344s" Jul 2 00:02:41.438735 containerd[1426]: time="2024-07-02T00:02:41.438734737Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\"" Jul 2 00:02:41.458457 containerd[1426]: 
time="2024-07-02T00:02:41.458414097Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jul 2 00:02:42.363566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount294091388.mount: Deactivated successfully. Jul 2 00:02:42.825623 containerd[1426]: time="2024-07-02T00:02:42.825499577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:42.827675 containerd[1426]: time="2024-07-02T00:02:42.827638217Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=25052712" Jul 2 00:02:42.828841 containerd[1426]: time="2024-07-02T00:02:42.828814657Z" level=info msg="ImageCreate event name:\"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:42.830781 containerd[1426]: time="2024-07-02T00:02:42.830739417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:42.831498 containerd[1426]: time="2024-07-02T00:02:42.831455937Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"25051729\" in 1.3730002s" Jul 2 00:02:42.831498 containerd[1426]: time="2024-07-02T00:02:42.831496017Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\"" Jul 2 00:02:42.852007 containerd[1426]: time="2024-07-02T00:02:42.851960017Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 00:02:43.453828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount228236265.mount: Deactivated successfully. 
Jul 2 00:02:44.084009 containerd[1426]: time="2024-07-02T00:02:44.083945497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:44.084757 containerd[1426]: time="2024-07-02T00:02:44.084701897Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jul 2 00:02:44.085604 containerd[1426]: time="2024-07-02T00:02:44.085571817Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:44.088533 containerd[1426]: time="2024-07-02T00:02:44.088480937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:44.090832 containerd[1426]: time="2024-07-02T00:02:44.090786897Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.23878032s" Jul 2 00:02:44.090832 containerd[1426]: time="2024-07-02T00:02:44.090832857Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jul 2 00:02:44.110350 containerd[1426]: time="2024-07-02T00:02:44.110304497Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 00:02:44.531028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3150782074.mount: Deactivated successfully. 
Jul 2 00:02:44.536130 containerd[1426]: time="2024-07-02T00:02:44.536082417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:44.537771 containerd[1426]: time="2024-07-02T00:02:44.537662337Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jul 2 00:02:44.542416 containerd[1426]: time="2024-07-02T00:02:44.542339257Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:44.545707 containerd[1426]: time="2024-07-02T00:02:44.545632737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:44.546435 containerd[1426]: time="2024-07-02T00:02:44.546406977Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 436.06572ms" Jul 2 00:02:44.546505 containerd[1426]: time="2024-07-02T00:02:44.546439657Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jul 2 00:02:44.568047 containerd[1426]: time="2024-07-02T00:02:44.567995177Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 00:02:45.061576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount56581231.mount: Deactivated successfully. Jul 2 00:02:46.334709 containerd[1426]: time="2024-07-02T00:02:46.334644737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:46.335229 containerd[1426]: time="2024-07-02T00:02:46.335190377Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Jul 2 00:02:46.338170 containerd[1426]: time="2024-07-02T00:02:46.335924857Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:46.340687 containerd[1426]: time="2024-07-02T00:02:46.339379617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:02:46.340908 containerd[1426]: time="2024-07-02T00:02:46.340690457Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 1.7726334s" Jul 2 00:02:46.340964 containerd[1426]: time="2024-07-02T00:02:46.340908417Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jul 2 00:02:50.512754 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
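The pulls from kube-apiserver:v1.29.6 down to etcd:3.5.10-0 above are the usual control-plane image set being pre-fetched before the cluster exists. Tallying the "bytes read" counters containerd reported gives a rough feel for the download volume (compressed transfer, not unpacked size):

    # Sketch: sum the "bytes read" figures from the pull messages above.
    # Values are copied verbatim from this log; other nodes will differ.
    pulls = {
        "registry.k8s.io/kube-apiserver:v1.29.6": 32_256_349,
        "registry.k8s.io/kube-controller-manager:v1.29.6": 29_228_086,
        "registry.k8s.io/kube-scheduler:v1.29.6": 15_578_350,
        "registry.k8s.io/kube-proxy:v1.29.6": 25_052_712,
        "registry.k8s.io/coredns/coredns:v1.11.1": 16_485_383,
        "registry.k8s.io/pause:3.9": 268_823,
        "registry.k8s.io/etcd:3.5.10-0": 65_200_788,
    }
    total = sum(pulls.values())
    print(f"{len(pulls)} images, ~{total / 2**20:.0f} MiB read")  # ~176 MiB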
Jul 2 00:02:50.522393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:02:50.631675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:02:50.636714 (kubelet)[2059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:02:50.682798 kubelet[2059]: E0702 00:02:50.682733 2059 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:02:50.685306 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:02:50.685436 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:02:51.297422 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:02:51.311520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:02:51.331472 systemd[1]: Reloading requested from client PID 2074 ('systemctl') (unit session-7.scope)... Jul 2 00:02:51.331497 systemd[1]: Reloading... Jul 2 00:02:51.406200 zram_generator::config[2114]: No configuration found. Jul 2 00:02:51.535298 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:02:51.591042 systemd[1]: Reloading finished in 258 ms. Jul 2 00:02:51.635744 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 00:02:51.635811 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 00:02:51.636035 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:02:51.638688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:02:51.737060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:02:51.741543 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:02:51.788424 kubelet[2157]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:02:51.788424 kubelet[2157]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:02:51.788424 kubelet[2157]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
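The three deprecation warnings above mean the kubelet is still being launched with --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir on its command line (injected here through the unit's environment variables); upstream wants those settings moved into the file given to --config. A quick scan for the stragglers, where the drop-in directories are typical kubeadm locations and an assumption, not something this log states:

    # Sketch: report deprecated kubelet flags still present in systemd drop-ins.
    # The directories are assumed (common kubeadm layouts), not taken from the log.
    from pathlib import Path

    deprecated = (
        "--container-runtime-endpoint",
        "--pod-infra-container-image",
        "--volume-plugin-dir",
    )
    drop_in_dirs = [
        Path("/etc/systemd/system/kubelet.service.d"),
        Path("/usr/lib/systemd/system/kubelet.service.d"),
    ]
    for d in drop_in_dirs:
        if not d.is_dir():
            continue
        for conf in sorted(d.glob("*.conf")):
            text = conf.read_text()
            for flag in deprecated:
                if flag in text:
                    print(f"{conf}: still sets {flag}")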
Jul 2 00:02:51.788876 kubelet[2157]: I0702 00:02:51.788464 2157 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:02:52.242336 kubelet[2157]: I0702 00:02:52.242302 2157 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 00:02:52.242485 kubelet[2157]: I0702 00:02:52.242474 2157 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:02:52.242750 kubelet[2157]: I0702 00:02:52.242733 2157 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 00:02:52.266930 kubelet[2157]: E0702 00:02:52.266890 2157 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.44:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.44:6443: connect: connection refused Jul 2 00:02:52.267239 kubelet[2157]: I0702 00:02:52.267185 2157 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:02:52.275978 kubelet[2157]: I0702 00:02:52.275933 2157 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 00:02:52.277055 kubelet[2157]: I0702 00:02:52.277012 2157 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:02:52.278422 kubelet[2157]: I0702 00:02:52.277521 2157 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:02:52.278422 kubelet[2157]: I0702 00:02:52.277563 2157 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:02:52.278422 kubelet[2157]: I0702 00:02:52.277574 2157 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:02:52.278422 kubelet[2157]: I0702 00:02:52.277693 2157 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:02:52.281888 kubelet[2157]: I0702 00:02:52.281849 2157 kubelet.go:396] "Attempting to sync node with API server" Jul 2 00:02:52.281888 kubelet[2157]: I0702 
00:02:52.281893 2157 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:02:52.282563 kubelet[2157]: I0702 00:02:52.281921 2157 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:02:52.282563 kubelet[2157]: I0702 00:02:52.281937 2157 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:02:52.282563 kubelet[2157]: W0702 00:02:52.282487 2157 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Jul 2 00:02:52.282563 kubelet[2157]: E0702 00:02:52.282532 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Jul 2 00:02:52.282766 kubelet[2157]: W0702 00:02:52.282566 2157 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Jul 2 00:02:52.282766 kubelet[2157]: E0702 00:02:52.282602 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Jul 2 00:02:52.286120 kubelet[2157]: I0702 00:02:52.283494 2157 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:02:52.286120 kubelet[2157]: I0702 00:02:52.283962 2157 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:02:52.286120 kubelet[2157]: W0702 00:02:52.284066 2157 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
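Every "dial tcp 10.0.0.44:6443: connect: connection refused" above is a client-go reflector retrying against an API server that has not been created yet; it only comes up once the static pods added from /etc/kubernetes/manifests (see "Adding static pod path" above) are running. A throwaway probe for that same condition, with the address copied from the log:

    # Sketch: wait until something accepts TCP connections on the API server
    # address the kubelet keeps retrying above (10.0.0.44:6443 in this log).
    import socket
    import time

    addr = ("10.0.0.44", 6443)
    while True:
        try:
            with socket.create_connection(addr, timeout=2):
                print("kube-apiserver is accepting connections")
                break
        except OSError as err:
            print(f"not reachable yet ({err}); retrying")
            time.sleep(5)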
Jul 2 00:02:52.286120 kubelet[2157]: I0702 00:02:52.284870 2157 server.go:1256] "Started kubelet" Jul 2 00:02:52.286556 kubelet[2157]: I0702 00:02:52.286536 2157 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:02:52.288214 kubelet[2157]: I0702 00:02:52.287358 2157 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:02:52.288214 kubelet[2157]: I0702 00:02:52.287510 2157 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:02:52.288214 kubelet[2157]: I0702 00:02:52.287613 2157 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:02:52.288214 kubelet[2157]: I0702 00:02:52.287678 2157 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:02:52.288214 kubelet[2157]: W0702 00:02:52.287974 2157 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Jul 2 00:02:52.288214 kubelet[2157]: E0702 00:02:52.288015 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Jul 2 00:02:52.288214 kubelet[2157]: I0702 00:02:52.288180 2157 server.go:461] "Adding debug handlers to kubelet server" Jul 2 00:02:52.289046 kubelet[2157]: E0702 00:02:52.289008 2157 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.44:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.44:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17de3c72fdd7e8c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-07-02 00:02:52.284831937 +0000 UTC m=+0.539764561,LastTimestamp:2024-07-02 00:02:52.284831937 +0000 UTC m=+0.539764561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 2 00:02:52.289174 kubelet[2157]: I0702 00:02:52.289114 2157 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:02:52.289334 kubelet[2157]: I0702 00:02:52.289308 2157 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:02:52.290132 kubelet[2157]: I0702 00:02:52.290107 2157 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:02:52.290243 kubelet[2157]: I0702 00:02:52.290221 2157 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:02:52.290283 kubelet[2157]: E0702 00:02:52.290247 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="200ms" Jul 2 00:02:52.291025 kubelet[2157]: E0702 00:02:52.290999 2157 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:02:52.291475 kubelet[2157]: I0702 00:02:52.291452 2157 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:02:52.303733 kubelet[2157]: I0702 00:02:52.303632 2157 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:02:52.303733 kubelet[2157]: I0702 00:02:52.303638 2157 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:02:52.303733 kubelet[2157]: I0702 00:02:52.303664 2157 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:02:52.303733 kubelet[2157]: I0702 00:02:52.303682 2157 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:02:52.305089 kubelet[2157]: I0702 00:02:52.305062 2157 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 00:02:52.305089 kubelet[2157]: I0702 00:02:52.305089 2157 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:02:52.305209 kubelet[2157]: I0702 00:02:52.305106 2157 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 00:02:52.305209 kubelet[2157]: E0702 00:02:52.305204 2157 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:02:52.305992 kubelet[2157]: W0702 00:02:52.305879 2157 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Jul 2 00:02:52.305992 kubelet[2157]: E0702 00:02:52.305930 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Jul 2 00:02:52.390910 kubelet[2157]: I0702 00:02:52.390871 2157 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:02:52.391369 kubelet[2157]: E0702 00:02:52.391336 2157 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Jul 2 00:02:52.400187 kubelet[2157]: I0702 00:02:52.400140 2157 policy_none.go:49] "None policy: Start" Jul 2 00:02:52.400826 kubelet[2157]: I0702 00:02:52.400797 2157 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:02:52.400866 kubelet[2157]: I0702 00:02:52.400841 2157 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:02:52.406031 kubelet[2157]: E0702 00:02:52.405985 2157 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:02:52.414176 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 00:02:52.429559 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 2 00:02:52.440679 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
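The kubepods.slice, kubepods-besteffort.slice and kubepods-burstable.slice created above are the per-QoS cgroup tree the kubelet builds when it runs with the systemd cgroup driver ("CgroupDriver":"systemd" in the node config dump earlier, matching SystemdCgroup:true in containerd's runc options). On a cgroup v2 host they can be inspected straight from the filesystem; a sketch, assuming the usual /sys/fs/cgroup mount point:

    # Sketch: list the kubelet's QoS slices and any per-pod slices nested in
    # them. Assumes cgroup v2 mounted at /sys/fs/cgroup.
    from pathlib import Path

    root = Path("/sys/fs/cgroup/kubepods.slice")
    for slice_dir in sorted(p for p in root.rglob("*.slice") if p.is_dir()):
        print(slice_dir.relative_to(root.parent))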
Jul 2 00:02:52.441938 kubelet[2157]: I0702 00:02:52.441916 2157 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:02:52.442255 kubelet[2157]: I0702 00:02:52.442239 2157 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:02:52.443568 kubelet[2157]: E0702 00:02:52.443546 2157 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 00:02:52.491495 kubelet[2157]: E0702 00:02:52.491452 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="400ms" Jul 2 00:02:52.593723 kubelet[2157]: I0702 00:02:52.592996 2157 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:02:52.593830 kubelet[2157]: E0702 00:02:52.593776 2157 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Jul 2 00:02:52.607072 kubelet[2157]: I0702 00:02:52.607043 2157 topology_manager.go:215] "Topology Admit Handler" podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 00:02:52.608173 kubelet[2157]: I0702 00:02:52.608052 2157 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 00:02:52.609187 kubelet[2157]: I0702 00:02:52.609021 2157 topology_manager.go:215] "Topology Admit Handler" podUID="9f235dc1ca0cba4f5c00caee2a01d94e" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 00:02:52.613848 systemd[1]: Created slice kubepods-burstable-pod42b008e702ec2a5b396aebedf13804b4.slice - libcontainer container kubepods-burstable-pod42b008e702ec2a5b396aebedf13804b4.slice. Jul 2 00:02:52.631581 systemd[1]: Created slice kubepods-burstable-pod593d08bacb1d5de22dcb8f5224a99e3c.slice - libcontainer container kubepods-burstable-pod593d08bacb1d5de22dcb8f5224a99e3c.slice. Jul 2 00:02:52.635328 systemd[1]: Created slice kubepods-burstable-pod9f235dc1ca0cba4f5c00caee2a01d94e.slice - libcontainer container kubepods-burstable-pod9f235dc1ca0cba4f5c00caee2a01d94e.slice. 
Jul 2 00:02:52.690280 kubelet[2157]: I0702 00:02:52.690238 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost" Jul 2 00:02:52.690280 kubelet[2157]: I0702 00:02:52.690283 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:02:52.690447 kubelet[2157]: I0702 00:02:52.690307 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:02:52.690447 kubelet[2157]: I0702 00:02:52.690330 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:02:52.690447 kubelet[2157]: I0702 00:02:52.690349 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f235dc1ca0cba4f5c00caee2a01d94e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9f235dc1ca0cba4f5c00caee2a01d94e\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:02:52.690447 kubelet[2157]: I0702 00:02:52.690369 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f235dc1ca0cba4f5c00caee2a01d94e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9f235dc1ca0cba4f5c00caee2a01d94e\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:02:52.690447 kubelet[2157]: I0702 00:02:52.690388 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:02:52.690550 kubelet[2157]: I0702 00:02:52.690410 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:02:52.690550 kubelet[2157]: I0702 00:02:52.690431 2157 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f235dc1ca0cba4f5c00caee2a01d94e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9f235dc1ca0cba4f5c00caee2a01d94e\") " 
pod="kube-system/kube-apiserver-localhost" Jul 2 00:02:52.892772 kubelet[2157]: E0702 00:02:52.892643 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="800ms" Jul 2 00:02:52.931870 kubelet[2157]: E0702 00:02:52.931808 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:02:52.932485 containerd[1426]: time="2024-07-02T00:02:52.932449617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,}" Jul 2 00:02:52.933656 kubelet[2157]: E0702 00:02:52.933637 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:02:52.934134 containerd[1426]: time="2024-07-02T00:02:52.933919017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,}" Jul 2 00:02:52.937821 kubelet[2157]: E0702 00:02:52.937617 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:02:52.938014 containerd[1426]: time="2024-07-02T00:02:52.937974377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9f235dc1ca0cba4f5c00caee2a01d94e,Namespace:kube-system,Attempt:0,}" Jul 2 00:02:53.000548 kubelet[2157]: I0702 00:02:53.000509 2157 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:02:53.000864 kubelet[2157]: E0702 00:02:53.000835 2157 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Jul 2 00:02:53.184266 kubelet[2157]: W0702 00:02:53.184108 2157 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Jul 2 00:02:53.184266 kubelet[2157]: E0702 00:02:53.184186 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.44:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Jul 2 00:02:53.336011 kubelet[2157]: W0702 00:02:53.335919 2157 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Jul 2 00:02:53.336011 kubelet[2157]: E0702 00:02:53.335986 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.44:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Jul 2 00:02:53.388870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3776363611.mount: Deactivated successfully. 
Jul 2 00:02:53.396049 containerd[1426]: time="2024-07-02T00:02:53.395997857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:02:53.399619 containerd[1426]: time="2024-07-02T00:02:53.399473897Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:02:53.400848 containerd[1426]: time="2024-07-02T00:02:53.400806657Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:02:53.401618 containerd[1426]: time="2024-07-02T00:02:53.401569457Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 2 00:02:53.402287 containerd[1426]: time="2024-07-02T00:02:53.402256857Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:02:53.404242 containerd[1426]: time="2024-07-02T00:02:53.403311817Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:02:53.405730 containerd[1426]: time="2024-07-02T00:02:53.405465697Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:02:53.407201 containerd[1426]: time="2024-07-02T00:02:53.407164737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:02:53.408357 containerd[1426]: time="2024-07-02T00:02:53.408321497Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 475.43456ms" Jul 2 00:02:53.409889 containerd[1426]: time="2024-07-02T00:02:53.409844817Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 475.8506ms" Jul 2 00:02:53.414622 containerd[1426]: time="2024-07-02T00:02:53.414565137Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 476.51664ms" Jul 2 00:02:53.521388 kubelet[2157]: E0702 00:02:53.521274 2157 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.44:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.44:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17de3c72fdd7e8c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-07-02 00:02:52.284831937 +0000 UTC m=+0.539764561,LastTimestamp:2024-07-02 00:02:52.284831937 +0000 UTC m=+0.539764561,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 2 00:02:53.575892 containerd[1426]: time="2024-07-02T00:02:53.575637657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:02:53.575892 containerd[1426]: time="2024-07-02T00:02:53.575766297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:02:53.576452 containerd[1426]: time="2024-07-02T00:02:53.576373657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:02:53.576452 containerd[1426]: time="2024-07-02T00:02:53.576421217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:02:53.576617 containerd[1426]: time="2024-07-02T00:02:53.576448017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:02:53.576617 containerd[1426]: time="2024-07-02T00:02:53.576462777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:02:53.576870 containerd[1426]: time="2024-07-02T00:02:53.575793777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:02:53.577002 containerd[1426]: time="2024-07-02T00:02:53.576949417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:02:53.579508 containerd[1426]: time="2024-07-02T00:02:53.579138017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:02:53.579508 containerd[1426]: time="2024-07-02T00:02:53.579206097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:02:53.579508 containerd[1426]: time="2024-07-02T00:02:53.579219657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:02:53.579508 containerd[1426]: time="2024-07-02T00:02:53.579228497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:02:53.598348 systemd[1]: Started cri-containerd-bc23db844c70a6a7991d7f7e9f986cf37fa36764b573c962d18bfa7ff5cb2828.scope - libcontainer container bc23db844c70a6a7991d7f7e9f986cf37fa36764b573c962d18bfa7ff5cb2828. Jul 2 00:02:53.599553 systemd[1]: Started cri-containerd-fb81dd41cbeb4e5aa6550301d4128cac0b8a6f0b84d6b5358827565f35f3dd2a.scope - libcontainer container fb81dd41cbeb4e5aa6550301d4128cac0b8a6f0b84d6b5358827565f35f3dd2a. 
Jul 2 00:02:53.603332 systemd[1]: Started cri-containerd-33257c1c34234077cd48799bea57afca00680fbba9a25977bfd534587b6e6f98.scope - libcontainer container 33257c1c34234077cd48799bea57afca00680fbba9a25977bfd534587b6e6f98. Jul 2 00:02:53.634915 containerd[1426]: time="2024-07-02T00:02:53.634759937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9f235dc1ca0cba4f5c00caee2a01d94e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc23db844c70a6a7991d7f7e9f986cf37fa36764b573c962d18bfa7ff5cb2828\"" Jul 2 00:02:53.636551 kubelet[2157]: E0702 00:02:53.636516 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:02:53.645109 containerd[1426]: time="2024-07-02T00:02:53.644867337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb81dd41cbeb4e5aa6550301d4128cac0b8a6f0b84d6b5358827565f35f3dd2a\"" Jul 2 00:02:53.645839 kubelet[2157]: E0702 00:02:53.645740 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:02:53.647226 containerd[1426]: time="2024-07-02T00:02:53.646595057Z" level=info msg="CreateContainer within sandbox \"bc23db844c70a6a7991d7f7e9f986cf37fa36764b573c962d18bfa7ff5cb2828\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:02:53.647226 containerd[1426]: time="2024-07-02T00:02:53.646794697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"33257c1c34234077cd48799bea57afca00680fbba9a25977bfd534587b6e6f98\"" Jul 2 00:02:53.647438 kubelet[2157]: E0702 00:02:53.647420 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:02:53.649102 containerd[1426]: time="2024-07-02T00:02:53.649075497Z" level=info msg="CreateContainer within sandbox \"33257c1c34234077cd48799bea57afca00680fbba9a25977bfd534587b6e6f98\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:02:53.649633 containerd[1426]: time="2024-07-02T00:02:53.649607657Z" level=info msg="CreateContainer within sandbox \"fb81dd41cbeb4e5aa6550301d4128cac0b8a6f0b84d6b5358827565f35f3dd2a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:02:53.675513 kubelet[2157]: W0702 00:02:53.675439 2157 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Jul 2 00:02:53.675513 kubelet[2157]: E0702 00:02:53.675512 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.44:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Jul 2 00:02:53.694031 kubelet[2157]: E0702 00:02:53.693995 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.44:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.44:6443: connect: connection refused" interval="1.6s" Jul 2 00:02:53.753625 kubelet[2157]: W0702 00:02:53.753534 2157 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Jul 2 00:02:53.753625 kubelet[2157]: E0702 00:02:53.753598 2157 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.44:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.44:6443: connect: connection refused Jul 2 00:02:53.797516 containerd[1426]: time="2024-07-02T00:02:53.797388657Z" level=info msg="CreateContainer within sandbox \"bc23db844c70a6a7991d7f7e9f986cf37fa36764b573c962d18bfa7ff5cb2828\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e557dd72d4004f2299939452124e02d7f87552cc5823d855d59f022308602bc5\"" Jul 2 00:02:53.798476 containerd[1426]: time="2024-07-02T00:02:53.798445057Z" level=info msg="StartContainer for \"e557dd72d4004f2299939452124e02d7f87552cc5823d855d59f022308602bc5\"" Jul 2 00:02:53.803170 containerd[1426]: time="2024-07-02T00:02:53.803054817Z" level=info msg="CreateContainer within sandbox \"fb81dd41cbeb4e5aa6550301d4128cac0b8a6f0b84d6b5358827565f35f3dd2a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8de94d41749714890e1e2b33cb28ab0eda6be6f9cc131e9ebcdfe1834d189ca1\"" Jul 2 00:02:53.803280 kubelet[2157]: I0702 00:02:53.803075 2157 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:02:53.803973 kubelet[2157]: E0702 00:02:53.803543 2157 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.44:6443/api/v1/nodes\": dial tcp 10.0.0.44:6443: connect: connection refused" node="localhost" Jul 2 00:02:53.804016 containerd[1426]: time="2024-07-02T00:02:53.803426497Z" level=info msg="StartContainer for \"8de94d41749714890e1e2b33cb28ab0eda6be6f9cc131e9ebcdfe1834d189ca1\"" Jul 2 00:02:53.815744 containerd[1426]: time="2024-07-02T00:02:53.815700017Z" level=info msg="CreateContainer within sandbox \"33257c1c34234077cd48799bea57afca00680fbba9a25977bfd534587b6e6f98\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d76293a24d7ade398ec8ab5b426cc13aac23a8f0ddfce7195df5276a81e80b00\"" Jul 2 00:02:53.817189 containerd[1426]: time="2024-07-02T00:02:53.816344657Z" level=info msg="StartContainer for \"d76293a24d7ade398ec8ab5b426cc13aac23a8f0ddfce7195df5276a81e80b00\"" Jul 2 00:02:53.823344 systemd[1]: Started cri-containerd-e557dd72d4004f2299939452124e02d7f87552cc5823d855d59f022308602bc5.scope - libcontainer container e557dd72d4004f2299939452124e02d7f87552cc5823d855d59f022308602bc5. Jul 2 00:02:53.836311 systemd[1]: Started cri-containerd-8de94d41749714890e1e2b33cb28ab0eda6be6f9cc131e9ebcdfe1834d189ca1.scope - libcontainer container 8de94d41749714890e1e2b33cb28ab0eda6be6f9cc131e9ebcdfe1834d189ca1. Jul 2 00:02:53.840915 systemd[1]: Started cri-containerd-d76293a24d7ade398ec8ab5b426cc13aac23a8f0ddfce7195df5276a81e80b00.scope - libcontainer container d76293a24d7ade398ec8ab5b426cc13aac23a8f0ddfce7195df5276a81e80b00. 
Jul 2 00:02:53.874744 containerd[1426]: time="2024-07-02T00:02:53.874701657Z" level=info msg="StartContainer for \"e557dd72d4004f2299939452124e02d7f87552cc5823d855d59f022308602bc5\" returns successfully" Jul 2 00:02:53.905494 containerd[1426]: time="2024-07-02T00:02:53.905320457Z" level=info msg="StartContainer for \"d76293a24d7ade398ec8ab5b426cc13aac23a8f0ddfce7195df5276a81e80b00\" returns successfully" Jul 2 00:02:53.905494 containerd[1426]: time="2024-07-02T00:02:53.905434737Z" level=info msg="StartContainer for \"8de94d41749714890e1e2b33cb28ab0eda6be6f9cc131e9ebcdfe1834d189ca1\" returns successfully" Jul 2 00:02:54.313703 kubelet[2157]: E0702 00:02:54.313667 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:02:54.317286 kubelet[2157]: E0702 00:02:54.315736 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:02:54.318887 kubelet[2157]: E0702 00:02:54.318820 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:02:55.318541 kubelet[2157]: E0702 00:02:55.318512 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:02:55.405971 kubelet[2157]: I0702 00:02:55.405534 2157 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 00:02:56.057580 kubelet[2157]: E0702 00:02:56.057545 2157 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 00:02:56.119232 kubelet[2157]: I0702 00:02:56.119175 2157 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 00:02:56.284219 kubelet[2157]: I0702 00:02:56.284180 2157 apiserver.go:52] "Watching apiserver" Jul 2 00:02:56.287842 kubelet[2157]: I0702 00:02:56.287816 2157 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:02:56.587038 kubelet[2157]: E0702 00:02:56.586934 2157 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 2 00:02:56.587468 kubelet[2157]: E0702 00:02:56.587256 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:02:58.949998 systemd[1]: Reloading requested from client PID 2438 ('systemctl') (unit session-7.scope)... Jul 2 00:02:58.950013 systemd[1]: Reloading... Jul 2 00:02:59.015195 zram_generator::config[2475]: No configuration found. Jul 2 00:02:59.099097 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:02:59.168100 systemd[1]: Reloading finished in 217 ms. 
Jul 2 00:02:59.211626 kubelet[2157]: I0702 00:02:59.211533 2157 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:02:59.211591 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:02:59.225338 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:02:59.225576 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:02:59.233415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:02:59.333021 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:02:59.337411 (kubelet)[2517]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:02:59.380311 kubelet[2517]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:02:59.380311 kubelet[2517]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:02:59.380311 kubelet[2517]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:02:59.380695 kubelet[2517]: I0702 00:02:59.380373 2517 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:02:59.387194 kubelet[2517]: I0702 00:02:59.386461 2517 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 00:02:59.387194 kubelet[2517]: I0702 00:02:59.386496 2517 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:02:59.387194 kubelet[2517]: I0702 00:02:59.386736 2517 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 00:02:59.389286 kubelet[2517]: I0702 00:02:59.389249 2517 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:02:59.391333 kubelet[2517]: I0702 00:02:59.391295 2517 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:02:59.398417 kubelet[2517]: I0702 00:02:59.398386 2517 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:02:59.398586 kubelet[2517]: I0702 00:02:59.398570 2517 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:02:59.398770 kubelet[2517]: I0702 00:02:59.398747 2517 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:02:59.398878 kubelet[2517]: I0702 00:02:59.398774 2517 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:02:59.398878 kubelet[2517]: I0702 00:02:59.398783 2517 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:02:59.398878 kubelet[2517]: I0702 00:02:59.398813 2517 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:02:59.398997 kubelet[2517]: I0702 00:02:59.398913 2517 kubelet.go:396] "Attempting to sync node with API server" Jul 2 00:02:59.398997 kubelet[2517]: I0702 00:02:59.398927 2517 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:02:59.398997 kubelet[2517]: I0702 00:02:59.398948 2517 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:02:59.398997 kubelet[2517]: I0702 00:02:59.398957 2517 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:02:59.403692 kubelet[2517]: I0702 00:02:59.403651 2517 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:02:59.403867 kubelet[2517]: I0702 00:02:59.403848 2517 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:02:59.408698 kubelet[2517]: I0702 00:02:59.408639 2517 server.go:1256] "Started kubelet" Jul 2 00:02:59.411152 kubelet[2517]: I0702 00:02:59.409666 2517 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:02:59.411152 kubelet[2517]: I0702 00:02:59.409823 2517 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:02:59.413128 kubelet[2517]: I0702 00:02:59.413092 2517 server.go:461] "Adding debug handlers to kubelet server" Jul 2 00:02:59.413380 kubelet[2517]: I0702 00:02:59.413357 2517 server.go:233] "Starting to serve the podresources 
API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:02:59.413788 kubelet[2517]: I0702 00:02:59.413640 2517 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:02:59.417476 kubelet[2517]: I0702 00:02:59.417401 2517 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:02:59.419189 kubelet[2517]: I0702 00:02:59.417881 2517 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:02:59.419189 kubelet[2517]: I0702 00:02:59.418040 2517 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:02:59.434463 kubelet[2517]: I0702 00:02:59.434159 2517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:02:59.435996 kubelet[2517]: I0702 00:02:59.435930 2517 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:02:59.436105 kubelet[2517]: I0702 00:02:59.436092 2517 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:02:59.443128 kubelet[2517]: I0702 00:02:59.442248 2517 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:02:59.448952 kubelet[2517]: E0702 00:02:59.448899 2517 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:02:59.449682 kubelet[2517]: I0702 00:02:59.449467 2517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 00:02:59.449682 kubelet[2517]: I0702 00:02:59.449500 2517 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:02:59.449682 kubelet[2517]: I0702 00:02:59.449520 2517 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 00:02:59.449682 kubelet[2517]: E0702 00:02:59.449603 2517 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:02:59.482844 kubelet[2517]: I0702 00:02:59.481756 2517 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:02:59.482844 kubelet[2517]: I0702 00:02:59.481819 2517 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:02:59.482844 kubelet[2517]: I0702 00:02:59.481850 2517 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:02:59.482844 kubelet[2517]: I0702 00:02:59.482029 2517 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:02:59.482844 kubelet[2517]: I0702 00:02:59.482049 2517 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:02:59.482844 kubelet[2517]: I0702 00:02:59.482055 2517 policy_none.go:49] "None policy: Start" Jul 2 00:02:59.483163 kubelet[2517]: I0702 00:02:59.483129 2517 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:02:59.483163 kubelet[2517]: I0702 00:02:59.483166 2517 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:02:59.483755 kubelet[2517]: I0702 00:02:59.483732 2517 state_mem.go:75] "Updated machine memory state" Jul 2 00:02:59.489715 kubelet[2517]: I0702 00:02:59.489689 2517 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:02:59.490090 kubelet[2517]: I0702 00:02:59.490024 2517 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:02:59.521792 kubelet[2517]: I0702 00:02:59.521513 2517 kubelet_node_status.go:73] 
"Attempting to register node" node="localhost" Jul 2 00:02:59.535357 kubelet[2517]: I0702 00:02:59.535306 2517 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jul 2 00:02:59.535605 kubelet[2517]: I0702 00:02:59.535431 2517 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 00:02:59.550482 kubelet[2517]: I0702 00:02:59.549893 2517 topology_manager.go:215] "Topology Admit Handler" podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 00:02:59.550482 kubelet[2517]: I0702 00:02:59.549998 2517 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 00:02:59.550482 kubelet[2517]: I0702 00:02:59.550054 2517 topology_manager.go:215] "Topology Admit Handler" podUID="9f235dc1ca0cba4f5c00caee2a01d94e" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 00:02:59.719911 kubelet[2517]: I0702 00:02:59.719871 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:02:59.720123 kubelet[2517]: I0702 00:02:59.720110 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost" Jul 2 00:02:59.720538 kubelet[2517]: I0702 00:02:59.720230 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f235dc1ca0cba4f5c00caee2a01d94e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9f235dc1ca0cba4f5c00caee2a01d94e\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:02:59.720538 kubelet[2517]: I0702 00:02:59.720279 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f235dc1ca0cba4f5c00caee2a01d94e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9f235dc1ca0cba4f5c00caee2a01d94e\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:02:59.720538 kubelet[2517]: I0702 00:02:59.720307 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:02:59.720538 kubelet[2517]: I0702 00:02:59.720330 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:02:59.720538 kubelet[2517]: I0702 00:02:59.720360 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:02:59.720696 kubelet[2517]: I0702 00:02:59.720387 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 00:02:59.720696 kubelet[2517]: I0702 00:02:59.720406 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f235dc1ca0cba4f5c00caee2a01d94e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9f235dc1ca0cba4f5c00caee2a01d94e\") " pod="kube-system/kube-apiserver-localhost" Jul 2 00:02:59.856514 kubelet[2517]: E0702 00:02:59.856402 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:02:59.858060 kubelet[2517]: E0702 00:02:59.858021 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:02:59.858848 kubelet[2517]: E0702 00:02:59.858788 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:00.401074 kubelet[2517]: I0702 00:03:00.401013 2517 apiserver.go:52] "Watching apiserver" Jul 2 00:03:00.418981 kubelet[2517]: I0702 00:03:00.418925 2517 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:03:00.465520 kubelet[2517]: E0702 00:03:00.464167 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:00.468409 kubelet[2517]: E0702 00:03:00.468321 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:00.478762 kubelet[2517]: E0702 00:03:00.478705 2517 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 00:03:00.479238 kubelet[2517]: E0702 00:03:00.479209 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:00.509227 kubelet[2517]: I0702 00:03:00.509189 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.509122514 podStartE2EDuration="1.509122514s" podCreationTimestamp="2024-07-02 00:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:03:00.491883067 +0000 UTC m=+1.151068571" watchObservedRunningTime="2024-07-02 00:03:00.509122514 +0000 UTC m=+1.168308018" Jul 2 00:03:00.517109 kubelet[2517]: I0702 00:03:00.516928 2517 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.516867975 podStartE2EDuration="1.516867975s" podCreationTimestamp="2024-07-02 00:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:03:00.509348715 +0000 UTC m=+1.168534179" watchObservedRunningTime="2024-07-02 00:03:00.516867975 +0000 UTC m=+1.176053479" Jul 2 00:03:00.527728 kubelet[2517]: I0702 00:03:00.527482 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.527442244 podStartE2EDuration="1.527442244s" podCreationTimestamp="2024-07-02 00:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:03:00.517261496 +0000 UTC m=+1.176447000" watchObservedRunningTime="2024-07-02 00:03:00.527442244 +0000 UTC m=+1.186627748" Jul 2 00:03:01.466414 kubelet[2517]: E0702 00:03:01.466374 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:01.512274 kubelet[2517]: E0702 00:03:01.512236 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:03.283842 kubelet[2517]: E0702 00:03:03.283803 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:03.470342 kubelet[2517]: E0702 00:03:03.470315 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:03.712855 sudo[1610]: pam_unix(sudo:session): session closed for user root Jul 2 00:03:03.715382 sshd[1607]: pam_unix(sshd:session): session closed for user core Jul 2 00:03:03.719606 systemd[1]: sshd@6-10.0.0.44:22-10.0.0.1:58286.service: Deactivated successfully. Jul 2 00:03:03.721605 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 00:03:03.723315 systemd[1]: session-7.scope: Consumed 7.031s CPU time, 138.0M memory peak, 0B memory swap peak. Jul 2 00:03:03.723857 systemd-logind[1416]: Session 7 logged out. Waiting for processes to exit. Jul 2 00:03:03.724789 systemd-logind[1416]: Removed session 7. Jul 2 00:03:07.496491 kubelet[2517]: E0702 00:03:07.496446 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:08.477534 kubelet[2517]: E0702 00:03:08.477492 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:11.523697 kubelet[2517]: E0702 00:03:11.523615 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:12.550415 update_engine[1418]: I0702 00:03:12.550356 1418 update_attempter.cc:509] Updating boot flags... 
Jul 2 00:03:12.630188 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2613) Jul 2 00:03:12.662681 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2612) Jul 2 00:03:12.734337 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2617) Jul 2 00:03:12.780250 kubelet[2517]: I0702 00:03:12.780225 2517 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:03:12.780962 containerd[1426]: time="2024-07-02T00:03:12.780931341Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 00:03:12.781885 kubelet[2517]: I0702 00:03:12.781113 2517 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 00:03:13.372747 kubelet[2517]: I0702 00:03:13.372695 2517 topology_manager.go:215] "Topology Admit Handler" podUID="9584e32e-f99c-4c01-9422-981b371df65c" podNamespace="kube-system" podName="kube-proxy-kl5bc" Jul 2 00:03:13.382459 systemd[1]: Created slice kubepods-besteffort-pod9584e32e_f99c_4c01_9422_981b371df65c.slice - libcontainer container kubepods-besteffort-pod9584e32e_f99c_4c01_9422_981b371df65c.slice. Jul 2 00:03:13.506277 kubelet[2517]: I0702 00:03:13.506219 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9584e32e-f99c-4c01-9422-981b371df65c-kube-proxy\") pod \"kube-proxy-kl5bc\" (UID: \"9584e32e-f99c-4c01-9422-981b371df65c\") " pod="kube-system/kube-proxy-kl5bc" Jul 2 00:03:13.506277 kubelet[2517]: I0702 00:03:13.506270 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9584e32e-f99c-4c01-9422-981b371df65c-xtables-lock\") pod \"kube-proxy-kl5bc\" (UID: \"9584e32e-f99c-4c01-9422-981b371df65c\") " pod="kube-system/kube-proxy-kl5bc" Jul 2 00:03:13.506277 kubelet[2517]: I0702 00:03:13.506292 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zntnr\" (UniqueName: \"kubernetes.io/projected/9584e32e-f99c-4c01-9422-981b371df65c-kube-api-access-zntnr\") pod \"kube-proxy-kl5bc\" (UID: \"9584e32e-f99c-4c01-9422-981b371df65c\") " pod="kube-system/kube-proxy-kl5bc" Jul 2 00:03:13.506466 kubelet[2517]: I0702 00:03:13.506313 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9584e32e-f99c-4c01-9422-981b371df65c-lib-modules\") pod \"kube-proxy-kl5bc\" (UID: \"9584e32e-f99c-4c01-9422-981b371df65c\") " pod="kube-system/kube-proxy-kl5bc" Jul 2 00:03:13.690752 kubelet[2517]: E0702 00:03:13.690359 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:13.691122 containerd[1426]: time="2024-07-02T00:03:13.691063225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kl5bc,Uid:9584e32e-f99c-4c01-9422-981b371df65c,Namespace:kube-system,Attempt:0,}" Jul 2 00:03:13.720443 containerd[1426]: time="2024-07-02T00:03:13.720346380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:03:13.720443 containerd[1426]: time="2024-07-02T00:03:13.720415660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:13.720443 containerd[1426]: time="2024-07-02T00:03:13.720431860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:03:13.720443 containerd[1426]: time="2024-07-02T00:03:13.720442180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:13.741382 systemd[1]: Started cri-containerd-a1bed0543f7dc84904180a5eeebe1c8fa048df149ce2164f907795e36412c875.scope - libcontainer container a1bed0543f7dc84904180a5eeebe1c8fa048df149ce2164f907795e36412c875. Jul 2 00:03:13.775437 containerd[1426]: time="2024-07-02T00:03:13.774969764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kl5bc,Uid:9584e32e-f99c-4c01-9422-981b371df65c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1bed0543f7dc84904180a5eeebe1c8fa048df149ce2164f907795e36412c875\"" Jul 2 00:03:13.776703 kubelet[2517]: E0702 00:03:13.776656 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:13.784316 containerd[1426]: time="2024-07-02T00:03:13.784258374Z" level=info msg="CreateContainer within sandbox \"a1bed0543f7dc84904180a5eeebe1c8fa048df149ce2164f907795e36412c875\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:03:13.799695 kubelet[2517]: I0702 00:03:13.799423 2517 topology_manager.go:215] "Topology Admit Handler" podUID="67c2031b-1901-40da-8375-75c92d368440" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-8dv28" Jul 2 00:03:13.810475 containerd[1426]: time="2024-07-02T00:03:13.810305965Z" level=info msg="CreateContainer within sandbox \"a1bed0543f7dc84904180a5eeebe1c8fa048df149ce2164f907795e36412c875\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e58d1ed07c1f780a26d1b912529988271f190157f41b41fb5c7b61402224214f\"" Jul 2 00:03:13.811074 containerd[1426]: time="2024-07-02T00:03:13.811039086Z" level=info msg="StartContainer for \"e58d1ed07c1f780a26d1b912529988271f190157f41b41fb5c7b61402224214f\"" Jul 2 00:03:13.812788 kubelet[2517]: I0702 00:03:13.812747 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/67c2031b-1901-40da-8375-75c92d368440-var-lib-calico\") pod \"tigera-operator-76c4974c85-8dv28\" (UID: \"67c2031b-1901-40da-8375-75c92d368440\") " pod="tigera-operator/tigera-operator-76c4974c85-8dv28" Jul 2 00:03:13.812788 kubelet[2517]: I0702 00:03:13.812788 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9km2d\" (UniqueName: \"kubernetes.io/projected/67c2031b-1901-40da-8375-75c92d368440-kube-api-access-9km2d\") pod \"tigera-operator-76c4974c85-8dv28\" (UID: \"67c2031b-1901-40da-8375-75c92d368440\") " pod="tigera-operator/tigera-operator-76c4974c85-8dv28" Jul 2 00:03:13.813426 systemd[1]: Created slice kubepods-besteffort-pod67c2031b_1901_40da_8375_75c92d368440.slice - libcontainer container kubepods-besteffort-pod67c2031b_1901_40da_8375_75c92d368440.slice. 
Jul 2 00:03:13.846355 systemd[1]: Started cri-containerd-e58d1ed07c1f780a26d1b912529988271f190157f41b41fb5c7b61402224214f.scope - libcontainer container e58d1ed07c1f780a26d1b912529988271f190157f41b41fb5c7b61402224214f. Jul 2 00:03:13.880556 containerd[1426]: time="2024-07-02T00:03:13.880498167Z" level=info msg="StartContainer for \"e58d1ed07c1f780a26d1b912529988271f190157f41b41fb5c7b61402224214f\" returns successfully" Jul 2 00:03:14.117552 containerd[1426]: time="2024-07-02T00:03:14.117501756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-8dv28,Uid:67c2031b-1901-40da-8375-75c92d368440,Namespace:tigera-operator,Attempt:0,}" Jul 2 00:03:14.138209 containerd[1426]: time="2024-07-02T00:03:14.138029699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:03:14.138209 containerd[1426]: time="2024-07-02T00:03:14.138093299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:14.138209 containerd[1426]: time="2024-07-02T00:03:14.138108379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:03:14.138209 containerd[1426]: time="2024-07-02T00:03:14.138131619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:14.158974 systemd[1]: Started cri-containerd-7ebb4ad8329f7a4dc882093b925f819063e22efbcbb54a06c1eb34c81389c2f6.scope - libcontainer container 7ebb4ad8329f7a4dc882093b925f819063e22efbcbb54a06c1eb34c81389c2f6. Jul 2 00:03:14.192372 containerd[1426]: time="2024-07-02T00:03:14.192316039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-8dv28,Uid:67c2031b-1901-40da-8375-75c92d368440,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7ebb4ad8329f7a4dc882093b925f819063e22efbcbb54a06c1eb34c81389c2f6\"" Jul 2 00:03:14.194265 containerd[1426]: time="2024-07-02T00:03:14.194124521Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jul 2 00:03:14.490067 kubelet[2517]: E0702 00:03:14.489946 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:14.533213 kubelet[2517]: I0702 00:03:14.532898 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kl5bc" podStartSLOduration=1.532855573 podStartE2EDuration="1.532855573s" podCreationTimestamp="2024-07-02 00:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:03:14.532684053 +0000 UTC m=+15.191869557" watchObservedRunningTime="2024-07-02 00:03:14.532855573 +0000 UTC m=+15.192041037" Jul 2 00:03:15.358237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1994720288.mount: Deactivated successfully. 
Jul 2 00:03:16.437904 containerd[1426]: time="2024-07-02T00:03:16.437846739Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:16.441732 containerd[1426]: time="2024-07-02T00:03:16.441687463Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473626" Jul 2 00:03:16.442701 containerd[1426]: time="2024-07-02T00:03:16.442628104Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:16.446119 containerd[1426]: time="2024-07-02T00:03:16.446055267Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:16.446774 containerd[1426]: time="2024-07-02T00:03:16.446628468Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 2.252449307s" Jul 2 00:03:16.446774 containerd[1426]: time="2024-07-02T00:03:16.446672148Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jul 2 00:03:16.451646 containerd[1426]: time="2024-07-02T00:03:16.451019352Z" level=info msg="CreateContainer within sandbox \"7ebb4ad8329f7a4dc882093b925f819063e22efbcbb54a06c1eb34c81389c2f6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 2 00:03:16.473106 containerd[1426]: time="2024-07-02T00:03:16.472988893Z" level=info msg="CreateContainer within sandbox \"7ebb4ad8329f7a4dc882093b925f819063e22efbcbb54a06c1eb34c81389c2f6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b87113ed089a0d6ee0c8b6b6841d3333be08723d4dce539a1b31c3d5b22ec502\"" Jul 2 00:03:16.479969 containerd[1426]: time="2024-07-02T00:03:16.478701739Z" level=info msg="StartContainer for \"b87113ed089a0d6ee0c8b6b6841d3333be08723d4dce539a1b31c3d5b22ec502\"" Jul 2 00:03:16.520378 systemd[1]: Started cri-containerd-b87113ed089a0d6ee0c8b6b6841d3333be08723d4dce539a1b31c3d5b22ec502.scope - libcontainer container b87113ed089a0d6ee0c8b6b6841d3333be08723d4dce539a1b31c3d5b22ec502. 
Jul 2 00:03:16.582501 containerd[1426]: time="2024-07-02T00:03:16.582448239Z" level=info msg="StartContainer for \"b87113ed089a0d6ee0c8b6b6841d3333be08723d4dce539a1b31c3d5b22ec502\" returns successfully" Jul 2 00:03:17.512513 kubelet[2517]: I0702 00:03:17.511868 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-8dv28" podStartSLOduration=2.258410997 podStartE2EDuration="4.511823665s" podCreationTimestamp="2024-07-02 00:03:13 +0000 UTC" firstStartedPulling="2024-07-02 00:03:14.19356108 +0000 UTC m=+14.852746584" lastFinishedPulling="2024-07-02 00:03:16.446973748 +0000 UTC m=+17.106159252" observedRunningTime="2024-07-02 00:03:17.511787905 +0000 UTC m=+18.170973489" watchObservedRunningTime="2024-07-02 00:03:17.511823665 +0000 UTC m=+18.171009169" Jul 2 00:03:19.928225 kubelet[2517]: I0702 00:03:19.928181 2517 topology_manager.go:215] "Topology Admit Handler" podUID="993bdee5-6b33-4e35-98b1-9a9c9299205f" podNamespace="calico-system" podName="calico-typha-54f8948f9-pwdqd" Jul 2 00:03:19.936223 systemd[1]: Created slice kubepods-besteffort-pod993bdee5_6b33_4e35_98b1_9a9c9299205f.slice - libcontainer container kubepods-besteffort-pod993bdee5_6b33_4e35_98b1_9a9c9299205f.slice. Jul 2 00:03:19.953378 kubelet[2517]: I0702 00:03:19.953340 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/993bdee5-6b33-4e35-98b1-9a9c9299205f-typha-certs\") pod \"calico-typha-54f8948f9-pwdqd\" (UID: \"993bdee5-6b33-4e35-98b1-9a9c9299205f\") " pod="calico-system/calico-typha-54f8948f9-pwdqd" Jul 2 00:03:19.953378 kubelet[2517]: I0702 00:03:19.953386 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27qsw\" (UniqueName: \"kubernetes.io/projected/993bdee5-6b33-4e35-98b1-9a9c9299205f-kube-api-access-27qsw\") pod \"calico-typha-54f8948f9-pwdqd\" (UID: \"993bdee5-6b33-4e35-98b1-9a9c9299205f\") " pod="calico-system/calico-typha-54f8948f9-pwdqd" Jul 2 00:03:19.953554 kubelet[2517]: I0702 00:03:19.953413 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/993bdee5-6b33-4e35-98b1-9a9c9299205f-tigera-ca-bundle\") pod \"calico-typha-54f8948f9-pwdqd\" (UID: \"993bdee5-6b33-4e35-98b1-9a9c9299205f\") " pod="calico-system/calico-typha-54f8948f9-pwdqd" Jul 2 00:03:19.980600 kubelet[2517]: I0702 00:03:19.980553 2517 topology_manager.go:215] "Topology Admit Handler" podUID="f5f2e1f0-3421-43f1-a4bb-277b77b6786e" podNamespace="calico-system" podName="calico-node-r5cjm" Jul 2 00:03:19.994228 systemd[1]: Created slice kubepods-besteffort-podf5f2e1f0_3421_43f1_a4bb_277b77b6786e.slice - libcontainer container kubepods-besteffort-podf5f2e1f0_3421_43f1_a4bb_277b77b6786e.slice. 
Jul 2 00:03:20.054641 kubelet[2517]: I0702 00:03:20.054569 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-cni-net-dir\") pod \"calico-node-r5cjm\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " pod="calico-system/calico-node-r5cjm" Jul 2 00:03:20.054641 kubelet[2517]: I0702 00:03:20.054621 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-tigera-ca-bundle\") pod \"calico-node-r5cjm\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " pod="calico-system/calico-node-r5cjm" Jul 2 00:03:20.054833 kubelet[2517]: I0702 00:03:20.054732 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-cni-log-dir\") pod \"calico-node-r5cjm\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " pod="calico-system/calico-node-r5cjm" Jul 2 00:03:20.054833 kubelet[2517]: I0702 00:03:20.054772 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-lib-modules\") pod \"calico-node-r5cjm\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " pod="calico-system/calico-node-r5cjm" Jul 2 00:03:20.054833 kubelet[2517]: I0702 00:03:20.054827 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-policysync\") pod \"calico-node-r5cjm\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " pod="calico-system/calico-node-r5cjm" Jul 2 00:03:20.054922 kubelet[2517]: I0702 00:03:20.054867 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-var-lib-calico\") pod \"calico-node-r5cjm\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " pod="calico-system/calico-node-r5cjm" Jul 2 00:03:20.054946 kubelet[2517]: I0702 00:03:20.054930 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-cni-bin-dir\") pod \"calico-node-r5cjm\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " pod="calico-system/calico-node-r5cjm" Jul 2 00:03:20.055463 kubelet[2517]: I0702 00:03:20.055095 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cftwp\" (UniqueName: \"kubernetes.io/projected/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-kube-api-access-cftwp\") pod \"calico-node-r5cjm\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " pod="calico-system/calico-node-r5cjm" Jul 2 00:03:20.055807 kubelet[2517]: I0702 00:03:20.055620 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-var-run-calico\") pod \"calico-node-r5cjm\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " pod="calico-system/calico-node-r5cjm" Jul 2 00:03:20.055807 kubelet[2517]: I0702 00:03:20.055665 2517 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-xtables-lock\") pod \"calico-node-r5cjm\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " pod="calico-system/calico-node-r5cjm" Jul 2 00:03:20.055807 kubelet[2517]: I0702 00:03:20.055687 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-node-certs\") pod \"calico-node-r5cjm\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " pod="calico-system/calico-node-r5cjm" Jul 2 00:03:20.055807 kubelet[2517]: I0702 00:03:20.055710 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-flexvol-driver-host\") pod \"calico-node-r5cjm\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " pod="calico-system/calico-node-r5cjm" Jul 2 00:03:20.097239 kubelet[2517]: I0702 00:03:20.097187 2517 topology_manager.go:215] "Topology Admit Handler" podUID="deefda5b-5363-476d-b5c8-1f67ee1aea37" podNamespace="calico-system" podName="csi-node-driver-q228g" Jul 2 00:03:20.098166 kubelet[2517]: E0702 00:03:20.097695 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q228g" podUID="deefda5b-5363-476d-b5c8-1f67ee1aea37" Jul 2 00:03:20.156366 kubelet[2517]: I0702 00:03:20.156317 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/deefda5b-5363-476d-b5c8-1f67ee1aea37-socket-dir\") pod \"csi-node-driver-q228g\" (UID: \"deefda5b-5363-476d-b5c8-1f67ee1aea37\") " pod="calico-system/csi-node-driver-q228g" Jul 2 00:03:20.156366 kubelet[2517]: I0702 00:03:20.156371 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/deefda5b-5363-476d-b5c8-1f67ee1aea37-registration-dir\") pod \"csi-node-driver-q228g\" (UID: \"deefda5b-5363-476d-b5c8-1f67ee1aea37\") " pod="calico-system/csi-node-driver-q228g" Jul 2 00:03:20.156546 kubelet[2517]: I0702 00:03:20.156449 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-676k6\" (UniqueName: \"kubernetes.io/projected/deefda5b-5363-476d-b5c8-1f67ee1aea37-kube-api-access-676k6\") pod \"csi-node-driver-q228g\" (UID: \"deefda5b-5363-476d-b5c8-1f67ee1aea37\") " pod="calico-system/csi-node-driver-q228g" Jul 2 00:03:20.156546 kubelet[2517]: I0702 00:03:20.156502 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/deefda5b-5363-476d-b5c8-1f67ee1aea37-kubelet-dir\") pod \"csi-node-driver-q228g\" (UID: \"deefda5b-5363-476d-b5c8-1f67ee1aea37\") " pod="calico-system/csi-node-driver-q228g" Jul 2 00:03:20.156611 kubelet[2517]: I0702 00:03:20.156551 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/deefda5b-5363-476d-b5c8-1f67ee1aea37-varrun\") pod \"csi-node-driver-q228g\" (UID: 
\"deefda5b-5363-476d-b5c8-1f67ee1aea37\") " pod="calico-system/csi-node-driver-q228g" Jul 2 00:03:20.157542 kubelet[2517]: E0702 00:03:20.157510 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.157542 kubelet[2517]: W0702 00:03:20.157539 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.157734 kubelet[2517]: E0702 00:03:20.157563 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.157928 kubelet[2517]: E0702 00:03:20.157912 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.157966 kubelet[2517]: W0702 00:03:20.157930 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.157966 kubelet[2517]: E0702 00:03:20.157948 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.158408 kubelet[2517]: E0702 00:03:20.158365 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.158408 kubelet[2517]: W0702 00:03:20.158395 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.158510 kubelet[2517]: E0702 00:03:20.158419 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.159385 kubelet[2517]: E0702 00:03:20.159363 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.159385 kubelet[2517]: W0702 00:03:20.159384 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.159485 kubelet[2517]: E0702 00:03:20.159441 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.159674 kubelet[2517]: E0702 00:03:20.159656 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.159674 kubelet[2517]: W0702 00:03:20.159671 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.159743 kubelet[2517]: E0702 00:03:20.159701 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:20.159974 kubelet[2517]: E0702 00:03:20.159954 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.160012 kubelet[2517]: W0702 00:03:20.159974 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.160130 kubelet[2517]: E0702 00:03:20.160009 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.162291 kubelet[2517]: E0702 00:03:20.162251 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.162291 kubelet[2517]: W0702 00:03:20.162277 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.162437 kubelet[2517]: E0702 00:03:20.162331 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.162590 kubelet[2517]: E0702 00:03:20.162557 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.162590 kubelet[2517]: W0702 00:03:20.162573 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.162663 kubelet[2517]: E0702 00:03:20.162607 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.162847 kubelet[2517]: E0702 00:03:20.162817 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.162847 kubelet[2517]: W0702 00:03:20.162834 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.162934 kubelet[2517]: E0702 00:03:20.162866 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.163062 kubelet[2517]: E0702 00:03:20.163025 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.163062 kubelet[2517]: W0702 00:03:20.163039 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.163135 kubelet[2517]: E0702 00:03:20.163069 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:20.163310 kubelet[2517]: E0702 00:03:20.163291 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.163310 kubelet[2517]: W0702 00:03:20.163305 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.163394 kubelet[2517]: E0702 00:03:20.163343 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.163767 kubelet[2517]: E0702 00:03:20.163745 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.163767 kubelet[2517]: W0702 00:03:20.163765 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.163846 kubelet[2517]: E0702 00:03:20.163813 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.164016 kubelet[2517]: E0702 00:03:20.163999 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.164016 kubelet[2517]: W0702 00:03:20.164012 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.164098 kubelet[2517]: E0702 00:03:20.164029 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.164334 kubelet[2517]: E0702 00:03:20.164249 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.164334 kubelet[2517]: W0702 00:03:20.164267 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.164334 kubelet[2517]: E0702 00:03:20.164287 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.165595 kubelet[2517]: E0702 00:03:20.165565 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.165595 kubelet[2517]: W0702 00:03:20.165587 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.165726 kubelet[2517]: E0702 00:03:20.165607 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:20.167640 kubelet[2517]: E0702 00:03:20.167594 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.167640 kubelet[2517]: W0702 00:03:20.167632 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.167770 kubelet[2517]: E0702 00:03:20.167761 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.167923 kubelet[2517]: E0702 00:03:20.167898 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.167923 kubelet[2517]: W0702 00:03:20.167912 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.168095 kubelet[2517]: E0702 00:03:20.168074 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.168095 kubelet[2517]: W0702 00:03:20.168086 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.168245 kubelet[2517]: E0702 00:03:20.168234 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.168245 kubelet[2517]: W0702 00:03:20.168244 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.168442 kubelet[2517]: E0702 00:03:20.168428 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.168442 kubelet[2517]: W0702 00:03:20.168437 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.170315 kubelet[2517]: E0702 00:03:20.170284 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.170315 kubelet[2517]: W0702 00:03:20.170307 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.170389 kubelet[2517]: E0702 00:03:20.170328 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.170389 kubelet[2517]: E0702 00:03:20.170366 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:20.170581 kubelet[2517]: E0702 00:03:20.170567 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.170581 kubelet[2517]: W0702 00:03:20.170578 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.170654 kubelet[2517]: E0702 00:03:20.170589 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.170840 kubelet[2517]: E0702 00:03:20.170808 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.170840 kubelet[2517]: W0702 00:03:20.170826 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.170840 kubelet[2517]: E0702 00:03:20.170839 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.173969 kubelet[2517]: E0702 00:03:20.172111 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.173969 kubelet[2517]: W0702 00:03:20.172131 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.173969 kubelet[2517]: E0702 00:03:20.172169 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.173969 kubelet[2517]: E0702 00:03:20.172199 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.173969 kubelet[2517]: E0702 00:03:20.172383 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.173969 kubelet[2517]: W0702 00:03:20.172391 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.173969 kubelet[2517]: E0702 00:03:20.172403 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.173969 kubelet[2517]: E0702 00:03:20.172424 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.173969 kubelet[2517]: E0702 00:03:20.172433 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:20.175367 kubelet[2517]: E0702 00:03:20.174325 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.175367 kubelet[2517]: W0702 00:03:20.174375 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.175367 kubelet[2517]: E0702 00:03:20.174399 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.175367 kubelet[2517]: E0702 00:03:20.174757 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.175367 kubelet[2517]: W0702 00:03:20.174767 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.175367 kubelet[2517]: E0702 00:03:20.174788 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.175367 kubelet[2517]: E0702 00:03:20.175089 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.175367 kubelet[2517]: W0702 00:03:20.175099 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.175367 kubelet[2517]: E0702 00:03:20.175121 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.175367 kubelet[2517]: E0702 00:03:20.175350 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.176917 kubelet[2517]: W0702 00:03:20.175359 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.176917 kubelet[2517]: E0702 00:03:20.175371 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.176917 kubelet[2517]: E0702 00:03:20.175666 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.176917 kubelet[2517]: W0702 00:03:20.175677 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.176917 kubelet[2517]: E0702 00:03:20.175690 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:20.176917 kubelet[2517]: E0702 00:03:20.175950 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.176917 kubelet[2517]: W0702 00:03:20.175959 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.176917 kubelet[2517]: E0702 00:03:20.175971 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.177339 kubelet[2517]: E0702 00:03:20.177317 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.177339 kubelet[2517]: W0702 00:03:20.177339 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.177442 kubelet[2517]: E0702 00:03:20.177355 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.247397 kubelet[2517]: E0702 00:03:20.247360 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:20.248955 containerd[1426]: time="2024-07-02T00:03:20.248916938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54f8948f9-pwdqd,Uid:993bdee5-6b33-4e35-98b1-9a9c9299205f,Namespace:calico-system,Attempt:0,}" Jul 2 00:03:20.257767 kubelet[2517]: E0702 00:03:20.257732 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.257767 kubelet[2517]: W0702 00:03:20.257753 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.257767 kubelet[2517]: E0702 00:03:20.257775 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.258067 kubelet[2517]: E0702 00:03:20.258042 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.258067 kubelet[2517]: W0702 00:03:20.258056 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.258188 kubelet[2517]: E0702 00:03:20.258079 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:20.258358 kubelet[2517]: E0702 00:03:20.258335 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.258358 kubelet[2517]: W0702 00:03:20.258347 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.258412 kubelet[2517]: E0702 00:03:20.258369 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.258576 kubelet[2517]: E0702 00:03:20.258556 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.258576 kubelet[2517]: W0702 00:03:20.258569 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.258637 kubelet[2517]: E0702 00:03:20.258584 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.258807 kubelet[2517]: E0702 00:03:20.258784 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.258807 kubelet[2517]: W0702 00:03:20.258806 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.258861 kubelet[2517]: E0702 00:03:20.258825 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.258998 kubelet[2517]: E0702 00:03:20.258987 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.259022 kubelet[2517]: W0702 00:03:20.258998 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.259022 kubelet[2517]: E0702 00:03:20.259013 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.259209 kubelet[2517]: E0702 00:03:20.259196 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.259209 kubelet[2517]: W0702 00:03:20.259207 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.259290 kubelet[2517]: E0702 00:03:20.259233 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:20.259461 kubelet[2517]: E0702 00:03:20.259448 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.259461 kubelet[2517]: W0702 00:03:20.259460 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.259511 kubelet[2517]: E0702 00:03:20.259475 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.259623 kubelet[2517]: E0702 00:03:20.259614 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.259645 kubelet[2517]: W0702 00:03:20.259623 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.259697 kubelet[2517]: E0702 00:03:20.259674 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.259863 kubelet[2517]: E0702 00:03:20.259848 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.259863 kubelet[2517]: W0702 00:03:20.259860 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.259923 kubelet[2517]: E0702 00:03:20.259909 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.260048 kubelet[2517]: E0702 00:03:20.260035 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.260048 kubelet[2517]: W0702 00:03:20.260046 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.260101 kubelet[2517]: E0702 00:03:20.260092 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.260219 kubelet[2517]: E0702 00:03:20.260206 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.260219 kubelet[2517]: W0702 00:03:20.260219 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.260275 kubelet[2517]: E0702 00:03:20.260261 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:20.260388 kubelet[2517]: E0702 00:03:20.260375 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.260416 kubelet[2517]: W0702 00:03:20.260387 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.260416 kubelet[2517]: E0702 00:03:20.260407 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.260634 kubelet[2517]: E0702 00:03:20.260623 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.260657 kubelet[2517]: W0702 00:03:20.260634 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.260657 kubelet[2517]: E0702 00:03:20.260651 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.260845 kubelet[2517]: E0702 00:03:20.260834 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.260845 kubelet[2517]: W0702 00:03:20.260844 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.260905 kubelet[2517]: E0702 00:03:20.260859 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.261019 kubelet[2517]: E0702 00:03:20.261009 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.261050 kubelet[2517]: W0702 00:03:20.261020 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.261050 kubelet[2517]: E0702 00:03:20.261035 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.261204 kubelet[2517]: E0702 00:03:20.261195 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.261235 kubelet[2517]: W0702 00:03:20.261204 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.261235 kubelet[2517]: E0702 00:03:20.261218 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:20.261470 kubelet[2517]: E0702 00:03:20.261454 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.261470 kubelet[2517]: W0702 00:03:20.261469 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.261527 kubelet[2517]: E0702 00:03:20.261487 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.261703 kubelet[2517]: E0702 00:03:20.261691 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.261729 kubelet[2517]: W0702 00:03:20.261702 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.262610 kubelet[2517]: E0702 00:03:20.262579 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.262878 kubelet[2517]: E0702 00:03:20.262863 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.262918 kubelet[2517]: W0702 00:03:20.262878 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.263358 kubelet[2517]: E0702 00:03:20.263337 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.263639 kubelet[2517]: E0702 00:03:20.263622 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.263639 kubelet[2517]: W0702 00:03:20.263637 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.263699 kubelet[2517]: E0702 00:03:20.263657 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.263886 kubelet[2517]: E0702 00:03:20.263868 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.263886 kubelet[2517]: W0702 00:03:20.263884 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.263949 kubelet[2517]: E0702 00:03:20.263915 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:20.264126 kubelet[2517]: E0702 00:03:20.264112 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.264158 kubelet[2517]: W0702 00:03:20.264125 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.264158 kubelet[2517]: E0702 00:03:20.264152 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.264808 kubelet[2517]: E0702 00:03:20.264769 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.264808 kubelet[2517]: W0702 00:03:20.264789 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.264868 kubelet[2517]: E0702 00:03:20.264820 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.266312 kubelet[2517]: E0702 00:03:20.266271 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.266312 kubelet[2517]: W0702 00:03:20.266293 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.266312 kubelet[2517]: E0702 00:03:20.266310 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.282741 kubelet[2517]: E0702 00:03:20.282517 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:20.282741 kubelet[2517]: W0702 00:03:20.282540 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:20.282741 kubelet[2517]: E0702 00:03:20.282567 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:20.295764 containerd[1426]: time="2024-07-02T00:03:20.295643132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:03:20.295764 containerd[1426]: time="2024-07-02T00:03:20.295736253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:20.296642 containerd[1426]: time="2024-07-02T00:03:20.296547053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:03:20.296642 containerd[1426]: time="2024-07-02T00:03:20.296570173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:20.298025 kubelet[2517]: E0702 00:03:20.297820 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:20.298347 containerd[1426]: time="2024-07-02T00:03:20.298314174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r5cjm,Uid:f5f2e1f0-3421-43f1-a4bb-277b77b6786e,Namespace:calico-system,Attempt:0,}" Jul 2 00:03:20.321368 systemd[1]: Started cri-containerd-b69241a2590aad54d11032704f36360852adc01ceb8bee21244d6bb2849e816d.scope - libcontainer container b69241a2590aad54d11032704f36360852adc01ceb8bee21244d6bb2849e816d. Jul 2 00:03:20.324971 containerd[1426]: time="2024-07-02T00:03:20.324240434Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:03:20.324971 containerd[1426]: time="2024-07-02T00:03:20.324935434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:20.325174 containerd[1426]: time="2024-07-02T00:03:20.324976194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:03:20.325174 containerd[1426]: time="2024-07-02T00:03:20.325006914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:20.350430 systemd[1]: Started cri-containerd-e952993d41804daa5dff4b407679b7cafe67e1943bd507eb1d89bfcf0506f244.scope - libcontainer container e952993d41804daa5dff4b407679b7cafe67e1943bd507eb1d89bfcf0506f244. 
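The repeated driver-call failures above are the kubelet's FlexVolume prober at work: it finds the nodeagent~uds plugin directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, tries to execute the uds binary with the single argument init, and expects a small JSON status document on stdout. The binary is not installed yet (Calico's pod2daemon-flexvol init container, whose image pull appears further down, is what normally drops it into the flexvol-driver-host host path), so the exec fails, stdout stays empty, and unmarshalling "" produces the "unexpected end of JSON input" errors. A minimal Go sketch of that call pattern follows; it illustrates the FlexVolume convention, it is not the kubelet's actual code, and the driverStatus field set is an assumption:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus approximates the JSON a FlexVolume driver prints for "init",
    // e.g. {"status":"Success","capabilities":{"attach":false}} (field set assumed).
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func probeFlexVolumeDriver(executable string) (*driverStatus, error) {
        // The prober shells out to the driver binary with "init".
        out, err := exec.Command(executable, "init").CombinedOutput()
        if err != nil {
            // With the binary missing this corresponds to the "driver call failed" warning.
            fmt.Printf("driver call failed: %v, output: %q\n", err, string(out))
        }
        var status driverStatus
        // Empty output makes json.Unmarshal fail with "unexpected end of JSON input",
        // which is the error repeated throughout the entries above.
        if uerr := json.Unmarshal(out, &status); uerr != nil {
            return nil, uerr
        }
        return &status, nil
    }

    func main() {
        _, err := probeFlexVolumeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
        fmt.Println("probe error:", err)
    }

Once the flexvol-driver init container has copied the uds binary into place, the same probe should start returning valid JSON and these warnings stop recurring.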
Jul 2 00:03:20.362064 containerd[1426]: time="2024-07-02T00:03:20.362023422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54f8948f9-pwdqd,Uid:993bdee5-6b33-4e35-98b1-9a9c9299205f,Namespace:calico-system,Attempt:0,} returns sandbox id \"b69241a2590aad54d11032704f36360852adc01ceb8bee21244d6bb2849e816d\"" Jul 2 00:03:20.368801 kubelet[2517]: E0702 00:03:20.368721 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:20.375012 containerd[1426]: time="2024-07-02T00:03:20.374845032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 00:03:20.379948 containerd[1426]: time="2024-07-02T00:03:20.379909555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r5cjm,Uid:f5f2e1f0-3421-43f1-a4bb-277b77b6786e,Namespace:calico-system,Attempt:0,} returns sandbox id \"e952993d41804daa5dff4b407679b7cafe67e1943bd507eb1d89bfcf0506f244\"" Jul 2 00:03:20.380725 kubelet[2517]: E0702 00:03:20.380692 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:21.457069 kubelet[2517]: E0702 00:03:21.450529 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q228g" podUID="deefda5b-5363-476d-b5c8-1f67ee1aea37" Jul 2 00:03:23.216413 containerd[1426]: time="2024-07-02T00:03:23.216331546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:23.216889 containerd[1426]: time="2024-07-02T00:03:23.216689306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jul 2 00:03:23.217746 containerd[1426]: time="2024-07-02T00:03:23.217689347Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:23.220269 containerd[1426]: time="2024-07-02T00:03:23.220208268Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:23.221100 containerd[1426]: time="2024-07-02T00:03:23.221060869Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 2.846048077s" Jul 2 00:03:23.221173 containerd[1426]: time="2024-07-02T00:03:23.221099549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jul 2 00:03:23.222223 containerd[1426]: time="2024-07-02T00:03:23.222192030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 00:03:23.232169 containerd[1426]: time="2024-07-02T00:03:23.230609155Z" level=info msg="CreateContainer 
within sandbox \"b69241a2590aad54d11032704f36360852adc01ceb8bee21244d6bb2849e816d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 00:03:23.247367 containerd[1426]: time="2024-07-02T00:03:23.247305965Z" level=info msg="CreateContainer within sandbox \"b69241a2590aad54d11032704f36360852adc01ceb8bee21244d6bb2849e816d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a\"" Jul 2 00:03:23.248347 containerd[1426]: time="2024-07-02T00:03:23.248311486Z" level=info msg="StartContainer for \"04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a\"" Jul 2 00:03:23.286666 systemd[1]: Started cri-containerd-04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a.scope - libcontainer container 04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a. Jul 2 00:03:23.398116 containerd[1426]: time="2024-07-02T00:03:23.398055458Z" level=info msg="StartContainer for \"04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a\" returns successfully" Jul 2 00:03:23.450827 kubelet[2517]: E0702 00:03:23.450739 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q228g" podUID="deefda5b-5363-476d-b5c8-1f67ee1aea37" Jul 2 00:03:23.519271 containerd[1426]: time="2024-07-02T00:03:23.519225492Z" level=info msg="StopContainer for \"04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a\" with timeout 300 (s)" Jul 2 00:03:23.520356 containerd[1426]: time="2024-07-02T00:03:23.520306813Z" level=info msg="Stop container \"04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a\" with signal terminated" Jul 2 00:03:23.531117 systemd[1]: cri-containerd-04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a.scope: Deactivated successfully. 
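The sequence above is the kubelet driving containerd over CRI for the calico-typha pod: RunPodSandbox, PullImage, CreateContainer, StartContainer, and then almost immediately StopContainer with a 300 s timeout, because this typha pod is being replaced (a new pod, calico-typha-857df79b47-5gsqj, is admitted just below and the old pod's volumes are unmounted). A short Go sketch of the stop call as a standalone CRI client follows; the socket path is an assumption for this host, and the kubelet of course issues this call internally rather than through a separate program:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // containerd's CRI plugin listens on its local unix socket (path assumed).
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        // Leave room for the full 300 s grace period before giving up on the RPC.
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()

        // Mirrors the "StopContainer ... with timeout 300 (s)" entry: the runtime
        // delivers SIGTERM and falls back to SIGKILL if the container is still
        // running once the timeout expires.
        _, err = rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
            ContainerId: "04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a",
            Timeout:     300,
        })
        fmt.Println("StopContainer:", err)
    }

In the log the typha container exits promptly on SIGTERM, so the shim tear-down and the systemd "Deactivated successfully" message follow within milliseconds rather than after the grace period.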
Jul 2 00:03:23.534592 kubelet[2517]: I0702 00:03:23.534549 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-54f8948f9-pwdqd" podStartSLOduration=1.685233623 podStartE2EDuration="4.534502942s" podCreationTimestamp="2024-07-02 00:03:19 +0000 UTC" firstStartedPulling="2024-07-02 00:03:20.37227743 +0000 UTC m=+21.031462934" lastFinishedPulling="2024-07-02 00:03:23.221546709 +0000 UTC m=+23.880732253" observedRunningTime="2024-07-02 00:03:23.534138861 +0000 UTC m=+24.193324365" watchObservedRunningTime="2024-07-02 00:03:23.534502942 +0000 UTC m=+24.193688446"
Jul 2 00:03:23.564442 containerd[1426]: time="2024-07-02T00:03:23.564379840Z" level=info msg="shim disconnected" id=04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a namespace=k8s.io
Jul 2 00:03:23.564442 containerd[1426]: time="2024-07-02T00:03:23.564439320Z" level=warning msg="cleaning up after shim disconnected" id=04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a namespace=k8s.io
Jul 2 00:03:23.564442 containerd[1426]: time="2024-07-02T00:03:23.564448200Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:03:23.580488 containerd[1426]: time="2024-07-02T00:03:23.580420290Z" level=info msg="StopContainer for \"04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a\" returns successfully"
Jul 2 00:03:23.581203 containerd[1426]: time="2024-07-02T00:03:23.581169090Z" level=info msg="StopPodSandbox for \"b69241a2590aad54d11032704f36360852adc01ceb8bee21244d6bb2849e816d\""
Jul 2 00:03:23.581280 containerd[1426]: time="2024-07-02T00:03:23.581218450Z" level=info msg="Container to stop \"04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:03:23.587823 systemd[1]: cri-containerd-b69241a2590aad54d11032704f36360852adc01ceb8bee21244d6bb2849e816d.scope: Deactivated successfully. 
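The pod_startup_latency_tracker entry above is internally consistent; the SLO figure matches the end-to-end startup time minus the image-pull window taken from the monotonic m= offsets:

    watchObservedRunningTime - podCreationTimestamp = 00:03:23.534502942 - 00:03:19          = 4.534502942 s   (podStartE2EDuration)
    lastFinishedPulling - firstStartedPulling       = m=+23.880732253 - m=+21.031462934      = 2.849269319 s   (image-pull window)
    podStartE2EDuration - image-pull window         = 4.534502942 - 2.849269319              = 1.685233623 s   (podStartSLOduration)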
Jul 2 00:03:23.616906 containerd[1426]: time="2024-07-02T00:03:23.616744272Z" level=info msg="shim disconnected" id=b69241a2590aad54d11032704f36360852adc01ceb8bee21244d6bb2849e816d namespace=k8s.io Jul 2 00:03:23.616906 containerd[1426]: time="2024-07-02T00:03:23.616819312Z" level=warning msg="cleaning up after shim disconnected" id=b69241a2590aad54d11032704f36360852adc01ceb8bee21244d6bb2849e816d namespace=k8s.io Jul 2 00:03:23.616906 containerd[1426]: time="2024-07-02T00:03:23.616898792Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:03:23.628693 containerd[1426]: time="2024-07-02T00:03:23.628644839Z" level=info msg="TearDown network for sandbox \"b69241a2590aad54d11032704f36360852adc01ceb8bee21244d6bb2849e816d\" successfully" Jul 2 00:03:23.628693 containerd[1426]: time="2024-07-02T00:03:23.628681599Z" level=info msg="StopPodSandbox for \"b69241a2590aad54d11032704f36360852adc01ceb8bee21244d6bb2849e816d\" returns successfully" Jul 2 00:03:23.656566 kubelet[2517]: I0702 00:03:23.655774 2517 topology_manager.go:215] "Topology Admit Handler" podUID="d5a948c7-3e40-45bd-876f-03e67023f6bf" podNamespace="calico-system" podName="calico-typha-857df79b47-5gsqj" Jul 2 00:03:23.656566 kubelet[2517]: E0702 00:03:23.655910 2517 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="993bdee5-6b33-4e35-98b1-9a9c9299205f" containerName="calico-typha" Jul 2 00:03:23.656566 kubelet[2517]: I0702 00:03:23.655991 2517 memory_manager.go:354] "RemoveStaleState removing state" podUID="993bdee5-6b33-4e35-98b1-9a9c9299205f" containerName="calico-typha" Jul 2 00:03:23.661969 systemd[1]: Created slice kubepods-besteffort-podd5a948c7_3e40_45bd_876f_03e67023f6bf.slice - libcontainer container kubepods-besteffort-podd5a948c7_3e40_45bd_876f_03e67023f6bf.slice. Jul 2 00:03:23.674633 kubelet[2517]: E0702 00:03:23.674603 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.674633 kubelet[2517]: W0702 00:03:23.674626 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.674809 kubelet[2517]: E0702 00:03:23.674650 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.674916 kubelet[2517]: E0702 00:03:23.674904 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.674916 kubelet[2517]: W0702 00:03:23.674915 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.674986 kubelet[2517]: E0702 00:03:23.674927 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:23.675089 kubelet[2517]: E0702 00:03:23.675078 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.675089 kubelet[2517]: W0702 00:03:23.675088 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.675162 kubelet[2517]: E0702 00:03:23.675098 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.675427 kubelet[2517]: E0702 00:03:23.675412 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.675427 kubelet[2517]: W0702 00:03:23.675426 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.675492 kubelet[2517]: E0702 00:03:23.675440 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.675668 kubelet[2517]: E0702 00:03:23.675657 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.675702 kubelet[2517]: W0702 00:03:23.675668 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.675702 kubelet[2517]: E0702 00:03:23.675680 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.675870 kubelet[2517]: E0702 00:03:23.675857 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.675870 kubelet[2517]: W0702 00:03:23.675869 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.675928 kubelet[2517]: E0702 00:03:23.675881 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.676115 kubelet[2517]: E0702 00:03:23.676075 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.676115 kubelet[2517]: W0702 00:03:23.676085 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.676115 kubelet[2517]: E0702 00:03:23.676096 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:23.676549 kubelet[2517]: E0702 00:03:23.676519 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.676549 kubelet[2517]: W0702 00:03:23.676534 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.676549 kubelet[2517]: E0702 00:03:23.676548 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.676794 kubelet[2517]: E0702 00:03:23.676775 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.676794 kubelet[2517]: W0702 00:03:23.676792 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.676863 kubelet[2517]: E0702 00:03:23.676805 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.676957 kubelet[2517]: E0702 00:03:23.676948 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.676957 kubelet[2517]: W0702 00:03:23.676957 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.677013 kubelet[2517]: E0702 00:03:23.676967 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.677102 kubelet[2517]: E0702 00:03:23.677093 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.677102 kubelet[2517]: W0702 00:03:23.677102 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.677201 kubelet[2517]: E0702 00:03:23.677112 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.677285 kubelet[2517]: E0702 00:03:23.677274 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.677285 kubelet[2517]: W0702 00:03:23.677284 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.677342 kubelet[2517]: E0702 00:03:23.677295 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:23.682183 kubelet[2517]: E0702 00:03:23.682125 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.682183 kubelet[2517]: W0702 00:03:23.682178 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.682339 kubelet[2517]: E0702 00:03:23.682199 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.682339 kubelet[2517]: I0702 00:03:23.682247 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/993bdee5-6b33-4e35-98b1-9a9c9299205f-typha-certs\") pod \"993bdee5-6b33-4e35-98b1-9a9c9299205f\" (UID: \"993bdee5-6b33-4e35-98b1-9a9c9299205f\") " Jul 2 00:03:23.682634 kubelet[2517]: E0702 00:03:23.682463 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.682634 kubelet[2517]: W0702 00:03:23.682476 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.682634 kubelet[2517]: E0702 00:03:23.682488 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.682634 kubelet[2517]: I0702 00:03:23.682516 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27qsw\" (UniqueName: \"kubernetes.io/projected/993bdee5-6b33-4e35-98b1-9a9c9299205f-kube-api-access-27qsw\") pod \"993bdee5-6b33-4e35-98b1-9a9c9299205f\" (UID: \"993bdee5-6b33-4e35-98b1-9a9c9299205f\") " Jul 2 00:03:23.682859 kubelet[2517]: E0702 00:03:23.682838 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.682943 kubelet[2517]: W0702 00:03:23.682921 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.683054 kubelet[2517]: E0702 00:03:23.683002 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.683662 kubelet[2517]: E0702 00:03:23.683325 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.683662 kubelet[2517]: W0702 00:03:23.683341 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.683662 kubelet[2517]: E0702 00:03:23.683364 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:23.683662 kubelet[2517]: I0702 00:03:23.683394 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/993bdee5-6b33-4e35-98b1-9a9c9299205f-tigera-ca-bundle\") pod \"993bdee5-6b33-4e35-98b1-9a9c9299205f\" (UID: \"993bdee5-6b33-4e35-98b1-9a9c9299205f\") " Jul 2 00:03:23.683662 kubelet[2517]: E0702 00:03:23.683635 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.683662 kubelet[2517]: W0702 00:03:23.683651 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.683662 kubelet[2517]: E0702 00:03:23.683671 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.684021 kubelet[2517]: E0702 00:03:23.683996 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.684021 kubelet[2517]: W0702 00:03:23.684014 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.684083 kubelet[2517]: E0702 00:03:23.684036 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.684083 kubelet[2517]: I0702 00:03:23.684068 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5f28\" (UniqueName: \"kubernetes.io/projected/d5a948c7-3e40-45bd-876f-03e67023f6bf-kube-api-access-g5f28\") pod \"calico-typha-857df79b47-5gsqj\" (UID: \"d5a948c7-3e40-45bd-876f-03e67023f6bf\") " pod="calico-system/calico-typha-857df79b47-5gsqj" Jul 2 00:03:23.684441 kubelet[2517]: E0702 00:03:23.684276 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.684441 kubelet[2517]: W0702 00:03:23.684290 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.684441 kubelet[2517]: E0702 00:03:23.684302 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:23.684441 kubelet[2517]: I0702 00:03:23.684325 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d5a948c7-3e40-45bd-876f-03e67023f6bf-typha-certs\") pod \"calico-typha-857df79b47-5gsqj\" (UID: \"d5a948c7-3e40-45bd-876f-03e67023f6bf\") " pod="calico-system/calico-typha-857df79b47-5gsqj" Jul 2 00:03:23.684574 kubelet[2517]: E0702 00:03:23.684480 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.684574 kubelet[2517]: W0702 00:03:23.684489 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.684574 kubelet[2517]: E0702 00:03:23.684500 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.684574 kubelet[2517]: I0702 00:03:23.684518 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5a948c7-3e40-45bd-876f-03e67023f6bf-tigera-ca-bundle\") pod \"calico-typha-857df79b47-5gsqj\" (UID: \"d5a948c7-3e40-45bd-876f-03e67023f6bf\") " pod="calico-system/calico-typha-857df79b47-5gsqj" Jul 2 00:03:23.689380 kubelet[2517]: E0702 00:03:23.685292 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.689380 kubelet[2517]: W0702 00:03:23.685304 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.689380 kubelet[2517]: E0702 00:03:23.685318 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.691323 kubelet[2517]: E0702 00:03:23.691137 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.691323 kubelet[2517]: W0702 00:03:23.691163 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.691323 kubelet[2517]: E0702 00:03:23.691185 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.697028 kubelet[2517]: E0702 00:03:23.696827 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.697028 kubelet[2517]: W0702 00:03:23.696852 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.697028 kubelet[2517]: E0702 00:03:23.696876 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:23.697299 kubelet[2517]: E0702 00:03:23.697284 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.697481 kubelet[2517]: W0702 00:03:23.697460 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.697708 kubelet[2517]: E0702 00:03:23.697672 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.698053 kubelet[2517]: I0702 00:03:23.697857 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/993bdee5-6b33-4e35-98b1-9a9c9299205f-kube-api-access-27qsw" (OuterVolumeSpecName: "kube-api-access-27qsw") pod "993bdee5-6b33-4e35-98b1-9a9c9299205f" (UID: "993bdee5-6b33-4e35-98b1-9a9c9299205f"). InnerVolumeSpecName "kube-api-access-27qsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:03:23.699192 kubelet[2517]: I0702 00:03:23.698156 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/993bdee5-6b33-4e35-98b1-9a9c9299205f-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "993bdee5-6b33-4e35-98b1-9a9c9299205f" (UID: "993bdee5-6b33-4e35-98b1-9a9c9299205f"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:03:23.699296 kubelet[2517]: E0702 00:03:23.699272 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.699296 kubelet[2517]: W0702 00:03:23.699293 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.699371 kubelet[2517]: E0702 00:03:23.699318 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.699709 kubelet[2517]: E0702 00:03:23.699645 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.699709 kubelet[2517]: W0702 00:03:23.699704 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.699984 kubelet[2517]: E0702 00:03:23.699924 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:23.700053 kubelet[2517]: E0702 00:03:23.700004 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.700053 kubelet[2517]: W0702 00:03:23.700016 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.700053 kubelet[2517]: E0702 00:03:23.700031 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.700497 kubelet[2517]: I0702 00:03:23.700471 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/993bdee5-6b33-4e35-98b1-9a9c9299205f-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "993bdee5-6b33-4e35-98b1-9a9c9299205f" (UID: "993bdee5-6b33-4e35-98b1-9a9c9299205f"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:03:23.785462 kubelet[2517]: E0702 00:03:23.785349 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.785462 kubelet[2517]: W0702 00:03:23.785372 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.785462 kubelet[2517]: E0702 00:03:23.785393 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.786272 kubelet[2517]: E0702 00:03:23.786248 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.786272 kubelet[2517]: W0702 00:03:23.786268 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.786396 kubelet[2517]: E0702 00:03:23.786292 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.786598 kubelet[2517]: E0702 00:03:23.786547 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.786598 kubelet[2517]: W0702 00:03:23.786560 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.786598 kubelet[2517]: E0702 00:03:23.786575 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:23.786700 kubelet[2517]: I0702 00:03:23.786631 2517 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/993bdee5-6b33-4e35-98b1-9a9c9299205f-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 2 00:03:23.786700 kubelet[2517]: I0702 00:03:23.786643 2517 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/993bdee5-6b33-4e35-98b1-9a9c9299205f-typha-certs\") on node \"localhost\" DevicePath \"\"" Jul 2 00:03:23.786700 kubelet[2517]: I0702 00:03:23.786654 2517 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-27qsw\" (UniqueName: \"kubernetes.io/projected/993bdee5-6b33-4e35-98b1-9a9c9299205f-kube-api-access-27qsw\") on node \"localhost\" DevicePath \"\"" Jul 2 00:03:23.786856 kubelet[2517]: E0702 00:03:23.786836 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.786856 kubelet[2517]: W0702 00:03:23.786851 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.786986 kubelet[2517]: E0702 00:03:23.786947 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.787032 kubelet[2517]: E0702 00:03:23.787015 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.787032 kubelet[2517]: W0702 00:03:23.787023 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.787107 kubelet[2517]: E0702 00:03:23.787070 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.787259 kubelet[2517]: E0702 00:03:23.787191 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.787259 kubelet[2517]: W0702 00:03:23.787202 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.787259 kubelet[2517]: E0702 00:03:23.787217 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.787381 kubelet[2517]: E0702 00:03:23.787368 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.787381 kubelet[2517]: W0702 00:03:23.787378 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.787455 kubelet[2517]: E0702 00:03:23.787389 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:23.787974 kubelet[2517]: E0702 00:03:23.787504 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.787974 kubelet[2517]: W0702 00:03:23.787514 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.787974 kubelet[2517]: E0702 00:03:23.787524 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.787974 kubelet[2517]: E0702 00:03:23.787669 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.787974 kubelet[2517]: W0702 00:03:23.787675 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.787974 kubelet[2517]: E0702 00:03:23.787685 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.788448 kubelet[2517]: E0702 00:03:23.788318 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.788448 kubelet[2517]: W0702 00:03:23.788336 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.788448 kubelet[2517]: E0702 00:03:23.788355 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.788735 kubelet[2517]: E0702 00:03:23.788718 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.788946 kubelet[2517]: W0702 00:03:23.788817 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.788946 kubelet[2517]: E0702 00:03:23.788878 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.789334 kubelet[2517]: E0702 00:03:23.789209 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.789736 kubelet[2517]: W0702 00:03:23.789294 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.789736 kubelet[2517]: E0702 00:03:23.789420 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:23.789915 kubelet[2517]: E0702 00:03:23.789898 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.789967 kubelet[2517]: W0702 00:03:23.789955 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.790034 kubelet[2517]: E0702 00:03:23.790024 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.790320 kubelet[2517]: E0702 00:03:23.790304 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.790390 kubelet[2517]: W0702 00:03:23.790378 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.790459 kubelet[2517]: E0702 00:03:23.790449 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.790932 kubelet[2517]: E0702 00:03:23.790913 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.791013 kubelet[2517]: W0702 00:03:23.791001 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.791208 kubelet[2517]: E0702 00:03:23.791192 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.791626 kubelet[2517]: E0702 00:03:23.791541 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.791626 kubelet[2517]: W0702 00:03:23.791558 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.791626 kubelet[2517]: E0702 00:03:23.791573 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.795825 kubelet[2517]: E0702 00:03:23.795759 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.796641 kubelet[2517]: W0702 00:03:23.796612 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.796715 kubelet[2517]: E0702 00:03:23.796659 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:23.801297 kubelet[2517]: E0702 00:03:23.801262 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:23.801395 kubelet[2517]: W0702 00:03:23.801310 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:23.801395 kubelet[2517]: E0702 00:03:23.801334 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:23.964226 kubelet[2517]: E0702 00:03:23.964183 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:23.964803 containerd[1426]: time="2024-07-02T00:03:23.964740966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-857df79b47-5gsqj,Uid:d5a948c7-3e40-45bd-876f-03e67023f6bf,Namespace:calico-system,Attempt:0,}" Jul 2 00:03:23.990054 containerd[1426]: time="2024-07-02T00:03:23.989914301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:03:23.990054 containerd[1426]: time="2024-07-02T00:03:23.990012581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:23.990054 containerd[1426]: time="2024-07-02T00:03:23.990043582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:03:23.990054 containerd[1426]: time="2024-07-02T00:03:23.990058702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:24.016361 systemd[1]: Started cri-containerd-4bb19bd021569d1b0db0066deaaa20a2799276f3bedef22e6dd61781a6e06b54.scope - libcontainer container 4bb19bd021569d1b0db0066deaaa20a2799276f3bedef22e6dd61781a6e06b54. 
Jul 2 00:03:24.046121 containerd[1426]: time="2024-07-02T00:03:24.045983134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-857df79b47-5gsqj,Uid:d5a948c7-3e40-45bd-876f-03e67023f6bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"4bb19bd021569d1b0db0066deaaa20a2799276f3bedef22e6dd61781a6e06b54\"" Jul 2 00:03:24.046695 kubelet[2517]: E0702 00:03:24.046666 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:24.056427 containerd[1426]: time="2024-07-02T00:03:24.056385900Z" level=info msg="CreateContainer within sandbox \"4bb19bd021569d1b0db0066deaaa20a2799276f3bedef22e6dd61781a6e06b54\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 00:03:24.066388 containerd[1426]: time="2024-07-02T00:03:24.066328586Z" level=info msg="CreateContainer within sandbox \"4bb19bd021569d1b0db0066deaaa20a2799276f3bedef22e6dd61781a6e06b54\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d7a8a06398f8d50e670b0a537003dc3415cd3cfc776d73da64e11238dac3fcb6\"" Jul 2 00:03:24.066886 containerd[1426]: time="2024-07-02T00:03:24.066844386Z" level=info msg="StartContainer for \"d7a8a06398f8d50e670b0a537003dc3415cd3cfc776d73da64e11238dac3fcb6\"" Jul 2 00:03:24.094627 systemd[1]: Started cri-containerd-d7a8a06398f8d50e670b0a537003dc3415cd3cfc776d73da64e11238dac3fcb6.scope - libcontainer container d7a8a06398f8d50e670b0a537003dc3415cd3cfc776d73da64e11238dac3fcb6. Jul 2 00:03:24.130623 containerd[1426]: time="2024-07-02T00:03:24.130561423Z" level=info msg="StartContainer for \"d7a8a06398f8d50e670b0a537003dc3415cd3cfc776d73da64e11238dac3fcb6\" returns successfully" Jul 2 00:03:24.232111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a-rootfs.mount: Deactivated successfully. Jul 2 00:03:24.232216 systemd[1]: var-lib-kubelet-pods-993bdee5\x2d6b33\x2d4e35\x2d98b1\x2d9a9c9299205f-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jul 2 00:03:24.232272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b69241a2590aad54d11032704f36360852adc01ceb8bee21244d6bb2849e816d-rootfs.mount: Deactivated successfully. Jul 2 00:03:24.232317 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b69241a2590aad54d11032704f36360852adc01ceb8bee21244d6bb2849e816d-shm.mount: Deactivated successfully. Jul 2 00:03:24.232376 systemd[1]: var-lib-kubelet-pods-993bdee5\x2d6b33\x2d4e35\x2d98b1\x2d9a9c9299205f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d27qsw.mount: Deactivated successfully. Jul 2 00:03:24.232423 systemd[1]: var-lib-kubelet-pods-993bdee5\x2d6b33\x2d4e35\x2d98b1\x2d9a9c9299205f-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. 
Jul 2 00:03:24.521334 kubelet[2517]: I0702 00:03:24.521256 2517 scope.go:117] "RemoveContainer" containerID="04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a" Jul 2 00:03:24.523140 containerd[1426]: time="2024-07-02T00:03:24.522599329Z" level=info msg="RemoveContainer for \"04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a\"" Jul 2 00:03:24.526327 kubelet[2517]: E0702 00:03:24.526302 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:24.531008 systemd[1]: Removed slice kubepods-besteffort-pod993bdee5_6b33_4e35_98b1_9a9c9299205f.slice - libcontainer container kubepods-besteffort-pod993bdee5_6b33_4e35_98b1_9a9c9299205f.slice. Jul 2 00:03:24.532019 containerd[1426]: time="2024-07-02T00:03:24.531564614Z" level=info msg="RemoveContainer for \"04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a\" returns successfully" Jul 2 00:03:24.539225 kubelet[2517]: I0702 00:03:24.539170 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-857df79b47-5gsqj" podStartSLOduration=4.539109698 podStartE2EDuration="4.539109698s" podCreationTimestamp="2024-07-02 00:03:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:03:24.538941698 +0000 UTC m=+25.198127202" watchObservedRunningTime="2024-07-02 00:03:24.539109698 +0000 UTC m=+25.198295202" Jul 2 00:03:24.539969 kubelet[2517]: I0702 00:03:24.539670 2517 scope.go:117] "RemoveContainer" containerID="04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a" Jul 2 00:03:24.540719 containerd[1426]: time="2024-07-02T00:03:24.540604099Z" level=error msg="ContainerStatus for \"04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a\": not found" Jul 2 00:03:24.541159 kubelet[2517]: E0702 00:03:24.541067 2517 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a\": not found" containerID="04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a" Jul 2 00:03:24.541159 kubelet[2517]: I0702 00:03:24.541131 2517 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a"} err="failed to get container status \"04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a\": rpc error: code = NotFound desc = an error occurred when try to find container \"04432162842959cca3242315a74ca6d84e375564568a9bb4f0f57f4283bf0c0a\": not found" Jul 2 00:03:24.582836 kubelet[2517]: E0702 00:03:24.582696 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.582836 kubelet[2517]: W0702 00:03:24.582723 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.582836 kubelet[2517]: E0702 00:03:24.582747 2517 plugins.go:730] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.583100 kubelet[2517]: E0702 00:03:24.583086 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.583201 kubelet[2517]: W0702 00:03:24.583186 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.583353 kubelet[2517]: E0702 00:03:24.583258 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.583461 kubelet[2517]: E0702 00:03:24.583450 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.583520 kubelet[2517]: W0702 00:03:24.583508 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.583577 kubelet[2517]: E0702 00:03:24.583568 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.583931 kubelet[2517]: E0702 00:03:24.583819 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.583931 kubelet[2517]: W0702 00:03:24.583833 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.583931 kubelet[2517]: E0702 00:03:24.583846 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.584104 kubelet[2517]: E0702 00:03:24.584092 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.584180 kubelet[2517]: W0702 00:03:24.584167 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.584330 kubelet[2517]: E0702 00:03:24.584238 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.584432 kubelet[2517]: E0702 00:03:24.584420 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.584489 kubelet[2517]: W0702 00:03:24.584479 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.584546 kubelet[2517]: E0702 00:03:24.584536 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:24.584878 kubelet[2517]: E0702 00:03:24.584750 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.584878 kubelet[2517]: W0702 00:03:24.584762 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.584878 kubelet[2517]: E0702 00:03:24.584776 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.585040 kubelet[2517]: E0702 00:03:24.585027 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.585094 kubelet[2517]: W0702 00:03:24.585083 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.585168 kubelet[2517]: E0702 00:03:24.585138 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.585407 kubelet[2517]: E0702 00:03:24.585393 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.585550 kubelet[2517]: W0702 00:03:24.585459 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.585550 kubelet[2517]: E0702 00:03:24.585478 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.585680 kubelet[2517]: E0702 00:03:24.585668 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.585734 kubelet[2517]: W0702 00:03:24.585723 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.585800 kubelet[2517]: E0702 00:03:24.585780 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.586094 kubelet[2517]: E0702 00:03:24.586003 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.586094 kubelet[2517]: W0702 00:03:24.586015 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.586094 kubelet[2517]: E0702 00:03:24.586027 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:24.586315 kubelet[2517]: E0702 00:03:24.586302 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.586373 kubelet[2517]: W0702 00:03:24.586362 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.586511 kubelet[2517]: E0702 00:03:24.586415 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.586614 kubelet[2517]: E0702 00:03:24.586603 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.586678 kubelet[2517]: W0702 00:03:24.586666 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.586730 kubelet[2517]: E0702 00:03:24.586722 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.586948 kubelet[2517]: E0702 00:03:24.586936 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.587088 kubelet[2517]: W0702 00:03:24.586999 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.587088 kubelet[2517]: E0702 00:03:24.587016 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.587256 kubelet[2517]: E0702 00:03:24.587243 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.587321 kubelet[2517]: W0702 00:03:24.587309 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.587371 kubelet[2517]: E0702 00:03:24.587362 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.592017 kubelet[2517]: E0702 00:03:24.591996 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.592017 kubelet[2517]: W0702 00:03:24.592011 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.592311 kubelet[2517]: E0702 00:03:24.592029 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:24.592311 kubelet[2517]: E0702 00:03:24.592268 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.592311 kubelet[2517]: W0702 00:03:24.592278 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.592311 kubelet[2517]: E0702 00:03:24.592295 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.593060 kubelet[2517]: E0702 00:03:24.592527 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.593060 kubelet[2517]: W0702 00:03:24.592536 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.593060 kubelet[2517]: E0702 00:03:24.592548 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.593060 kubelet[2517]: E0702 00:03:24.592742 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.593060 kubelet[2517]: W0702 00:03:24.592754 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.593060 kubelet[2517]: E0702 00:03:24.592766 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.593060 kubelet[2517]: E0702 00:03:24.593036 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.593060 kubelet[2517]: W0702 00:03:24.593045 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.593060 kubelet[2517]: E0702 00:03:24.593063 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.593523 kubelet[2517]: E0702 00:03:24.593260 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.593523 kubelet[2517]: W0702 00:03:24.593268 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.593523 kubelet[2517]: E0702 00:03:24.593285 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:24.593523 kubelet[2517]: E0702 00:03:24.593472 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.593523 kubelet[2517]: W0702 00:03:24.593479 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.593649 kubelet[2517]: E0702 00:03:24.593596 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.593649 kubelet[2517]: W0702 00:03:24.593603 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.594022 kubelet[2517]: E0702 00:03:24.593717 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.594022 kubelet[2517]: W0702 00:03:24.593726 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.594022 kubelet[2517]: E0702 00:03:24.593732 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.594022 kubelet[2517]: E0702 00:03:24.593747 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.594022 kubelet[2517]: E0702 00:03:24.593736 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.594261 kubelet[2517]: E0702 00:03:24.594061 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.594261 kubelet[2517]: W0702 00:03:24.594071 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.594261 kubelet[2517]: E0702 00:03:24.594086 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.594332 kubelet[2517]: E0702 00:03:24.594272 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.594332 kubelet[2517]: W0702 00:03:24.594280 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.594332 kubelet[2517]: E0702 00:03:24.594290 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:24.594526 kubelet[2517]: E0702 00:03:24.594488 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.594526 kubelet[2517]: W0702 00:03:24.594495 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.594526 kubelet[2517]: E0702 00:03:24.594510 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.594850 kubelet[2517]: E0702 00:03:24.594835 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.595045 kubelet[2517]: W0702 00:03:24.594932 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.595045 kubelet[2517]: E0702 00:03:24.594959 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.595352 kubelet[2517]: E0702 00:03:24.595337 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.595501 kubelet[2517]: W0702 00:03:24.595427 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.595501 kubelet[2517]: E0702 00:03:24.595461 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.595897 kubelet[2517]: E0702 00:03:24.595765 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.595897 kubelet[2517]: W0702 00:03:24.595778 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.595897 kubelet[2517]: E0702 00:03:24.595807 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.596243 kubelet[2517]: E0702 00:03:24.596136 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.596243 kubelet[2517]: W0702 00:03:24.596193 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.596243 kubelet[2517]: E0702 00:03:24.596229 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:03:24.596605 kubelet[2517]: E0702 00:03:24.596524 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.596605 kubelet[2517]: W0702 00:03:24.596538 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.596605 kubelet[2517]: E0702 00:03:24.596558 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.596923 kubelet[2517]: E0702 00:03:24.596909 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:03:24.597050 kubelet[2517]: W0702 00:03:24.596972 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:03:24.597128 kubelet[2517]: E0702 00:03:24.597098 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:03:24.827505 containerd[1426]: time="2024-07-02T00:03:24.827384704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:24.832843 containerd[1426]: time="2024-07-02T00:03:24.832797588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jul 2 00:03:24.834002 containerd[1426]: time="2024-07-02T00:03:24.833956028Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:24.836298 containerd[1426]: time="2024-07-02T00:03:24.836075909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:24.837027 containerd[1426]: time="2024-07-02T00:03:24.836971670Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.61474256s" Jul 2 00:03:24.837027 containerd[1426]: time="2024-07-02T00:03:24.837014430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jul 2 00:03:24.838682 containerd[1426]: time="2024-07-02T00:03:24.838650231Z" level=info msg="CreateContainer within sandbox \"e952993d41804daa5dff4b407679b7cafe67e1943bd507eb1d89bfcf0506f244\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:03:24.851606 containerd[1426]: time="2024-07-02T00:03:24.851551998Z" level=info msg="CreateContainer within sandbox 
\"e952993d41804daa5dff4b407679b7cafe67e1943bd507eb1d89bfcf0506f244\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5a5f31e905572f9ddc2db04d3f5fdbc860a36b693f5f0a130c6437c593282e35\"" Jul 2 00:03:24.852178 containerd[1426]: time="2024-07-02T00:03:24.852038279Z" level=info msg="StartContainer for \"5a5f31e905572f9ddc2db04d3f5fdbc860a36b693f5f0a130c6437c593282e35\"" Jul 2 00:03:24.873738 systemd[1]: run-containerd-runc-k8s.io-5a5f31e905572f9ddc2db04d3f5fdbc860a36b693f5f0a130c6437c593282e35-runc.z0FcPx.mount: Deactivated successfully. Jul 2 00:03:24.894395 systemd[1]: Started cri-containerd-5a5f31e905572f9ddc2db04d3f5fdbc860a36b693f5f0a130c6437c593282e35.scope - libcontainer container 5a5f31e905572f9ddc2db04d3f5fdbc860a36b693f5f0a130c6437c593282e35. Jul 2 00:03:24.925242 containerd[1426]: time="2024-07-02T00:03:24.925182841Z" level=info msg="StartContainer for \"5a5f31e905572f9ddc2db04d3f5fdbc860a36b693f5f0a130c6437c593282e35\" returns successfully" Jul 2 00:03:24.965918 systemd[1]: cri-containerd-5a5f31e905572f9ddc2db04d3f5fdbc860a36b693f5f0a130c6437c593282e35.scope: Deactivated successfully. Jul 2 00:03:24.997436 containerd[1426]: time="2024-07-02T00:03:24.997233922Z" level=info msg="shim disconnected" id=5a5f31e905572f9ddc2db04d3f5fdbc860a36b693f5f0a130c6437c593282e35 namespace=k8s.io Jul 2 00:03:24.997436 containerd[1426]: time="2024-07-02T00:03:24.997291122Z" level=warning msg="cleaning up after shim disconnected" id=5a5f31e905572f9ddc2db04d3f5fdbc860a36b693f5f0a130c6437c593282e35 namespace=k8s.io Jul 2 00:03:24.997436 containerd[1426]: time="2024-07-02T00:03:24.997299242Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:03:25.227490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a5f31e905572f9ddc2db04d3f5fdbc860a36b693f5f0a130c6437c593282e35-rootfs.mount: Deactivated successfully. Jul 2 00:03:25.452265 kubelet[2517]: E0702 00:03:25.452084 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q228g" podUID="deefda5b-5363-476d-b5c8-1f67ee1aea37" Jul 2 00:03:25.454157 kubelet[2517]: I0702 00:03:25.453547 2517 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="993bdee5-6b33-4e35-98b1-9a9c9299205f" path="/var/lib/kubelet/pods/993bdee5-6b33-4e35-98b1-9a9c9299205f/volumes" Jul 2 00:03:25.528116 containerd[1426]: time="2024-07-02T00:03:25.528076809Z" level=info msg="StopPodSandbox for \"e952993d41804daa5dff4b407679b7cafe67e1943bd507eb1d89bfcf0506f244\"" Jul 2 00:03:25.530276 containerd[1426]: time="2024-07-02T00:03:25.530211610Z" level=info msg="Container to stop \"5a5f31e905572f9ddc2db04d3f5fdbc860a36b693f5f0a130c6437c593282e35\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:03:25.536326 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e952993d41804daa5dff4b407679b7cafe67e1943bd507eb1d89bfcf0506f244-shm.mount: Deactivated successfully. Jul 2 00:03:25.542017 systemd[1]: cri-containerd-e952993d41804daa5dff4b407679b7cafe67e1943bd507eb1d89bfcf0506f244.scope: Deactivated successfully. Jul 2 00:03:25.566338 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e952993d41804daa5dff4b407679b7cafe67e1943bd507eb1d89bfcf0506f244-rootfs.mount: Deactivated successfully. 
Jul 2 00:03:25.576624 containerd[1426]: time="2024-07-02T00:03:25.576565795Z" level=info msg="shim disconnected" id=e952993d41804daa5dff4b407679b7cafe67e1943bd507eb1d89bfcf0506f244 namespace=k8s.io Jul 2 00:03:25.576624 containerd[1426]: time="2024-07-02T00:03:25.576615075Z" level=warning msg="cleaning up after shim disconnected" id=e952993d41804daa5dff4b407679b7cafe67e1943bd507eb1d89bfcf0506f244 namespace=k8s.io Jul 2 00:03:25.576624 containerd[1426]: time="2024-07-02T00:03:25.576624595Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:03:25.588093 containerd[1426]: time="2024-07-02T00:03:25.588032242Z" level=info msg="TearDown network for sandbox \"e952993d41804daa5dff4b407679b7cafe67e1943bd507eb1d89bfcf0506f244\" successfully" Jul 2 00:03:25.588093 containerd[1426]: time="2024-07-02T00:03:25.588069682Z" level=info msg="StopPodSandbox for \"e952993d41804daa5dff4b407679b7cafe67e1943bd507eb1d89bfcf0506f244\" returns successfully" Jul 2 00:03:25.703396 kubelet[2517]: I0702 00:03:25.703333 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-var-lib-calico\") pod \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " Jul 2 00:03:25.703396 kubelet[2517]: I0702 00:03:25.703381 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cftwp\" (UniqueName: \"kubernetes.io/projected/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-kube-api-access-cftwp\") pod \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " Jul 2 00:03:25.704089 kubelet[2517]: I0702 00:03:25.703403 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-policysync\") pod \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " Jul 2 00:03:25.704166 kubelet[2517]: I0702 00:03:25.704112 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-flexvol-driver-host\") pod \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " Jul 2 00:03:25.705044 kubelet[2517]: I0702 00:03:25.704275 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-tigera-ca-bundle\") pod \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " Jul 2 00:03:25.705044 kubelet[2517]: I0702 00:03:25.704310 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-var-run-calico\") pod \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " Jul 2 00:03:25.705044 kubelet[2517]: I0702 00:03:25.704333 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-xtables-lock\") pod \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " Jul 2 00:03:25.705044 kubelet[2517]: I0702 00:03:25.704355 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume 
started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-node-certs\") pod \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " Jul 2 00:03:25.705044 kubelet[2517]: I0702 00:03:25.704372 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-cni-net-dir\") pod \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " Jul 2 00:03:25.705044 kubelet[2517]: I0702 00:03:25.704391 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-cni-log-dir\") pod \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " Jul 2 00:03:25.705248 kubelet[2517]: I0702 00:03:25.704409 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-lib-modules\") pod \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " Jul 2 00:03:25.705248 kubelet[2517]: I0702 00:03:25.704430 2517 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-cni-bin-dir\") pod \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\" (UID: \"f5f2e1f0-3421-43f1-a4bb-277b77b6786e\") " Jul 2 00:03:25.705248 kubelet[2517]: I0702 00:03:25.704494 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "f5f2e1f0-3421-43f1-a4bb-277b77b6786e" (UID: "f5f2e1f0-3421-43f1-a4bb-277b77b6786e"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:03:25.705248 kubelet[2517]: I0702 00:03:25.704511 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f5f2e1f0-3421-43f1-a4bb-277b77b6786e" (UID: "f5f2e1f0-3421-43f1-a4bb-277b77b6786e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:03:25.705248 kubelet[2517]: I0702 00:03:25.703425 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "f5f2e1f0-3421-43f1-a4bb-277b77b6786e" (UID: "f5f2e1f0-3421-43f1-a4bb-277b77b6786e"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:03:25.705356 kubelet[2517]: I0702 00:03:25.703446 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-policysync" (OuterVolumeSpecName: "policysync") pod "f5f2e1f0-3421-43f1-a4bb-277b77b6786e" (UID: "f5f2e1f0-3421-43f1-a4bb-277b77b6786e"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:03:25.705356 kubelet[2517]: I0702 00:03:25.704559 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "f5f2e1f0-3421-43f1-a4bb-277b77b6786e" (UID: "f5f2e1f0-3421-43f1-a4bb-277b77b6786e"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:03:25.705356 kubelet[2517]: I0702 00:03:25.704900 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "f5f2e1f0-3421-43f1-a4bb-277b77b6786e" (UID: "f5f2e1f0-3421-43f1-a4bb-277b77b6786e"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:03:25.705356 kubelet[2517]: I0702 00:03:25.704937 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "f5f2e1f0-3421-43f1-a4bb-277b77b6786e" (UID: "f5f2e1f0-3421-43f1-a4bb-277b77b6786e"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:03:25.705356 kubelet[2517]: I0702 00:03:25.704971 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "f5f2e1f0-3421-43f1-a4bb-277b77b6786e" (UID: "f5f2e1f0-3421-43f1-a4bb-277b77b6786e"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:03:25.705491 kubelet[2517]: I0702 00:03:25.704988 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "f5f2e1f0-3421-43f1-a4bb-277b77b6786e" (UID: "f5f2e1f0-3421-43f1-a4bb-277b77b6786e"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:03:25.705491 kubelet[2517]: I0702 00:03:25.705005 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f5f2e1f0-3421-43f1-a4bb-277b77b6786e" (UID: "f5f2e1f0-3421-43f1-a4bb-277b77b6786e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:03:25.706267 kubelet[2517]: I0702 00:03:25.706234 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-kube-api-access-cftwp" (OuterVolumeSpecName: "kube-api-access-cftwp") pod "f5f2e1f0-3421-43f1-a4bb-277b77b6786e" (UID: "f5f2e1f0-3421-43f1-a4bb-277b77b6786e"). InnerVolumeSpecName "kube-api-access-cftwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:03:25.711290 kubelet[2517]: I0702 00:03:25.711247 2517 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-node-certs" (OuterVolumeSpecName: "node-certs") pod "f5f2e1f0-3421-43f1-a4bb-277b77b6786e" (UID: "f5f2e1f0-3421-43f1-a4bb-277b77b6786e"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:03:25.712052 systemd[1]: var-lib-kubelet-pods-f5f2e1f0\x2d3421\x2d43f1\x2da4bb\x2d277b77b6786e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcftwp.mount: Deactivated successfully. Jul 2 00:03:25.717581 systemd[1]: var-lib-kubelet-pods-f5f2e1f0\x2d3421\x2d43f1\x2da4bb\x2d277b77b6786e-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jul 2 00:03:25.806199 kubelet[2517]: I0702 00:03:25.805125 2517 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-policysync\") on node \"localhost\" DevicePath \"\"" Jul 2 00:03:25.806199 kubelet[2517]: I0702 00:03:25.805352 2517 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Jul 2 00:03:25.806199 kubelet[2517]: I0702 00:03:25.805371 2517 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-node-certs\") on node \"localhost\" DevicePath \"\"" Jul 2 00:03:25.806199 kubelet[2517]: I0702 00:03:25.805382 2517 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Jul 2 00:03:25.806199 kubelet[2517]: I0702 00:03:25.805391 2517 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 2 00:03:25.806199 kubelet[2517]: I0702 00:03:25.805402 2517 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-var-run-calico\") on node \"localhost\" DevicePath \"\"" Jul 2 00:03:25.806199 kubelet[2517]: I0702 00:03:25.805411 2517 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 00:03:25.806199 kubelet[2517]: I0702 00:03:25.805420 2517 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Jul 2 00:03:25.806560 kubelet[2517]: I0702 00:03:25.805428 2517 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 00:03:25.806560 kubelet[2517]: I0702 00:03:25.805438 2517 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Jul 2 00:03:25.806560 kubelet[2517]: I0702 00:03:25.805451 2517 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Jul 2 00:03:25.806560 kubelet[2517]: I0702 00:03:25.805462 2517 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cftwp\" (UniqueName: 
\"kubernetes.io/projected/f5f2e1f0-3421-43f1-a4bb-277b77b6786e-kube-api-access-cftwp\") on node \"localhost\" DevicePath \"\"" Jul 2 00:03:26.530625 kubelet[2517]: I0702 00:03:26.530432 2517 scope.go:117] "RemoveContainer" containerID="5a5f31e905572f9ddc2db04d3f5fdbc860a36b693f5f0a130c6437c593282e35" Jul 2 00:03:26.532579 containerd[1426]: time="2024-07-02T00:03:26.532247014Z" level=info msg="RemoveContainer for \"5a5f31e905572f9ddc2db04d3f5fdbc860a36b693f5f0a130c6437c593282e35\"" Jul 2 00:03:26.536029 containerd[1426]: time="2024-07-02T00:03:26.535927536Z" level=info msg="RemoveContainer for \"5a5f31e905572f9ddc2db04d3f5fdbc860a36b693f5f0a130c6437c593282e35\" returns successfully" Jul 2 00:03:26.536351 systemd[1]: Removed slice kubepods-besteffort-podf5f2e1f0_3421_43f1_a4bb_277b77b6786e.slice - libcontainer container kubepods-besteffort-podf5f2e1f0_3421_43f1_a4bb_277b77b6786e.slice. Jul 2 00:03:26.569583 kubelet[2517]: I0702 00:03:26.569538 2517 topology_manager.go:215] "Topology Admit Handler" podUID="615c90ab-f3ad-4912-b9cd-dc0e7dafb097" podNamespace="calico-system" podName="calico-node-qltzj" Jul 2 00:03:26.569795 kubelet[2517]: E0702 00:03:26.569596 2517 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f5f2e1f0-3421-43f1-a4bb-277b77b6786e" containerName="flexvol-driver" Jul 2 00:03:26.569795 kubelet[2517]: I0702 00:03:26.569638 2517 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5f2e1f0-3421-43f1-a4bb-277b77b6786e" containerName="flexvol-driver" Jul 2 00:03:26.578836 systemd[1]: Created slice kubepods-besteffort-pod615c90ab_f3ad_4912_b9cd_dc0e7dafb097.slice - libcontainer container kubepods-besteffort-pod615c90ab_f3ad_4912_b9cd_dc0e7dafb097.slice. Jul 2 00:03:26.611678 kubelet[2517]: I0702 00:03:26.611523 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/615c90ab-f3ad-4912-b9cd-dc0e7dafb097-lib-modules\") pod \"calico-node-qltzj\" (UID: \"615c90ab-f3ad-4912-b9cd-dc0e7dafb097\") " pod="calico-system/calico-node-qltzj" Jul 2 00:03:26.611678 kubelet[2517]: I0702 00:03:26.611568 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/615c90ab-f3ad-4912-b9cd-dc0e7dafb097-tigera-ca-bundle\") pod \"calico-node-qltzj\" (UID: \"615c90ab-f3ad-4912-b9cd-dc0e7dafb097\") " pod="calico-system/calico-node-qltzj" Jul 2 00:03:26.611678 kubelet[2517]: I0702 00:03:26.611588 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/615c90ab-f3ad-4912-b9cd-dc0e7dafb097-node-certs\") pod \"calico-node-qltzj\" (UID: \"615c90ab-f3ad-4912-b9cd-dc0e7dafb097\") " pod="calico-system/calico-node-qltzj" Jul 2 00:03:26.611919 kubelet[2517]: I0702 00:03:26.611738 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/615c90ab-f3ad-4912-b9cd-dc0e7dafb097-xtables-lock\") pod \"calico-node-qltzj\" (UID: \"615c90ab-f3ad-4912-b9cd-dc0e7dafb097\") " pod="calico-system/calico-node-qltzj" Jul 2 00:03:26.611919 kubelet[2517]: I0702 00:03:26.611783 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/615c90ab-f3ad-4912-b9cd-dc0e7dafb097-var-lib-calico\") pod \"calico-node-qltzj\" (UID: 
\"615c90ab-f3ad-4912-b9cd-dc0e7dafb097\") " pod="calico-system/calico-node-qltzj" Jul 2 00:03:26.611919 kubelet[2517]: I0702 00:03:26.611807 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/615c90ab-f3ad-4912-b9cd-dc0e7dafb097-policysync\") pod \"calico-node-qltzj\" (UID: \"615c90ab-f3ad-4912-b9cd-dc0e7dafb097\") " pod="calico-system/calico-node-qltzj" Jul 2 00:03:26.611919 kubelet[2517]: I0702 00:03:26.611828 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/615c90ab-f3ad-4912-b9cd-dc0e7dafb097-cni-log-dir\") pod \"calico-node-qltzj\" (UID: \"615c90ab-f3ad-4912-b9cd-dc0e7dafb097\") " pod="calico-system/calico-node-qltzj" Jul 2 00:03:26.611919 kubelet[2517]: I0702 00:03:26.611850 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/615c90ab-f3ad-4912-b9cd-dc0e7dafb097-flexvol-driver-host\") pod \"calico-node-qltzj\" (UID: \"615c90ab-f3ad-4912-b9cd-dc0e7dafb097\") " pod="calico-system/calico-node-qltzj" Jul 2 00:03:26.612030 kubelet[2517]: I0702 00:03:26.611872 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxg6c\" (UniqueName: \"kubernetes.io/projected/615c90ab-f3ad-4912-b9cd-dc0e7dafb097-kube-api-access-jxg6c\") pod \"calico-node-qltzj\" (UID: \"615c90ab-f3ad-4912-b9cd-dc0e7dafb097\") " pod="calico-system/calico-node-qltzj" Jul 2 00:03:26.612030 kubelet[2517]: I0702 00:03:26.611896 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/615c90ab-f3ad-4912-b9cd-dc0e7dafb097-var-run-calico\") pod \"calico-node-qltzj\" (UID: \"615c90ab-f3ad-4912-b9cd-dc0e7dafb097\") " pod="calico-system/calico-node-qltzj" Jul 2 00:03:26.612030 kubelet[2517]: I0702 00:03:26.611916 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/615c90ab-f3ad-4912-b9cd-dc0e7dafb097-cni-bin-dir\") pod \"calico-node-qltzj\" (UID: \"615c90ab-f3ad-4912-b9cd-dc0e7dafb097\") " pod="calico-system/calico-node-qltzj" Jul 2 00:03:26.612030 kubelet[2517]: I0702 00:03:26.611934 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/615c90ab-f3ad-4912-b9cd-dc0e7dafb097-cni-net-dir\") pod \"calico-node-qltzj\" (UID: \"615c90ab-f3ad-4912-b9cd-dc0e7dafb097\") " pod="calico-system/calico-node-qltzj" Jul 2 00:03:26.884711 kubelet[2517]: E0702 00:03:26.884492 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:26.885922 containerd[1426]: time="2024-07-02T00:03:26.885363153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qltzj,Uid:615c90ab-f3ad-4912-b9cd-dc0e7dafb097,Namespace:calico-system,Attempt:0,}" Jul 2 00:03:26.902010 containerd[1426]: time="2024-07-02T00:03:26.901909921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:03:26.902010 containerd[1426]: time="2024-07-02T00:03:26.901976761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:26.902010 containerd[1426]: time="2024-07-02T00:03:26.902004481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:03:26.902435 containerd[1426]: time="2024-07-02T00:03:26.902394881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:26.931369 systemd[1]: Started cri-containerd-b39f5d05db26011da78125596e601838076184b969e59cb317c76677394711c0.scope - libcontainer container b39f5d05db26011da78125596e601838076184b969e59cb317c76677394711c0. Jul 2 00:03:26.949065 containerd[1426]: time="2024-07-02T00:03:26.949003065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qltzj,Uid:615c90ab-f3ad-4912-b9cd-dc0e7dafb097,Namespace:calico-system,Attempt:0,} returns sandbox id \"b39f5d05db26011da78125596e601838076184b969e59cb317c76677394711c0\"" Jul 2 00:03:26.949785 kubelet[2517]: E0702 00:03:26.949754 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:26.951608 containerd[1426]: time="2024-07-02T00:03:26.951566586Z" level=info msg="CreateContainer within sandbox \"b39f5d05db26011da78125596e601838076184b969e59cb317c76677394711c0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:03:26.962590 containerd[1426]: time="2024-07-02T00:03:26.962541512Z" level=info msg="CreateContainer within sandbox \"b39f5d05db26011da78125596e601838076184b969e59cb317c76677394711c0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"014e2d1635a96ca825d3c959c79e845f0a4566af6e2018a2a1dbd542d0bba1ad\"" Jul 2 00:03:26.963095 containerd[1426]: time="2024-07-02T00:03:26.962997512Z" level=info msg="StartContainer for \"014e2d1635a96ca825d3c959c79e845f0a4566af6e2018a2a1dbd542d0bba1ad\"" Jul 2 00:03:26.997389 systemd[1]: Started cri-containerd-014e2d1635a96ca825d3c959c79e845f0a4566af6e2018a2a1dbd542d0bba1ad.scope - libcontainer container 014e2d1635a96ca825d3c959c79e845f0a4566af6e2018a2a1dbd542d0bba1ad. Jul 2 00:03:27.023985 containerd[1426]: time="2024-07-02T00:03:27.023942622Z" level=info msg="StartContainer for \"014e2d1635a96ca825d3c959c79e845f0a4566af6e2018a2a1dbd542d0bba1ad\" returns successfully" Jul 2 00:03:27.035358 systemd[1]: cri-containerd-014e2d1635a96ca825d3c959c79e845f0a4566af6e2018a2a1dbd542d0bba1ad.scope: Deactivated successfully. 
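The kubelet entries above walk through the teardown of pod f5f2e1f0-3421-43f1-a4bb-277b77b6786e volume by volume and then the admission of its replacement calico-node-qltzj with the same volume set. As an illustrative way of reading long stretches of these messages (a sketch written against the exact wording in this log, not a tool the log refers to), the TearDown confirmations can be grouped by pod UID:

import re
from collections import defaultdict

# Matches the kubelet "UnmountVolume.TearDown succeeded" messages shown above;
# the pattern assumes nothing beyond that exact wording.
TEARDOWN = re.compile(
    r'UnmountVolume\.TearDown succeeded for volume '
    r'"(?P<volume>[^"]+)".*?\(UID: "(?P<uid>[^"]+)"\)'
)

def teardown_by_pod(journal_text: str) -> dict:
    """Group successfully unmounted volume names by pod UID."""
    result = defaultdict(list)
    for m in TEARDOWN.finditer(journal_text):
        result[m.group("uid")].append(m.group("volume"))
    return dict(result)

# Fed the entries above, this associates the host-path, configmap, secret and
# projected volumes with pod UID f5f2e1f0-3421-43f1-a4bb-277b77b6786e.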
Jul 2 00:03:27.068513 containerd[1426]: time="2024-07-02T00:03:27.068396523Z" level=info msg="shim disconnected" id=014e2d1635a96ca825d3c959c79e845f0a4566af6e2018a2a1dbd542d0bba1ad namespace=k8s.io Jul 2 00:03:27.068513 containerd[1426]: time="2024-07-02T00:03:27.068445083Z" level=warning msg="cleaning up after shim disconnected" id=014e2d1635a96ca825d3c959c79e845f0a4566af6e2018a2a1dbd542d0bba1ad namespace=k8s.io Jul 2 00:03:27.068513 containerd[1426]: time="2024-07-02T00:03:27.068453763Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:03:27.451192 kubelet[2517]: E0702 00:03:27.450395 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q228g" podUID="deefda5b-5363-476d-b5c8-1f67ee1aea37" Jul 2 00:03:27.457057 kubelet[2517]: I0702 00:03:27.457007 2517 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f5f2e1f0-3421-43f1-a4bb-277b77b6786e" path="/var/lib/kubelet/pods/f5f2e1f0-3421-43f1-a4bb-277b77b6786e/volumes" Jul 2 00:03:27.533587 kubelet[2517]: E0702 00:03:27.533551 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:27.535438 containerd[1426]: time="2024-07-02T00:03:27.534671865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 00:03:29.450611 kubelet[2517]: E0702 00:03:29.450544 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q228g" podUID="deefda5b-5363-476d-b5c8-1f67ee1aea37" Jul 2 00:03:31.451334 kubelet[2517]: E0702 00:03:31.450873 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-q228g" podUID="deefda5b-5363-476d-b5c8-1f67ee1aea37" Jul 2 00:03:32.382465 containerd[1426]: time="2024-07-02T00:03:32.382414398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:32.383716 containerd[1426]: time="2024-07-02T00:03:32.383683758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jul 2 00:03:32.384946 containerd[1426]: time="2024-07-02T00:03:32.384912078Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:32.389047 containerd[1426]: time="2024-07-02T00:03:32.388991520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:32.389800 containerd[1426]: time="2024-07-02T00:03:32.389747360Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo 
digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 4.853928735s" Jul 2 00:03:32.389800 containerd[1426]: time="2024-07-02T00:03:32.389793480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jul 2 00:03:32.398126 containerd[1426]: time="2024-07-02T00:03:32.398056203Z" level=info msg="CreateContainer within sandbox \"b39f5d05db26011da78125596e601838076184b969e59cb317c76677394711c0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 00:03:32.414330 containerd[1426]: time="2024-07-02T00:03:32.414286049Z" level=info msg="CreateContainer within sandbox \"b39f5d05db26011da78125596e601838076184b969e59cb317c76677394711c0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a272303af0451a322e5d85088161817cc6b27a9c6a5b7dc797a78cfa52904aed\"" Jul 2 00:03:32.417678 containerd[1426]: time="2024-07-02T00:03:32.417641970Z" level=info msg="StartContainer for \"a272303af0451a322e5d85088161817cc6b27a9c6a5b7dc797a78cfa52904aed\"" Jul 2 00:03:32.449384 systemd[1]: Started cri-containerd-a272303af0451a322e5d85088161817cc6b27a9c6a5b7dc797a78cfa52904aed.scope - libcontainer container a272303af0451a322e5d85088161817cc6b27a9c6a5b7dc797a78cfa52904aed. Jul 2 00:03:32.480287 containerd[1426]: time="2024-07-02T00:03:32.480243551Z" level=info msg="StartContainer for \"a272303af0451a322e5d85088161817cc6b27a9c6a5b7dc797a78cfa52904aed\" returns successfully" Jul 2 00:03:32.567258 kubelet[2517]: E0702 00:03:32.566883 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:33.051441 systemd[1]: cri-containerd-a272303af0451a322e5d85088161817cc6b27a9c6a5b7dc797a78cfa52904aed.scope: Deactivated successfully. Jul 2 00:03:33.054009 kubelet[2517]: I0702 00:03:33.053837 2517 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 00:03:33.080622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a272303af0451a322e5d85088161817cc6b27a9c6a5b7dc797a78cfa52904aed-rootfs.mount: Deactivated successfully. Jul 2 00:03:33.086220 kubelet[2517]: I0702 00:03:33.085688 2517 topology_manager.go:215] "Topology Admit Handler" podUID="90c438d1-b083-497d-a344-5fac16fe8bda" podNamespace="kube-system" podName="coredns-76f75df574-md95h" Jul 2 00:03:33.088373 kubelet[2517]: I0702 00:03:33.087515 2517 topology_manager.go:215] "Topology Admit Handler" podUID="29823086-9a6f-43f9-9bc0-93ad25deb8fe" podNamespace="kube-system" podName="coredns-76f75df574-hwst2" Jul 2 00:03:33.088503 kubelet[2517]: I0702 00:03:33.088415 2517 topology_manager.go:215] "Topology Admit Handler" podUID="f4df34b6-4336-4a49-a4ba-110e2697cd8a" podNamespace="calico-system" podName="calico-kube-controllers-7fcdbd9947-zhk75" Jul 2 00:03:33.097592 systemd[1]: Created slice kubepods-burstable-pod90c438d1_b083_497d_a344_5fac16fe8bda.slice - libcontainer container kubepods-burstable-pod90c438d1_b083_497d_a344_5fac16fe8bda.slice. Jul 2 00:03:33.106400 systemd[1]: Created slice kubepods-besteffort-podf4df34b6_4336_4a49_a4ba_110e2697cd8a.slice - libcontainer container kubepods-besteffort-podf4df34b6_4336_4a49_a4ba_110e2697cd8a.slice. 
Jul 2 00:03:33.110371 systemd[1]: Created slice kubepods-burstable-pod29823086_9a6f_43f9_9bc0_93ad25deb8fe.slice - libcontainer container kubepods-burstable-pod29823086_9a6f_43f9_9bc0_93ad25deb8fe.slice. Jul 2 00:03:33.214422 containerd[1426]: time="2024-07-02T00:03:33.214337759Z" level=info msg="shim disconnected" id=a272303af0451a322e5d85088161817cc6b27a9c6a5b7dc797a78cfa52904aed namespace=k8s.io Jul 2 00:03:33.214422 containerd[1426]: time="2024-07-02T00:03:33.214409719Z" level=warning msg="cleaning up after shim disconnected" id=a272303af0451a322e5d85088161817cc6b27a9c6a5b7dc797a78cfa52904aed namespace=k8s.io Jul 2 00:03:33.214422 containerd[1426]: time="2024-07-02T00:03:33.214419599Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:03:33.266978 kubelet[2517]: I0702 00:03:33.266885 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knghv\" (UniqueName: \"kubernetes.io/projected/f4df34b6-4336-4a49-a4ba-110e2697cd8a-kube-api-access-knghv\") pod \"calico-kube-controllers-7fcdbd9947-zhk75\" (UID: \"f4df34b6-4336-4a49-a4ba-110e2697cd8a\") " pod="calico-system/calico-kube-controllers-7fcdbd9947-zhk75" Jul 2 00:03:33.266978 kubelet[2517]: I0702 00:03:33.266946 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29823086-9a6f-43f9-9bc0-93ad25deb8fe-config-volume\") pod \"coredns-76f75df574-hwst2\" (UID: \"29823086-9a6f-43f9-9bc0-93ad25deb8fe\") " pod="kube-system/coredns-76f75df574-hwst2" Jul 2 00:03:33.267213 kubelet[2517]: I0702 00:03:33.267036 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4df34b6-4336-4a49-a4ba-110e2697cd8a-tigera-ca-bundle\") pod \"calico-kube-controllers-7fcdbd9947-zhk75\" (UID: \"f4df34b6-4336-4a49-a4ba-110e2697cd8a\") " pod="calico-system/calico-kube-controllers-7fcdbd9947-zhk75" Jul 2 00:03:33.267213 kubelet[2517]: I0702 00:03:33.267085 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90c438d1-b083-497d-a344-5fac16fe8bda-config-volume\") pod \"coredns-76f75df574-md95h\" (UID: \"90c438d1-b083-497d-a344-5fac16fe8bda\") " pod="kube-system/coredns-76f75df574-md95h" Jul 2 00:03:33.267213 kubelet[2517]: I0702 00:03:33.267107 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqb2m\" (UniqueName: \"kubernetes.io/projected/90c438d1-b083-497d-a344-5fac16fe8bda-kube-api-access-pqb2m\") pod \"coredns-76f75df574-md95h\" (UID: \"90c438d1-b083-497d-a344-5fac16fe8bda\") " pod="kube-system/coredns-76f75df574-md95h" Jul 2 00:03:33.267213 kubelet[2517]: I0702 00:03:33.267128 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnd6q\" (UniqueName: \"kubernetes.io/projected/29823086-9a6f-43f9-9bc0-93ad25deb8fe-kube-api-access-tnd6q\") pod \"coredns-76f75df574-hwst2\" (UID: \"29823086-9a6f-43f9-9bc0-93ad25deb8fe\") " pod="kube-system/coredns-76f75df574-hwst2" Jul 2 00:03:33.413212 kubelet[2517]: E0702 00:03:33.412951 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:33.413949 containerd[1426]: 
time="2024-07-02T00:03:33.413673543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-md95h,Uid:90c438d1-b083-497d-a344-5fac16fe8bda,Namespace:kube-system,Attempt:0,}" Jul 2 00:03:33.413949 containerd[1426]: time="2024-07-02T00:03:33.413792623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcdbd9947-zhk75,Uid:f4df34b6-4336-4a49-a4ba-110e2697cd8a,Namespace:calico-system,Attempt:0,}" Jul 2 00:03:33.419124 kubelet[2517]: E0702 00:03:33.418703 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:33.419806 containerd[1426]: time="2024-07-02T00:03:33.419712065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hwst2,Uid:29823086-9a6f-43f9-9bc0-93ad25deb8fe,Namespace:kube-system,Attempt:0,}" Jul 2 00:03:33.481046 systemd[1]: Created slice kubepods-besteffort-poddeefda5b_5363_476d_b5c8_1f67ee1aea37.slice - libcontainer container kubepods-besteffort-poddeefda5b_5363_476d_b5c8_1f67ee1aea37.slice. Jul 2 00:03:33.484704 containerd[1426]: time="2024-07-02T00:03:33.484652126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q228g,Uid:deefda5b-5363-476d-b5c8-1f67ee1aea37,Namespace:calico-system,Attempt:0,}" Jul 2 00:03:33.572016 kubelet[2517]: E0702 00:03:33.571984 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:33.577835 containerd[1426]: time="2024-07-02T00:03:33.577363636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 00:03:33.770875 containerd[1426]: time="2024-07-02T00:03:33.770818218Z" level=error msg="Failed to destroy network for sandbox \"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:03:33.771630 containerd[1426]: time="2024-07-02T00:03:33.771202739Z" level=error msg="Failed to destroy network for sandbox \"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:03:33.771630 containerd[1426]: time="2024-07-02T00:03:33.771350779Z" level=error msg="encountered an error cleaning up failed sandbox \"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:03:33.771630 containerd[1426]: time="2024-07-02T00:03:33.771404299Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcdbd9947-zhk75,Uid:f4df34b6-4336-4a49-a4ba-110e2697cd8a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 
00:03:33.771630 containerd[1426]: time="2024-07-02T00:03:33.771488899Z" level=error msg="encountered an error cleaning up failed sandbox \"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:03:33.771630 containerd[1426]: time="2024-07-02T00:03:33.771533139Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q228g,Uid:deefda5b-5363-476d-b5c8-1f67ee1aea37,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:03:33.771843 kubelet[2517]: E0702 00:03:33.771651 2517 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:03:33.771843 kubelet[2517]: E0702 00:03:33.771711 2517 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fcdbd9947-zhk75" Jul 2 00:03:33.771843 kubelet[2517]: E0702 00:03:33.771746 2517 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fcdbd9947-zhk75" Jul 2 00:03:33.771939 kubelet[2517]: E0702 00:03:33.771819 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7fcdbd9947-zhk75_calico-system(f4df34b6-4336-4a49-a4ba-110e2697cd8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7fcdbd9947-zhk75_calico-system(f4df34b6-4336-4a49-a4ba-110e2697cd8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fcdbd9947-zhk75" podUID="f4df34b6-4336-4a49-a4ba-110e2697cd8a" Jul 2 00:03:33.772178 kubelet[2517]: E0702 00:03:33.772097 2517 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:03:33.773210 kubelet[2517]: E0702 00:03:33.772307 2517 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q228g" Jul 2 00:03:33.773210 kubelet[2517]: E0702 00:03:33.772335 2517 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-q228g" Jul 2 00:03:33.773210 kubelet[2517]: E0702 00:03:33.772395 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-q228g_calico-system(deefda5b-5363-476d-b5c8-1f67ee1aea37)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-q228g_calico-system(deefda5b-5363-476d-b5c8-1f67ee1aea37)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q228g" podUID="deefda5b-5363-476d-b5c8-1f67ee1aea37" Jul 2 00:03:33.775854 containerd[1426]: time="2024-07-02T00:03:33.775805660Z" level=error msg="Failed to destroy network for sandbox \"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:03:33.776471 containerd[1426]: time="2024-07-02T00:03:33.776433460Z" level=error msg="encountered an error cleaning up failed sandbox \"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:03:33.776606 containerd[1426]: time="2024-07-02T00:03:33.776583580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hwst2,Uid:29823086-9a6f-43f9-9bc0-93ad25deb8fe,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:03:33.777346 kubelet[2517]: E0702 00:03:33.777314 2517 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:03:33.777433 kubelet[2517]: E0702 00:03:33.777366 2517 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hwst2" Jul 2 00:03:33.777433 kubelet[2517]: E0702 00:03:33.777400 2517 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hwst2" Jul 2 00:03:33.777517 kubelet[2517]: E0702 00:03:33.777443 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-hwst2_kube-system(29823086-9a6f-43f9-9bc0-93ad25deb8fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-hwst2_kube-system(29823086-9a6f-43f9-9bc0-93ad25deb8fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-hwst2" podUID="29823086-9a6f-43f9-9bc0-93ad25deb8fe" Jul 2 00:03:33.778852 containerd[1426]: time="2024-07-02T00:03:33.778631661Z" level=error msg="Failed to destroy network for sandbox \"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:03:33.779474 containerd[1426]: time="2024-07-02T00:03:33.779257141Z" level=error msg="encountered an error cleaning up failed sandbox \"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:03:33.779474 containerd[1426]: time="2024-07-02T00:03:33.779322221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-md95h,Uid:90c438d1-b083-497d-a344-5fac16fe8bda,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:03:33.779571 kubelet[2517]: E0702 00:03:33.779551 2517 remote_runtime.go:193] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:03:33.779645 kubelet[2517]: E0702 00:03:33.779619 2517 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-md95h" Jul 2 00:03:33.779645 kubelet[2517]: E0702 00:03:33.779643 2517 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-md95h" Jul 2 00:03:33.779837 kubelet[2517]: E0702 00:03:33.779798 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-md95h_kube-system(90c438d1-b083-497d-a344-5fac16fe8bda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-md95h_kube-system(90c438d1-b083-497d-a344-5fac16fe8bda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-md95h" podUID="90c438d1-b083-497d-a344-5fac16fe8bda" Jul 2 00:03:34.408925 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff-shm.mount: Deactivated successfully. Jul 2 00:03:34.409017 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37-shm.mount: Deactivated successfully. Jul 2 00:03:34.409073 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c-shm.mount: Deactivated successfully. 
Jul 2 00:03:34.576172 kubelet[2517]: I0702 00:03:34.574799 2517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Jul 2 00:03:34.576767 containerd[1426]: time="2024-07-02T00:03:34.575787666Z" level=info msg="StopPodSandbox for \"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\"" Jul 2 00:03:34.576767 containerd[1426]: time="2024-07-02T00:03:34.575990186Z" level=info msg="Ensure that sandbox 3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff in task-service has been cleanup successfully" Jul 2 00:03:34.577674 kubelet[2517]: I0702 00:03:34.576522 2517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Jul 2 00:03:34.578422 containerd[1426]: time="2024-07-02T00:03:34.578350027Z" level=info msg="StopPodSandbox for \"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\"" Jul 2 00:03:34.578695 containerd[1426]: time="2024-07-02T00:03:34.578574547Z" level=info msg="Ensure that sandbox 02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3 in task-service has been cleanup successfully" Jul 2 00:03:34.579189 kubelet[2517]: I0702 00:03:34.579163 2517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Jul 2 00:03:34.580165 containerd[1426]: time="2024-07-02T00:03:34.579719028Z" level=info msg="StopPodSandbox for \"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\"" Jul 2 00:03:34.580165 containerd[1426]: time="2024-07-02T00:03:34.579995628Z" level=info msg="Ensure that sandbox 74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37 in task-service has been cleanup successfully" Jul 2 00:03:34.583379 kubelet[2517]: I0702 00:03:34.582969 2517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Jul 2 00:03:34.583643 containerd[1426]: time="2024-07-02T00:03:34.583595149Z" level=info msg="StopPodSandbox for \"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\"" Jul 2 00:03:34.584530 containerd[1426]: time="2024-07-02T00:03:34.584462389Z" level=info msg="Ensure that sandbox 461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c in task-service has been cleanup successfully" Jul 2 00:03:34.619110 containerd[1426]: time="2024-07-02T00:03:34.619049559Z" level=error msg="StopPodSandbox for \"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\" failed" error="failed to destroy network for sandbox \"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:03:34.619407 kubelet[2517]: E0702 00:03:34.619313 2517 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Jul 2 00:03:34.619407 kubelet[2517]: 
E0702 00:03:34.619363 2517 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37"} Jul 2 00:03:34.619407 kubelet[2517]: E0702 00:03:34.619399 2517 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f4df34b6-4336-4a49-a4ba-110e2697cd8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:03:34.619710 kubelet[2517]: E0702 00:03:34.619427 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f4df34b6-4336-4a49-a4ba-110e2697cd8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fcdbd9947-zhk75" podUID="f4df34b6-4336-4a49-a4ba-110e2697cd8a" Jul 2 00:03:34.623593 containerd[1426]: time="2024-07-02T00:03:34.623545641Z" level=error msg="StopPodSandbox for \"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\" failed" error="failed to destroy network for sandbox \"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:03:34.624029 kubelet[2517]: E0702 00:03:34.623893 2517 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Jul 2 00:03:34.624029 kubelet[2517]: E0702 00:03:34.623935 2517 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c"} Jul 2 00:03:34.624029 kubelet[2517]: E0702 00:03:34.623970 2517 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"90c438d1-b083-497d-a344-5fac16fe8bda\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:03:34.624029 kubelet[2517]: E0702 00:03:34.623996 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"90c438d1-b083-497d-a344-5fac16fe8bda\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-md95h" podUID="90c438d1-b083-497d-a344-5fac16fe8bda" Jul 2 00:03:34.624991 containerd[1426]: time="2024-07-02T00:03:34.624948961Z" level=error msg="StopPodSandbox for \"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\" failed" error="failed to destroy network for sandbox \"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:03:34.625184 kubelet[2517]: E0702 00:03:34.625119 2517 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Jul 2 00:03:34.625184 kubelet[2517]: E0702 00:03:34.625164 2517 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff"} Jul 2 00:03:34.625260 kubelet[2517]: E0702 00:03:34.625195 2517 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"29823086-9a6f-43f9-9bc0-93ad25deb8fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:03:34.625260 kubelet[2517]: E0702 00:03:34.625222 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"29823086-9a6f-43f9-9bc0-93ad25deb8fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-hwst2" podUID="29823086-9a6f-43f9-9bc0-93ad25deb8fe" Jul 2 00:03:34.637000 containerd[1426]: time="2024-07-02T00:03:34.636956285Z" level=error msg="StopPodSandbox for \"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\" failed" error="failed to destroy network for sandbox \"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:03:34.637379 kubelet[2517]: E0702 00:03:34.637350 2517 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Jul 2 00:03:34.637456 kubelet[2517]: E0702 00:03:34.637398 2517 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3"} Jul 2 00:03:34.637513 kubelet[2517]: E0702 00:03:34.637460 2517 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"deefda5b-5363-476d-b5c8-1f67ee1aea37\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:03:34.637513 kubelet[2517]: E0702 00:03:34.637488 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"deefda5b-5363-476d-b5c8-1f67ee1aea37\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-q228g" podUID="deefda5b-5363-476d-b5c8-1f67ee1aea37" Jul 2 00:03:35.922259 systemd[1]: Started sshd@7-10.0.0.44:22-10.0.0.1:58430.service - OpenSSH per-connection server daemon (10.0.0.1:58430). Jul 2 00:03:35.986655 sshd[3900]: Accepted publickey for core from 10.0.0.1 port 58430 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:03:35.988648 sshd[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:03:35.994308 systemd-logind[1416]: New session 8 of user core. Jul 2 00:03:36.002344 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 00:03:36.160098 sshd[3900]: pam_unix(sshd:session): session closed for user core Jul 2 00:03:36.164820 systemd-logind[1416]: Session 8 logged out. Waiting for processes to exit. Jul 2 00:03:36.165416 systemd[1]: sshd@7-10.0.0.44:22-10.0.0.1:58430.service: Deactivated successfully. Jul 2 00:03:36.168259 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 00:03:36.169546 systemd-logind[1416]: Removed session 8. Jul 2 00:03:37.125202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2547925763.mount: Deactivated successfully. 
Jul 2 00:03:37.314034 containerd[1426]: time="2024-07-02T00:03:37.313685622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:37.315627 containerd[1426]: time="2024-07-02T00:03:37.315432222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jul 2 00:03:37.318189 containerd[1426]: time="2024-07-02T00:03:37.316712702Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:37.371942 containerd[1426]: time="2024-07-02T00:03:37.371630236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:37.373342 containerd[1426]: time="2024-07-02T00:03:37.373217997Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 3.79580276s" Jul 2 00:03:37.373342 containerd[1426]: time="2024-07-02T00:03:37.373256037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jul 2 00:03:37.383130 containerd[1426]: time="2024-07-02T00:03:37.382185519Z" level=info msg="CreateContainer within sandbox \"b39f5d05db26011da78125596e601838076184b969e59cb317c76677394711c0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 00:03:37.408478 containerd[1426]: time="2024-07-02T00:03:37.408422525Z" level=info msg="CreateContainer within sandbox \"b39f5d05db26011da78125596e601838076184b969e59cb317c76677394711c0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"88c196d369a41a53c9dca4a961a8af5b9b40f40c010098e506bfd5af07e97f64\"" Jul 2 00:03:37.412074 containerd[1426]: time="2024-07-02T00:03:37.408993925Z" level=info msg="StartContainer for \"88c196d369a41a53c9dca4a961a8af5b9b40f40c010098e506bfd5af07e97f64\"" Jul 2 00:03:37.456336 systemd[1]: Started cri-containerd-88c196d369a41a53c9dca4a961a8af5b9b40f40c010098e506bfd5af07e97f64.scope - libcontainer container 88c196d369a41a53c9dca4a961a8af5b9b40f40c010098e506bfd5af07e97f64. Jul 2 00:03:37.492331 containerd[1426]: time="2024-07-02T00:03:37.492084426Z" level=info msg="StartContainer for \"88c196d369a41a53c9dca4a961a8af5b9b40f40c010098e506bfd5af07e97f64\" returns successfully" Jul 2 00:03:37.593379 kubelet[2517]: E0702 00:03:37.593335 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:37.640555 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 00:03:37.640665 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
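The pull record above reports 110491350 bytes read for an image of size 110491212 over 3.79580276s; assuming those figures, a quick back-of-the-envelope check puts the transfer at roughly 27.8 MiB/s:

package main

import "fmt"

func main() {
	const bytesRead = 110491350.0 // "bytes read" reported by containerd above
	const seconds = 3.79580276    // pull duration from the same record
	fmt.Printf("~%.1f MiB/s\n", bytesRead/seconds/(1024*1024))
}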
Jul 2 00:03:38.594267 kubelet[2517]: I0702 00:03:38.594230 2517 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:03:38.594997 kubelet[2517]: E0702 00:03:38.594960 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:41.180614 systemd[1]: Started sshd@8-10.0.0.44:22-10.0.0.1:56130.service - OpenSSH per-connection server daemon (10.0.0.1:56130). Jul 2 00:03:41.228219 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 56130 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:03:41.229518 sshd[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:03:41.236304 systemd-logind[1416]: New session 9 of user core. Jul 2 00:03:41.246816 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 00:03:41.385013 sshd[4177]: pam_unix(sshd:session): session closed for user core Jul 2 00:03:41.388950 systemd[1]: sshd@8-10.0.0.44:22-10.0.0.1:56130.service: Deactivated successfully. Jul 2 00:03:41.391009 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 00:03:41.393034 systemd-logind[1416]: Session 9 logged out. Waiting for processes to exit. Jul 2 00:03:41.395404 systemd-logind[1416]: Removed session 9. Jul 2 00:03:46.403593 systemd[1]: Started sshd@9-10.0.0.44:22-10.0.0.1:56138.service - OpenSSH per-connection server daemon (10.0.0.1:56138). Jul 2 00:03:46.442423 sshd[4298]: Accepted publickey for core from 10.0.0.1 port 56138 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:03:46.443900 sshd[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:03:46.451967 systemd-logind[1416]: New session 10 of user core. Jul 2 00:03:46.468941 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 00:03:46.627670 sshd[4298]: pam_unix(sshd:session): session closed for user core Jul 2 00:03:46.635701 systemd[1]: sshd@9-10.0.0.44:22-10.0.0.1:56138.service: Deactivated successfully. Jul 2 00:03:46.638289 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 00:03:46.640175 systemd-logind[1416]: Session 10 logged out. Waiting for processes to exit. Jul 2 00:03:46.646661 systemd[1]: Started sshd@10-10.0.0.44:22-10.0.0.1:56142.service - OpenSSH per-connection server daemon (10.0.0.1:56142). Jul 2 00:03:46.648289 systemd-logind[1416]: Removed session 10. Jul 2 00:03:46.683598 sshd[4338]: Accepted publickey for core from 10.0.0.1 port 56142 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:03:46.685338 sshd[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:03:46.689505 systemd-logind[1416]: New session 11 of user core. Jul 2 00:03:46.699701 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 00:03:46.858768 sshd[4338]: pam_unix(sshd:session): session closed for user core Jul 2 00:03:46.869090 systemd[1]: sshd@10-10.0.0.44:22-10.0.0.1:56142.service: Deactivated successfully. Jul 2 00:03:46.872677 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 00:03:46.874934 systemd-logind[1416]: Session 11 logged out. Waiting for processes to exit. Jul 2 00:03:46.885163 systemd[1]: Started sshd@11-10.0.0.44:22-10.0.0.1:56146.service - OpenSSH per-connection server daemon (10.0.0.1:56146). Jul 2 00:03:46.889129 systemd-logind[1416]: Removed session 11. 
Jul 2 00:03:46.920280 sshd[4351]: Accepted publickey for core from 10.0.0.1 port 56146 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:03:46.921678 sshd[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:03:46.928890 systemd-logind[1416]: New session 12 of user core. Jul 2 00:03:46.938347 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 00:03:47.084320 sshd[4351]: pam_unix(sshd:session): session closed for user core Jul 2 00:03:47.088092 systemd[1]: sshd@11-10.0.0.44:22-10.0.0.1:56146.service: Deactivated successfully. Jul 2 00:03:47.090015 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:03:47.090874 systemd-logind[1416]: Session 12 logged out. Waiting for processes to exit. Jul 2 00:03:47.091921 systemd-logind[1416]: Removed session 12. Jul 2 00:03:47.237359 kubelet[2517]: I0702 00:03:47.237111 2517 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:03:47.237962 kubelet[2517]: E0702 00:03:47.237785 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:47.251260 kubelet[2517]: I0702 00:03:47.250576 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-qltzj" podStartSLOduration=11.411174018 podStartE2EDuration="21.250536311s" podCreationTimestamp="2024-07-02 00:03:26 +0000 UTC" firstStartedPulling="2024-07-02 00:03:27.534271664 +0000 UTC m=+28.193457168" lastFinishedPulling="2024-07-02 00:03:37.373633957 +0000 UTC m=+38.032819461" observedRunningTime="2024-07-02 00:03:37.609089735 +0000 UTC m=+38.268275239" watchObservedRunningTime="2024-07-02 00:03:47.250536311 +0000 UTC m=+47.909721775" Jul 2 00:03:47.452909 containerd[1426]: time="2024-07-02T00:03:47.452863137Z" level=info msg="StopPodSandbox for \"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\"" Jul 2 00:03:47.512800 systemd-networkd[1368]: vxlan.calico: Link UP Jul 2 00:03:47.512806 systemd-networkd[1368]: vxlan.calico: Gained carrier Jul 2 00:03:47.611967 kubelet[2517]: E0702 00:03:47.611897 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:47.868335 kubelet[2517]: E0702 00:03:47.866875 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:47.867386 systemd[1]: run-netns-cni\x2dec6500e1\x2d6121\x2d9949\x2d445a\x2d6a82958d04f6.mount: Deactivated successfully. Jul 2 00:03:47.869045 containerd[1426]: 2024-07-02 00:03:47.669 [INFO][4416] k8s.go 608: Cleaning up netns ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Jul 2 00:03:47.869045 containerd[1426]: 2024-07-02 00:03:47.669 [INFO][4416] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" iface="eth0" netns="/var/run/netns/cni-ec6500e1-6121-9949-445a-6a82958d04f6" Jul 2 00:03:47.869045 containerd[1426]: 2024-07-02 00:03:47.669 [INFO][4416] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" iface="eth0" netns="/var/run/netns/cni-ec6500e1-6121-9949-445a-6a82958d04f6" Jul 2 00:03:47.869045 containerd[1426]: 2024-07-02 00:03:47.670 [INFO][4416] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" iface="eth0" netns="/var/run/netns/cni-ec6500e1-6121-9949-445a-6a82958d04f6" Jul 2 00:03:47.869045 containerd[1426]: 2024-07-02 00:03:47.670 [INFO][4416] k8s.go 615: Releasing IP address(es) ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Jul 2 00:03:47.869045 containerd[1426]: 2024-07-02 00:03:47.670 [INFO][4416] utils.go 188: Calico CNI releasing IP address ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Jul 2 00:03:47.869045 containerd[1426]: 2024-07-02 00:03:47.845 [INFO][4505] ipam_plugin.go 411: Releasing address using handleID ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" HandleID="k8s-pod-network.461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Workload="localhost-k8s-coredns--76f75df574--md95h-eth0" Jul 2 00:03:47.869045 containerd[1426]: 2024-07-02 00:03:47.845 [INFO][4505] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:03:47.869045 containerd[1426]: 2024-07-02 00:03:47.846 [INFO][4505] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:03:47.869045 containerd[1426]: 2024-07-02 00:03:47.858 [WARNING][4505] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" HandleID="k8s-pod-network.461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Workload="localhost-k8s-coredns--76f75df574--md95h-eth0" Jul 2 00:03:47.869045 containerd[1426]: 2024-07-02 00:03:47.858 [INFO][4505] ipam_plugin.go 439: Releasing address using workloadID ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" HandleID="k8s-pod-network.461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Workload="localhost-k8s-coredns--76f75df574--md95h-eth0" Jul 2 00:03:47.869045 containerd[1426]: 2024-07-02 00:03:47.860 [INFO][4505] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:03:47.869045 containerd[1426]: 2024-07-02 00:03:47.862 [INFO][4416] k8s.go 621: Teardown processing complete. 
ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Jul 2 00:03:47.869045 containerd[1426]: time="2024-07-02T00:03:47.864572951Z" level=info msg="TearDown network for sandbox \"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\" successfully" Jul 2 00:03:47.869045 containerd[1426]: time="2024-07-02T00:03:47.864602071Z" level=info msg="StopPodSandbox for \"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\" returns successfully" Jul 2 00:03:47.869045 containerd[1426]: time="2024-07-02T00:03:47.868266672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-md95h,Uid:90c438d1-b083-497d-a344-5fac16fe8bda,Namespace:kube-system,Attempt:1,}" Jul 2 00:03:48.000020 systemd-networkd[1368]: calidd3b19ed3c8: Link UP Jul 2 00:03:48.000836 systemd-networkd[1368]: calidd3b19ed3c8: Gained carrier Jul 2 00:03:48.013668 containerd[1426]: 2024-07-02 00:03:47.921 [INFO][4533] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--md95h-eth0 coredns-76f75df574- kube-system 90c438d1-b083-497d-a344-5fac16fe8bda 905 0 2024-07-02 00:03:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-md95h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidd3b19ed3c8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8" Namespace="kube-system" Pod="coredns-76f75df574-md95h" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--md95h-" Jul 2 00:03:48.013668 containerd[1426]: 2024-07-02 00:03:47.922 [INFO][4533] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8" Namespace="kube-system" Pod="coredns-76f75df574-md95h" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--md95h-eth0" Jul 2 00:03:48.013668 containerd[1426]: 2024-07-02 00:03:47.951 [INFO][4545] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8" HandleID="k8s-pod-network.53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8" Workload="localhost-k8s-coredns--76f75df574--md95h-eth0" Jul 2 00:03:48.013668 containerd[1426]: 2024-07-02 00:03:47.963 [INFO][4545] ipam_plugin.go 264: Auto assigning IP ContainerID="53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8" HandleID="k8s-pod-network.53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8" Workload="localhost-k8s-coredns--76f75df574--md95h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400012fdf0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-md95h", "timestamp":"2024-07-02 00:03:47.951736323 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:03:48.013668 containerd[1426]: 2024-07-02 00:03:47.964 [INFO][4545] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:03:48.013668 containerd[1426]: 2024-07-02 00:03:47.964 [INFO][4545] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:03:48.013668 containerd[1426]: 2024-07-02 00:03:47.964 [INFO][4545] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 00:03:48.013668 containerd[1426]: 2024-07-02 00:03:47.966 [INFO][4545] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8" host="localhost" Jul 2 00:03:48.013668 containerd[1426]: 2024-07-02 00:03:47.973 [INFO][4545] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 00:03:48.013668 containerd[1426]: 2024-07-02 00:03:47.978 [INFO][4545] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 00:03:48.013668 containerd[1426]: 2024-07-02 00:03:47.980 [INFO][4545] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 00:03:48.013668 containerd[1426]: 2024-07-02 00:03:47.982 [INFO][4545] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 00:03:48.013668 containerd[1426]: 2024-07-02 00:03:47.982 [INFO][4545] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8" host="localhost" Jul 2 00:03:48.013668 containerd[1426]: 2024-07-02 00:03:47.984 [INFO][4545] ipam.go 1685: Creating new handle: k8s-pod-network.53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8 Jul 2 00:03:48.013668 containerd[1426]: 2024-07-02 00:03:47.987 [INFO][4545] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8" host="localhost" Jul 2 00:03:48.013668 containerd[1426]: 2024-07-02 00:03:47.992 [INFO][4545] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8" host="localhost" Jul 2 00:03:48.013668 containerd[1426]: 2024-07-02 00:03:47.993 [INFO][4545] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8" host="localhost" Jul 2 00:03:48.013668 containerd[1426]: 2024-07-02 00:03:47.993 [INFO][4545] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:03:48.013668 containerd[1426]: 2024-07-02 00:03:47.993 [INFO][4545] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8" HandleID="k8s-pod-network.53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8" Workload="localhost-k8s-coredns--76f75df574--md95h-eth0" Jul 2 00:03:48.014290 containerd[1426]: 2024-07-02 00:03:47.995 [INFO][4533] k8s.go 386: Populated endpoint ContainerID="53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8" Namespace="kube-system" Pod="coredns-76f75df574-md95h" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--md95h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--md95h-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"90c438d1-b083-497d-a344-5fac16fe8bda", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-md95h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd3b19ed3c8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:03:48.014290 containerd[1426]: 2024-07-02 00:03:47.996 [INFO][4533] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8" Namespace="kube-system" Pod="coredns-76f75df574-md95h" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--md95h-eth0" Jul 2 00:03:48.014290 containerd[1426]: 2024-07-02 00:03:47.997 [INFO][4533] dataplane_linux.go 68: Setting the host side veth name to calidd3b19ed3c8 ContainerID="53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8" Namespace="kube-system" Pod="coredns-76f75df574-md95h" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--md95h-eth0" Jul 2 00:03:48.014290 containerd[1426]: 2024-07-02 00:03:47.999 [INFO][4533] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8" Namespace="kube-system" Pod="coredns-76f75df574-md95h" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--md95h-eth0" Jul 2 00:03:48.014290 containerd[1426]: 2024-07-02 00:03:47.999 [INFO][4533] k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8" Namespace="kube-system" Pod="coredns-76f75df574-md95h" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--md95h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--md95h-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"90c438d1-b083-497d-a344-5fac16fe8bda", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8", Pod:"coredns-76f75df574-md95h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd3b19ed3c8", MAC:"8e:71:ba:ea:dc:ec", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:03:48.014290 containerd[1426]: 2024-07-02 00:03:48.010 [INFO][4533] k8s.go 500: Wrote updated endpoint to datastore ContainerID="53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8" Namespace="kube-system" Pod="coredns-76f75df574-md95h" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--md95h-eth0" Jul 2 00:03:48.034867 containerd[1426]: time="2024-07-02T00:03:48.034714733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:03:48.034867 containerd[1426]: time="2024-07-02T00:03:48.034802893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:48.034867 containerd[1426]: time="2024-07-02T00:03:48.034819653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:03:48.034867 containerd[1426]: time="2024-07-02T00:03:48.034837693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:48.071358 systemd[1]: Started cri-containerd-53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8.scope - libcontainer container 53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8. 
Jul 2 00:03:48.083328 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:03:48.105248 containerd[1426]: time="2024-07-02T00:03:48.105203902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-md95h,Uid:90c438d1-b083-497d-a344-5fac16fe8bda,Namespace:kube-system,Attempt:1,} returns sandbox id \"53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8\"" Jul 2 00:03:48.106073 kubelet[2517]: E0702 00:03:48.106036 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:48.108350 containerd[1426]: time="2024-07-02T00:03:48.108311102Z" level=info msg="CreateContainer within sandbox \"53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:03:48.127717 containerd[1426]: time="2024-07-02T00:03:48.127526665Z" level=info msg="CreateContainer within sandbox \"53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ff0ddf0c69f2ccd6dc854d580b8424fa5ed1c1c8e1be40f5e0b973ca11b71bba\"" Jul 2 00:03:48.128421 containerd[1426]: time="2024-07-02T00:03:48.128368705Z" level=info msg="StartContainer for \"ff0ddf0c69f2ccd6dc854d580b8424fa5ed1c1c8e1be40f5e0b973ca11b71bba\"" Jul 2 00:03:48.151379 systemd[1]: Started cri-containerd-ff0ddf0c69f2ccd6dc854d580b8424fa5ed1c1c8e1be40f5e0b973ca11b71bba.scope - libcontainer container ff0ddf0c69f2ccd6dc854d580b8424fa5ed1c1c8e1be40f5e0b973ca11b71bba. Jul 2 00:03:48.174696 containerd[1426]: time="2024-07-02T00:03:48.174650350Z" level=info msg="StartContainer for \"ff0ddf0c69f2ccd6dc854d580b8424fa5ed1c1c8e1be40f5e0b973ca11b71bba\" returns successfully" Jul 2 00:03:48.451245 containerd[1426]: time="2024-07-02T00:03:48.451010584Z" level=info msg="StopPodSandbox for \"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\"" Jul 2 00:03:48.451534 containerd[1426]: time="2024-07-02T00:03:48.451039104Z" level=info msg="StopPodSandbox for \"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\"" Jul 2 00:03:48.555234 containerd[1426]: 2024-07-02 00:03:48.512 [INFO][4683] k8s.go 608: Cleaning up netns ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Jul 2 00:03:48.555234 containerd[1426]: 2024-07-02 00:03:48.513 [INFO][4683] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" iface="eth0" netns="/var/run/netns/cni-0a0cfbda-c342-e6ab-706b-557929c18b1f" Jul 2 00:03:48.555234 containerd[1426]: 2024-07-02 00:03:48.513 [INFO][4683] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" iface="eth0" netns="/var/run/netns/cni-0a0cfbda-c342-e6ab-706b-557929c18b1f" Jul 2 00:03:48.555234 containerd[1426]: 2024-07-02 00:03:48.513 [INFO][4683] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" iface="eth0" netns="/var/run/netns/cni-0a0cfbda-c342-e6ab-706b-557929c18b1f" Jul 2 00:03:48.555234 containerd[1426]: 2024-07-02 00:03:48.513 [INFO][4683] k8s.go 615: Releasing IP address(es) ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Jul 2 00:03:48.555234 containerd[1426]: 2024-07-02 00:03:48.513 [INFO][4683] utils.go 188: Calico CNI releasing IP address ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Jul 2 00:03:48.555234 containerd[1426]: 2024-07-02 00:03:48.539 [INFO][4697] ipam_plugin.go 411: Releasing address using handleID ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" HandleID="k8s-pod-network.02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Workload="localhost-k8s-csi--node--driver--q228g-eth0" Jul 2 00:03:48.555234 containerd[1426]: 2024-07-02 00:03:48.539 [INFO][4697] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:03:48.555234 containerd[1426]: 2024-07-02 00:03:48.539 [INFO][4697] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:03:48.555234 containerd[1426]: 2024-07-02 00:03:48.549 [WARNING][4697] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" HandleID="k8s-pod-network.02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Workload="localhost-k8s-csi--node--driver--q228g-eth0" Jul 2 00:03:48.555234 containerd[1426]: 2024-07-02 00:03:48.549 [INFO][4697] ipam_plugin.go 439: Releasing address using workloadID ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" HandleID="k8s-pod-network.02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Workload="localhost-k8s-csi--node--driver--q228g-eth0" Jul 2 00:03:48.555234 containerd[1426]: 2024-07-02 00:03:48.551 [INFO][4697] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:03:48.555234 containerd[1426]: 2024-07-02 00:03:48.553 [INFO][4683] k8s.go 621: Teardown processing complete. ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Jul 2 00:03:48.556249 containerd[1426]: time="2024-07-02T00:03:48.555417757Z" level=info msg="TearDown network for sandbox \"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\" successfully" Jul 2 00:03:48.556249 containerd[1426]: time="2024-07-02T00:03:48.555446877Z" level=info msg="StopPodSandbox for \"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\" returns successfully" Jul 2 00:03:48.556249 containerd[1426]: time="2024-07-02T00:03:48.556105677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q228g,Uid:deefda5b-5363-476d-b5c8-1f67ee1aea37,Namespace:calico-system,Attempt:1,}" Jul 2 00:03:48.567105 containerd[1426]: 2024-07-02 00:03:48.510 [INFO][4680] k8s.go 608: Cleaning up netns ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Jul 2 00:03:48.567105 containerd[1426]: 2024-07-02 00:03:48.511 [INFO][4680] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" iface="eth0" netns="/var/run/netns/cni-dbacafe4-9596-ff52-6d17-dd63f3bfe26e" Jul 2 00:03:48.567105 containerd[1426]: 2024-07-02 00:03:48.512 [INFO][4680] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" iface="eth0" netns="/var/run/netns/cni-dbacafe4-9596-ff52-6d17-dd63f3bfe26e" Jul 2 00:03:48.567105 containerd[1426]: 2024-07-02 00:03:48.512 [INFO][4680] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" iface="eth0" netns="/var/run/netns/cni-dbacafe4-9596-ff52-6d17-dd63f3bfe26e" Jul 2 00:03:48.567105 containerd[1426]: 2024-07-02 00:03:48.513 [INFO][4680] k8s.go 615: Releasing IP address(es) ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Jul 2 00:03:48.567105 containerd[1426]: 2024-07-02 00:03:48.513 [INFO][4680] utils.go 188: Calico CNI releasing IP address ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Jul 2 00:03:48.567105 containerd[1426]: 2024-07-02 00:03:48.541 [INFO][4698] ipam_plugin.go 411: Releasing address using handleID ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" HandleID="k8s-pod-network.74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Workload="localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0" Jul 2 00:03:48.567105 containerd[1426]: 2024-07-02 00:03:48.543 [INFO][4698] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:03:48.567105 containerd[1426]: 2024-07-02 00:03:48.551 [INFO][4698] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:03:48.567105 containerd[1426]: 2024-07-02 00:03:48.561 [WARNING][4698] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" HandleID="k8s-pod-network.74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Workload="localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0" Jul 2 00:03:48.567105 containerd[1426]: 2024-07-02 00:03:48.561 [INFO][4698] ipam_plugin.go 439: Releasing address using workloadID ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" HandleID="k8s-pod-network.74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Workload="localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0" Jul 2 00:03:48.567105 containerd[1426]: 2024-07-02 00:03:48.563 [INFO][4698] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:03:48.567105 containerd[1426]: 2024-07-02 00:03:48.564 [INFO][4680] k8s.go 621: Teardown processing complete. 
ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Jul 2 00:03:48.567642 containerd[1426]: time="2024-07-02T00:03:48.567609038Z" level=info msg="TearDown network for sandbox \"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\" successfully" Jul 2 00:03:48.567675 containerd[1426]: time="2024-07-02T00:03:48.567642158Z" level=info msg="StopPodSandbox for \"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\" returns successfully" Jul 2 00:03:48.568387 containerd[1426]: time="2024-07-02T00:03:48.568335839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcdbd9947-zhk75,Uid:f4df34b6-4336-4a49-a4ba-110e2697cd8a,Namespace:calico-system,Attempt:1,}" Jul 2 00:03:48.625655 kubelet[2517]: E0702 00:03:48.625623 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:48.643959 kubelet[2517]: I0702 00:03:48.643888 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-md95h" podStartSLOduration=35.643524768 podStartE2EDuration="35.643524768s" podCreationTimestamp="2024-07-02 00:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:03:48.642224688 +0000 UTC m=+49.301410192" watchObservedRunningTime="2024-07-02 00:03:48.643524768 +0000 UTC m=+49.302710272" Jul 2 00:03:48.728386 systemd-networkd[1368]: calib22917b8c11: Link UP Jul 2 00:03:48.728882 systemd-networkd[1368]: calib22917b8c11: Gained carrier Jul 2 00:03:48.743776 containerd[1426]: 2024-07-02 00:03:48.617 [INFO][4713] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--q228g-eth0 csi-node-driver- calico-system deefda5b-5363-476d-b5c8-1f67ee1aea37 928 0 2024-07-02 00:03:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-q228g eth0 default [] [] [kns.calico-system ksa.calico-system.default] calib22917b8c11 [] []}} ContainerID="f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66" Namespace="calico-system" Pod="csi-node-driver-q228g" WorkloadEndpoint="localhost-k8s-csi--node--driver--q228g-" Jul 2 00:03:48.743776 containerd[1426]: 2024-07-02 00:03:48.617 [INFO][4713] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66" Namespace="calico-system" Pod="csi-node-driver-q228g" WorkloadEndpoint="localhost-k8s-csi--node--driver--q228g-eth0" Jul 2 00:03:48.743776 containerd[1426]: 2024-07-02 00:03:48.673 [INFO][4740] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66" HandleID="k8s-pod-network.f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66" Workload="localhost-k8s-csi--node--driver--q228g-eth0" Jul 2 00:03:48.743776 containerd[1426]: 2024-07-02 00:03:48.687 [INFO][4740] ipam_plugin.go 264: Auto assigning IP ContainerID="f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66" 
HandleID="k8s-pod-network.f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66" Workload="localhost-k8s-csi--node--driver--q228g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40006adbc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-q228g", "timestamp":"2024-07-02 00:03:48.673642851 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:03:48.743776 containerd[1426]: 2024-07-02 00:03:48.687 [INFO][4740] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:03:48.743776 containerd[1426]: 2024-07-02 00:03:48.687 [INFO][4740] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:03:48.743776 containerd[1426]: 2024-07-02 00:03:48.687 [INFO][4740] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 00:03:48.743776 containerd[1426]: 2024-07-02 00:03:48.690 [INFO][4740] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66" host="localhost" Jul 2 00:03:48.743776 containerd[1426]: 2024-07-02 00:03:48.696 [INFO][4740] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 00:03:48.743776 containerd[1426]: 2024-07-02 00:03:48.703 [INFO][4740] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 00:03:48.743776 containerd[1426]: 2024-07-02 00:03:48.706 [INFO][4740] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 00:03:48.743776 containerd[1426]: 2024-07-02 00:03:48.709 [INFO][4740] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 00:03:48.743776 containerd[1426]: 2024-07-02 00:03:48.709 [INFO][4740] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66" host="localhost" Jul 2 00:03:48.743776 containerd[1426]: 2024-07-02 00:03:48.711 [INFO][4740] ipam.go 1685: Creating new handle: k8s-pod-network.f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66 Jul 2 00:03:48.743776 containerd[1426]: 2024-07-02 00:03:48.715 [INFO][4740] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66" host="localhost" Jul 2 00:03:48.743776 containerd[1426]: 2024-07-02 00:03:48.722 [INFO][4740] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66" host="localhost" Jul 2 00:03:48.743776 containerd[1426]: 2024-07-02 00:03:48.722 [INFO][4740] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66" host="localhost" Jul 2 00:03:48.743776 containerd[1426]: 2024-07-02 00:03:48.722 [INFO][4740] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:03:48.743776 containerd[1426]: 2024-07-02 00:03:48.722 [INFO][4740] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66" HandleID="k8s-pod-network.f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66" Workload="localhost-k8s-csi--node--driver--q228g-eth0" Jul 2 00:03:48.746074 containerd[1426]: 2024-07-02 00:03:48.725 [INFO][4713] k8s.go 386: Populated endpoint ContainerID="f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66" Namespace="calico-system" Pod="csi-node-driver-q228g" WorkloadEndpoint="localhost-k8s-csi--node--driver--q228g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q228g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"deefda5b-5363-476d-b5c8-1f67ee1aea37", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 3, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-q228g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib22917b8c11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:03:48.746074 containerd[1426]: 2024-07-02 00:03:48.725 [INFO][4713] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66" Namespace="calico-system" Pod="csi-node-driver-q228g" WorkloadEndpoint="localhost-k8s-csi--node--driver--q228g-eth0" Jul 2 00:03:48.746074 containerd[1426]: 2024-07-02 00:03:48.725 [INFO][4713] dataplane_linux.go 68: Setting the host side veth name to calib22917b8c11 ContainerID="f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66" Namespace="calico-system" Pod="csi-node-driver-q228g" WorkloadEndpoint="localhost-k8s-csi--node--driver--q228g-eth0" Jul 2 00:03:48.746074 containerd[1426]: 2024-07-02 00:03:48.729 [INFO][4713] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66" Namespace="calico-system" Pod="csi-node-driver-q228g" WorkloadEndpoint="localhost-k8s-csi--node--driver--q228g-eth0" Jul 2 00:03:48.746074 containerd[1426]: 2024-07-02 00:03:48.729 [INFO][4713] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66" Namespace="calico-system" Pod="csi-node-driver-q228g" WorkloadEndpoint="localhost-k8s-csi--node--driver--q228g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q228g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"deefda5b-5363-476d-b5c8-1f67ee1aea37", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 3, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66", Pod:"csi-node-driver-q228g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib22917b8c11", MAC:"4e:37:c2:da:66:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:03:48.746074 containerd[1426]: 2024-07-02 00:03:48.740 [INFO][4713] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66" Namespace="calico-system" Pod="csi-node-driver-q228g" WorkloadEndpoint="localhost-k8s-csi--node--driver--q228g-eth0" Jul 2 00:03:48.773665 systemd-networkd[1368]: cali05f797c0181: Link UP Jul 2 00:03:48.773907 systemd-networkd[1368]: cali05f797c0181: Gained carrier Jul 2 00:03:48.782498 containerd[1426]: time="2024-07-02T00:03:48.782319505Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:03:48.782624 containerd[1426]: time="2024-07-02T00:03:48.782538985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:48.782923 containerd[1426]: time="2024-07-02T00:03:48.782588705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:03:48.782923 containerd[1426]: time="2024-07-02T00:03:48.782629625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:48.800201 containerd[1426]: 2024-07-02 00:03:48.651 [INFO][4728] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0 calico-kube-controllers-7fcdbd9947- calico-system f4df34b6-4336-4a49-a4ba-110e2697cd8a 927 0 2024-07-02 00:03:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7fcdbd9947 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7fcdbd9947-zhk75 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali05f797c0181 [] []}} ContainerID="cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a" Namespace="calico-system" Pod="calico-kube-controllers-7fcdbd9947-zhk75" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-" Jul 2 00:03:48.800201 containerd[1426]: 2024-07-02 00:03:48.651 [INFO][4728] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a" Namespace="calico-system" Pod="calico-kube-controllers-7fcdbd9947-zhk75" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0" Jul 2 00:03:48.800201 containerd[1426]: 2024-07-02 00:03:48.696 [INFO][4747] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a" HandleID="k8s-pod-network.cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a" Workload="localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0" Jul 2 00:03:48.800201 containerd[1426]: 2024-07-02 00:03:48.713 [INFO][4747] ipam_plugin.go 264: Auto assigning IP ContainerID="cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a" HandleID="k8s-pod-network.cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a" Workload="localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400058f890), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7fcdbd9947-zhk75", "timestamp":"2024-07-02 00:03:48.696286534 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:03:48.800201 containerd[1426]: 2024-07-02 00:03:48.715 [INFO][4747] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:03:48.800201 containerd[1426]: 2024-07-02 00:03:48.722 [INFO][4747] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:03:48.800201 containerd[1426]: 2024-07-02 00:03:48.722 [INFO][4747] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 00:03:48.800201 containerd[1426]: 2024-07-02 00:03:48.725 [INFO][4747] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a" host="localhost" Jul 2 00:03:48.800201 containerd[1426]: 2024-07-02 00:03:48.730 [INFO][4747] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 00:03:48.800201 containerd[1426]: 2024-07-02 00:03:48.735 [INFO][4747] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 00:03:48.800201 containerd[1426]: 2024-07-02 00:03:48.751 [INFO][4747] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 00:03:48.800201 containerd[1426]: 2024-07-02 00:03:48.754 [INFO][4747] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 00:03:48.800201 containerd[1426]: 2024-07-02 00:03:48.754 [INFO][4747] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a" host="localhost" Jul 2 00:03:48.800201 containerd[1426]: 2024-07-02 00:03:48.756 [INFO][4747] ipam.go 1685: Creating new handle: k8s-pod-network.cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a Jul 2 00:03:48.800201 containerd[1426]: 2024-07-02 00:03:48.761 [INFO][4747] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a" host="localhost" Jul 2 00:03:48.800201 containerd[1426]: 2024-07-02 00:03:48.766 [INFO][4747] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a" host="localhost" Jul 2 00:03:48.800201 containerd[1426]: 2024-07-02 00:03:48.766 [INFO][4747] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a" host="localhost" Jul 2 00:03:48.800201 containerd[1426]: 2024-07-02 00:03:48.767 [INFO][4747] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:03:48.800201 containerd[1426]: 2024-07-02 00:03:48.767 [INFO][4747] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a" HandleID="k8s-pod-network.cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a" Workload="localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0" Jul 2 00:03:48.800988 containerd[1426]: 2024-07-02 00:03:48.770 [INFO][4728] k8s.go 386: Populated endpoint ContainerID="cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a" Namespace="calico-system" Pod="calico-kube-controllers-7fcdbd9947-zhk75" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0", GenerateName:"calico-kube-controllers-7fcdbd9947-", Namespace:"calico-system", SelfLink:"", UID:"f4df34b6-4336-4a49-a4ba-110e2697cd8a", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 3, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fcdbd9947", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7fcdbd9947-zhk75", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali05f797c0181", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:03:48.800988 containerd[1426]: 2024-07-02 00:03:48.770 [INFO][4728] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a" Namespace="calico-system" Pod="calico-kube-controllers-7fcdbd9947-zhk75" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0" Jul 2 00:03:48.800988 containerd[1426]: 2024-07-02 00:03:48.770 [INFO][4728] dataplane_linux.go 68: Setting the host side veth name to cali05f797c0181 ContainerID="cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a" Namespace="calico-system" Pod="calico-kube-controllers-7fcdbd9947-zhk75" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0" Jul 2 00:03:48.800988 containerd[1426]: 2024-07-02 00:03:48.773 [INFO][4728] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a" Namespace="calico-system" Pod="calico-kube-controllers-7fcdbd9947-zhk75" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0" Jul 2 00:03:48.800988 containerd[1426]: 2024-07-02 00:03:48.778 [INFO][4728] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a" Namespace="calico-system" Pod="calico-kube-controllers-7fcdbd9947-zhk75" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0", GenerateName:"calico-kube-controllers-7fcdbd9947-", Namespace:"calico-system", SelfLink:"", UID:"f4df34b6-4336-4a49-a4ba-110e2697cd8a", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 3, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fcdbd9947", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a", Pod:"calico-kube-controllers-7fcdbd9947-zhk75", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali05f797c0181", MAC:"ea:10:69:b6:13:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:03:48.800988 containerd[1426]: 2024-07-02 00:03:48.796 [INFO][4728] k8s.go 500: Wrote updated endpoint to datastore ContainerID="cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a" Namespace="calico-system" Pod="calico-kube-controllers-7fcdbd9947-zhk75" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0" Jul 2 00:03:48.810379 systemd[1]: Started cri-containerd-f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66.scope - libcontainer container f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66. Jul 2 00:03:48.828561 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:03:48.831004 containerd[1426]: time="2024-07-02T00:03:48.830915791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:03:48.832481 containerd[1426]: time="2024-07-02T00:03:48.831404191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:48.832481 containerd[1426]: time="2024-07-02T00:03:48.831431991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:03:48.832481 containerd[1426]: time="2024-07-02T00:03:48.831442551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:48.842615 containerd[1426]: time="2024-07-02T00:03:48.842564832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-q228g,Uid:deefda5b-5363-476d-b5c8-1f67ee1aea37,Namespace:calico-system,Attempt:1,} returns sandbox id \"f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66\"" Jul 2 00:03:48.844496 containerd[1426]: time="2024-07-02T00:03:48.844451792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 00:03:48.853369 systemd[1]: Started cri-containerd-cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a.scope - libcontainer container cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a. Jul 2 00:03:48.865449 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:03:48.871549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3834593619.mount: Deactivated successfully. Jul 2 00:03:48.871662 systemd[1]: run-netns-cni\x2d0a0cfbda\x2dc342\x2de6ab\x2d706b\x2d557929c18b1f.mount: Deactivated successfully. Jul 2 00:03:48.871723 systemd[1]: run-netns-cni\x2ddbacafe4\x2d9596\x2dff52\x2d6d17\x2ddd63f3bfe26e.mount: Deactivated successfully. Jul 2 00:03:48.888465 containerd[1426]: time="2024-07-02T00:03:48.888417878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcdbd9947-zhk75,Uid:f4df34b6-4336-4a49-a4ba-110e2697cd8a,Namespace:calico-system,Attempt:1,} returns sandbox id \"cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a\"" Jul 2 00:03:48.956488 systemd-networkd[1368]: vxlan.calico: Gained IPv6LL Jul 2 00:03:49.451236 containerd[1426]: time="2024-07-02T00:03:49.451189583Z" level=info msg="StopPodSandbox for \"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\"" Jul 2 00:03:49.468577 systemd-networkd[1368]: calidd3b19ed3c8: Gained IPv6LL Jul 2 00:03:49.546183 containerd[1426]: 2024-07-02 00:03:49.508 [INFO][4888] k8s.go 608: Cleaning up netns ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Jul 2 00:03:49.546183 containerd[1426]: 2024-07-02 00:03:49.508 [INFO][4888] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" iface="eth0" netns="/var/run/netns/cni-bef80a75-ff30-81a6-1d4a-c352eaae3d74" Jul 2 00:03:49.546183 containerd[1426]: 2024-07-02 00:03:49.508 [INFO][4888] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" iface="eth0" netns="/var/run/netns/cni-bef80a75-ff30-81a6-1d4a-c352eaae3d74" Jul 2 00:03:49.546183 containerd[1426]: 2024-07-02 00:03:49.509 [INFO][4888] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" iface="eth0" netns="/var/run/netns/cni-bef80a75-ff30-81a6-1d4a-c352eaae3d74" Jul 2 00:03:49.546183 containerd[1426]: 2024-07-02 00:03:49.509 [INFO][4888] k8s.go 615: Releasing IP address(es) ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Jul 2 00:03:49.546183 containerd[1426]: 2024-07-02 00:03:49.509 [INFO][4888] utils.go 188: Calico CNI releasing IP address ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Jul 2 00:03:49.546183 containerd[1426]: 2024-07-02 00:03:49.532 [INFO][4896] ipam_plugin.go 411: Releasing address using handleID ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" HandleID="k8s-pod-network.3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Workload="localhost-k8s-coredns--76f75df574--hwst2-eth0" Jul 2 00:03:49.546183 containerd[1426]: 2024-07-02 00:03:49.532 [INFO][4896] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:03:49.546183 containerd[1426]: 2024-07-02 00:03:49.532 [INFO][4896] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:03:49.546183 containerd[1426]: 2024-07-02 00:03:49.541 [WARNING][4896] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" HandleID="k8s-pod-network.3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Workload="localhost-k8s-coredns--76f75df574--hwst2-eth0" Jul 2 00:03:49.546183 containerd[1426]: 2024-07-02 00:03:49.541 [INFO][4896] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" HandleID="k8s-pod-network.3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Workload="localhost-k8s-coredns--76f75df574--hwst2-eth0" Jul 2 00:03:49.546183 containerd[1426]: 2024-07-02 00:03:49.542 [INFO][4896] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:03:49.546183 containerd[1426]: 2024-07-02 00:03:49.544 [INFO][4888] k8s.go 621: Teardown processing complete. ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Jul 2 00:03:49.549552 containerd[1426]: time="2024-07-02T00:03:49.546296474Z" level=info msg="TearDown network for sandbox \"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\" successfully" Jul 2 00:03:49.549552 containerd[1426]: time="2024-07-02T00:03:49.546333074Z" level=info msg="StopPodSandbox for \"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\" returns successfully" Jul 2 00:03:49.549552 containerd[1426]: time="2024-07-02T00:03:49.548733794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hwst2,Uid:29823086-9a6f-43f9-9bc0-93ad25deb8fe,Namespace:kube-system,Attempt:1,}" Jul 2 00:03:49.548421 systemd[1]: run-netns-cni\x2dbef80a75\x2dff30\x2d81a6\x2d1d4a\x2dc352eaae3d74.mount: Deactivated successfully. 
Jul 2 00:03:49.549779 kubelet[2517]: E0702 00:03:49.546651 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:49.629718 kubelet[2517]: E0702 00:03:49.629600 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:49.707509 systemd-networkd[1368]: cali3e4b91b9cd2: Link UP Jul 2 00:03:49.708429 systemd-networkd[1368]: cali3e4b91b9cd2: Gained carrier Jul 2 00:03:49.720370 containerd[1426]: 2024-07-02 00:03:49.616 [INFO][4904] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--hwst2-eth0 coredns-76f75df574- kube-system 29823086-9a6f-43f9-9bc0-93ad25deb8fe 953 0 2024-07-02 00:03:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-hwst2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3e4b91b9cd2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028" Namespace="kube-system" Pod="coredns-76f75df574-hwst2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hwst2-" Jul 2 00:03:49.720370 containerd[1426]: 2024-07-02 00:03:49.616 [INFO][4904] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028" Namespace="kube-system" Pod="coredns-76f75df574-hwst2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hwst2-eth0" Jul 2 00:03:49.720370 containerd[1426]: 2024-07-02 00:03:49.646 [INFO][4918] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028" HandleID="k8s-pod-network.84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028" Workload="localhost-k8s-coredns--76f75df574--hwst2-eth0" Jul 2 00:03:49.720370 containerd[1426]: 2024-07-02 00:03:49.657 [INFO][4918] ipam_plugin.go 264: Auto assigning IP ContainerID="84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028" HandleID="k8s-pod-network.84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028" Workload="localhost-k8s-coredns--76f75df574--hwst2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000503d60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-hwst2", "timestamp":"2024-07-02 00:03:49.646217206 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:03:49.720370 containerd[1426]: 2024-07-02 00:03:49.657 [INFO][4918] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:03:49.720370 containerd[1426]: 2024-07-02 00:03:49.658 [INFO][4918] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:03:49.720370 containerd[1426]: 2024-07-02 00:03:49.658 [INFO][4918] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 00:03:49.720370 containerd[1426]: 2024-07-02 00:03:49.661 [INFO][4918] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028" host="localhost" Jul 2 00:03:49.720370 containerd[1426]: 2024-07-02 00:03:49.666 [INFO][4918] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 00:03:49.720370 containerd[1426]: 2024-07-02 00:03:49.672 [INFO][4918] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 00:03:49.720370 containerd[1426]: 2024-07-02 00:03:49.675 [INFO][4918] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 00:03:49.720370 containerd[1426]: 2024-07-02 00:03:49.677 [INFO][4918] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 00:03:49.720370 containerd[1426]: 2024-07-02 00:03:49.677 [INFO][4918] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028" host="localhost" Jul 2 00:03:49.720370 containerd[1426]: 2024-07-02 00:03:49.679 [INFO][4918] ipam.go 1685: Creating new handle: k8s-pod-network.84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028 Jul 2 00:03:49.720370 containerd[1426]: 2024-07-02 00:03:49.682 [INFO][4918] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028" host="localhost" Jul 2 00:03:49.720370 containerd[1426]: 2024-07-02 00:03:49.702 [INFO][4918] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028" host="localhost" Jul 2 00:03:49.720370 containerd[1426]: 2024-07-02 00:03:49.702 [INFO][4918] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028" host="localhost" Jul 2 00:03:49.720370 containerd[1426]: 2024-07-02 00:03:49.703 [INFO][4918] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:03:49.720370 containerd[1426]: 2024-07-02 00:03:49.703 [INFO][4918] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028" HandleID="k8s-pod-network.84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028" Workload="localhost-k8s-coredns--76f75df574--hwst2-eth0" Jul 2 00:03:49.721331 containerd[1426]: 2024-07-02 00:03:49.704 [INFO][4904] k8s.go 386: Populated endpoint ContainerID="84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028" Namespace="kube-system" Pod="coredns-76f75df574-hwst2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hwst2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--hwst2-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"29823086-9a6f-43f9-9bc0-93ad25deb8fe", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-hwst2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e4b91b9cd2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:03:49.721331 containerd[1426]: 2024-07-02 00:03:49.705 [INFO][4904] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028" Namespace="kube-system" Pod="coredns-76f75df574-hwst2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hwst2-eth0" Jul 2 00:03:49.721331 containerd[1426]: 2024-07-02 00:03:49.705 [INFO][4904] dataplane_linux.go 68: Setting the host side veth name to cali3e4b91b9cd2 ContainerID="84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028" Namespace="kube-system" Pod="coredns-76f75df574-hwst2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hwst2-eth0" Jul 2 00:03:49.721331 containerd[1426]: 2024-07-02 00:03:49.707 [INFO][4904] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028" Namespace="kube-system" Pod="coredns-76f75df574-hwst2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hwst2-eth0" Jul 2 00:03:49.721331 containerd[1426]: 2024-07-02 00:03:49.709 [INFO][4904] k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028" Namespace="kube-system" Pod="coredns-76f75df574-hwst2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hwst2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--hwst2-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"29823086-9a6f-43f9-9bc0-93ad25deb8fe", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028", Pod:"coredns-76f75df574-hwst2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e4b91b9cd2", MAC:"f2:75:20:1a:29:fb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:03:49.721331 containerd[1426]: 2024-07-02 00:03:49.718 [INFO][4904] k8s.go 500: Wrote updated endpoint to datastore ContainerID="84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028" Namespace="kube-system" Pod="coredns-76f75df574-hwst2" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hwst2-eth0" Jul 2 00:03:49.740087 containerd[1426]: time="2024-07-02T00:03:49.739879376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:03:49.740087 containerd[1426]: time="2024-07-02T00:03:49.740034336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:49.740259 containerd[1426]: time="2024-07-02T00:03:49.740121256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:03:49.740259 containerd[1426]: time="2024-07-02T00:03:49.740140656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:03:49.761396 systemd[1]: Started cri-containerd-84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028.scope - libcontainer container 84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028. 
Jul 2 00:03:49.771976 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 00:03:49.790138 containerd[1426]: time="2024-07-02T00:03:49.790093982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hwst2,Uid:29823086-9a6f-43f9-9bc0-93ad25deb8fe,Namespace:kube-system,Attempt:1,} returns sandbox id \"84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028\"" Jul 2 00:03:49.791245 kubelet[2517]: E0702 00:03:49.791219 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:49.794381 containerd[1426]: time="2024-07-02T00:03:49.794321623Z" level=info msg="CreateContainer within sandbox \"84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:03:49.841788 containerd[1426]: time="2024-07-02T00:03:49.841714588Z" level=info msg="CreateContainer within sandbox \"84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fa505d9f633465b12d81b2d88a173b5020d2ea700385e62d811a1ca7776add87\"" Jul 2 00:03:49.844197 containerd[1426]: time="2024-07-02T00:03:49.842578988Z" level=info msg="StartContainer for \"fa505d9f633465b12d81b2d88a173b5020d2ea700385e62d811a1ca7776add87\"" Jul 2 00:03:49.871357 systemd[1]: Started cri-containerd-fa505d9f633465b12d81b2d88a173b5020d2ea700385e62d811a1ca7776add87.scope - libcontainer container fa505d9f633465b12d81b2d88a173b5020d2ea700385e62d811a1ca7776add87. Jul 2 00:03:49.903120 containerd[1426]: time="2024-07-02T00:03:49.902537995Z" level=info msg="StartContainer for \"fa505d9f633465b12d81b2d88a173b5020d2ea700385e62d811a1ca7776add87\" returns successfully" Jul 2 00:03:50.012281 containerd[1426]: time="2024-07-02T00:03:50.012220527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:50.012817 containerd[1426]: time="2024-07-02T00:03:50.012772928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jul 2 00:03:50.014176 containerd[1426]: time="2024-07-02T00:03:50.014131328Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:50.023091 containerd[1426]: time="2024-07-02T00:03:50.023034849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:50.024277 containerd[1426]: time="2024-07-02T00:03:50.024151409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.179649937s" Jul 2 00:03:50.024277 containerd[1426]: time="2024-07-02T00:03:50.024195249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jul 2 00:03:50.024997 containerd[1426]: 
time="2024-07-02T00:03:50.024746969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 00:03:50.027271 containerd[1426]: time="2024-07-02T00:03:50.027220089Z" level=info msg="CreateContainer within sandbox \"f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 00:03:50.044345 systemd-networkd[1368]: calib22917b8c11: Gained IPv6LL Jul 2 00:03:50.047264 containerd[1426]: time="2024-07-02T00:03:50.047208171Z" level=info msg="CreateContainer within sandbox \"f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ed9f8db8bad093bd0e85390951524ec5ef0ca4ce0b3796cd96f12367a17ba7e6\"" Jul 2 00:03:50.051199 containerd[1426]: time="2024-07-02T00:03:50.049582012Z" level=info msg="StartContainer for \"ed9f8db8bad093bd0e85390951524ec5ef0ca4ce0b3796cd96f12367a17ba7e6\"" Jul 2 00:03:50.080523 systemd[1]: Started cri-containerd-ed9f8db8bad093bd0e85390951524ec5ef0ca4ce0b3796cd96f12367a17ba7e6.scope - libcontainer container ed9f8db8bad093bd0e85390951524ec5ef0ca4ce0b3796cd96f12367a17ba7e6. Jul 2 00:03:50.108668 containerd[1426]: time="2024-07-02T00:03:50.108238898Z" level=info msg="StartContainer for \"ed9f8db8bad093bd0e85390951524ec5ef0ca4ce0b3796cd96f12367a17ba7e6\" returns successfully" Jul 2 00:03:50.556566 systemd-networkd[1368]: cali05f797c0181: Gained IPv6LL Jul 2 00:03:50.634515 kubelet[2517]: E0702 00:03:50.634135 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:50.639309 kubelet[2517]: E0702 00:03:50.639243 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:50.652125 kubelet[2517]: I0702 00:03:50.651729 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-hwst2" podStartSLOduration=37.651667396 podStartE2EDuration="37.651667396s" podCreationTimestamp="2024-07-02 00:03:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:03:50.650821076 +0000 UTC m=+51.310006660" watchObservedRunningTime="2024-07-02 00:03:50.651667396 +0000 UTC m=+51.310852900" Jul 2 00:03:51.260396 systemd-networkd[1368]: cali3e4b91b9cd2: Gained IPv6LL Jul 2 00:03:51.578459 containerd[1426]: time="2024-07-02T00:03:51.578313132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:51.582292 containerd[1426]: time="2024-07-02T00:03:51.582243533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jul 2 00:03:51.583857 containerd[1426]: time="2024-07-02T00:03:51.583649573Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:51.586242 containerd[1426]: time="2024-07-02T00:03:51.586201893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 2 00:03:51.587209 containerd[1426]: time="2024-07-02T00:03:51.587166853Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 1.562366244s" Jul 2 00:03:51.587209 containerd[1426]: time="2024-07-02T00:03:51.587205373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jul 2 00:03:51.587920 containerd[1426]: time="2024-07-02T00:03:51.587884093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 00:03:51.599668 containerd[1426]: time="2024-07-02T00:03:51.598231254Z" level=info msg="CreateContainer within sandbox \"cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 00:03:51.623739 containerd[1426]: time="2024-07-02T00:03:51.623681177Z" level=info msg="CreateContainer within sandbox \"cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"567cf914537ddb82bd3e05625c3e63f4995c6dab3b6a1432f419c7205cc19891\"" Jul 2 00:03:51.629127 containerd[1426]: time="2024-07-02T00:03:51.626450177Z" level=info msg="StartContainer for \"567cf914537ddb82bd3e05625c3e63f4995c6dab3b6a1432f419c7205cc19891\"" Jul 2 00:03:51.649391 kubelet[2517]: E0702 00:03:51.648395 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:51.667357 systemd[1]: Started cri-containerd-567cf914537ddb82bd3e05625c3e63f4995c6dab3b6a1432f419c7205cc19891.scope - libcontainer container 567cf914537ddb82bd3e05625c3e63f4995c6dab3b6a1432f419c7205cc19891. Jul 2 00:03:51.723854 containerd[1426]: time="2024-07-02T00:03:51.723658547Z" level=info msg="StartContainer for \"567cf914537ddb82bd3e05625c3e63f4995c6dab3b6a1432f419c7205cc19891\" returns successfully" Jul 2 00:03:52.098263 systemd[1]: Started sshd@12-10.0.0.44:22-10.0.0.1:38812.service - OpenSSH per-connection server daemon (10.0.0.1:38812). Jul 2 00:03:52.144732 sshd[5106]: Accepted publickey for core from 10.0.0.1 port 38812 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:03:52.146375 sshd[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:03:52.151768 systemd-logind[1416]: New session 13 of user core. Jul 2 00:03:52.162333 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 00:03:52.293053 sshd[5106]: pam_unix(sshd:session): session closed for user core Jul 2 00:03:52.295750 systemd[1]: sshd@12-10.0.0.44:22-10.0.0.1:38812.service: Deactivated successfully. Jul 2 00:03:52.298688 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 00:03:52.300341 systemd-logind[1416]: Session 13 logged out. Waiting for processes to exit. Jul 2 00:03:52.302141 systemd-logind[1416]: Removed session 13. 
Jul 2 00:03:52.635403 containerd[1426]: time="2024-07-02T00:03:52.635353075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:52.635835 containerd[1426]: time="2024-07-02T00:03:52.635814155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jul 2 00:03:52.636612 containerd[1426]: time="2024-07-02T00:03:52.636579875Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:52.639227 containerd[1426]: time="2024-07-02T00:03:52.638835595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:03:52.640038 containerd[1426]: time="2024-07-02T00:03:52.640001675Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.052076662s" Jul 2 00:03:52.640092 containerd[1426]: time="2024-07-02T00:03:52.640044275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jul 2 00:03:52.644828 containerd[1426]: time="2024-07-02T00:03:52.644796076Z" level=info msg="CreateContainer within sandbox \"f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 00:03:52.652825 kubelet[2517]: E0702 00:03:52.652738 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:03:52.665457 containerd[1426]: time="2024-07-02T00:03:52.664819478Z" level=info msg="CreateContainer within sandbox \"f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"975d6080d64a31aedea58623c8dade7c66ad204769926fa47361672cf9e02942\"" Jul 2 00:03:52.666252 containerd[1426]: time="2024-07-02T00:03:52.666173958Z" level=info msg="StartContainer for \"975d6080d64a31aedea58623c8dade7c66ad204769926fa47361672cf9e02942\"" Jul 2 00:03:52.671379 kubelet[2517]: I0702 00:03:52.670264 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7fcdbd9947-zhk75" podStartSLOduration=28.972403663 podStartE2EDuration="31.670224638s" podCreationTimestamp="2024-07-02 00:03:21 +0000 UTC" firstStartedPulling="2024-07-02 00:03:48.889793878 +0000 UTC m=+49.548979382" lastFinishedPulling="2024-07-02 00:03:51.587614853 +0000 UTC m=+52.246800357" observedRunningTime="2024-07-02 00:03:52.669775438 +0000 UTC m=+53.328960942" watchObservedRunningTime="2024-07-02 00:03:52.670224638 +0000 UTC m=+53.329410142" Jul 2 00:03:52.704446 systemd[1]: Started cri-containerd-975d6080d64a31aedea58623c8dade7c66ad204769926fa47361672cf9e02942.scope - 
libcontainer container 975d6080d64a31aedea58623c8dade7c66ad204769926fa47361672cf9e02942. Jul 2 00:03:52.747564 containerd[1426]: time="2024-07-02T00:03:52.747506605Z" level=info msg="StartContainer for \"975d6080d64a31aedea58623c8dade7c66ad204769926fa47361672cf9e02942\" returns successfully" Jul 2 00:03:53.541245 kubelet[2517]: I0702 00:03:53.541200 2517 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 00:03:53.541245 kubelet[2517]: I0702 00:03:53.541240 2517 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 00:03:57.304969 systemd[1]: Started sshd@13-10.0.0.44:22-10.0.0.1:38822.service - OpenSSH per-connection server daemon (10.0.0.1:38822). Jul 2 00:03:57.353464 sshd[5195]: Accepted publickey for core from 10.0.0.1 port 38822 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:03:57.355136 sshd[5195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:03:57.359050 systemd-logind[1416]: New session 14 of user core. Jul 2 00:03:57.370325 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 00:03:57.494249 sshd[5195]: pam_unix(sshd:session): session closed for user core Jul 2 00:03:57.504788 systemd[1]: sshd@13-10.0.0.44:22-10.0.0.1:38822.service: Deactivated successfully. Jul 2 00:03:57.507604 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:03:57.510075 systemd-logind[1416]: Session 14 logged out. Waiting for processes to exit. Jul 2 00:03:57.525481 systemd[1]: Started sshd@14-10.0.0.44:22-10.0.0.1:38834.service - OpenSSH per-connection server daemon (10.0.0.1:38834). Jul 2 00:03:57.527411 systemd-logind[1416]: Removed session 14. Jul 2 00:03:57.562261 sshd[5209]: Accepted publickey for core from 10.0.0.1 port 38834 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:03:57.563501 sshd[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:03:57.567916 systemd-logind[1416]: New session 15 of user core. Jul 2 00:03:57.575382 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 00:03:57.861163 sshd[5209]: pam_unix(sshd:session): session closed for user core Jul 2 00:03:57.876182 systemd[1]: sshd@14-10.0.0.44:22-10.0.0.1:38834.service: Deactivated successfully. Jul 2 00:03:57.881831 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 00:03:57.884128 systemd-logind[1416]: Session 15 logged out. Waiting for processes to exit. Jul 2 00:03:57.892522 systemd[1]: Started sshd@15-10.0.0.44:22-10.0.0.1:38840.service - OpenSSH per-connection server daemon (10.0.0.1:38840). Jul 2 00:03:57.893690 systemd-logind[1416]: Removed session 15. Jul 2 00:03:57.936045 sshd[5233]: Accepted publickey for core from 10.0.0.1 port 38840 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:03:57.937502 sshd[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:03:57.944696 systemd-logind[1416]: New session 16 of user core. Jul 2 00:03:57.953324 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 00:03:59.320927 sshd[5233]: pam_unix(sshd:session): session closed for user core Jul 2 00:03:59.346506 systemd[1]: Started sshd@16-10.0.0.44:22-10.0.0.1:38856.service - OpenSSH per-connection server daemon (10.0.0.1:38856). 
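The pod_startup_latency_tracker entry above for calico-kube-controllers-7fcdbd9947-zhk75 is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span minus the image-pull window between firstStartedPulling and lastFinishedPulling. A short check of the arithmetic, using the timestamps copied from the entry:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        // Layout matches the timestamps printed by the tracker entry above;
        // Go accepts the optional fractional seconds while parsing.
        t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2024-07-02 00:03:21 +0000 UTC")
        firstPull := mustParse("2024-07-02 00:03:48.889793878 +0000 UTC")
        lastPull := mustParse("2024-07-02 00:03:51.587614853 +0000 UTC")
        running := mustParse("2024-07-02 00:03:52.670224638 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration excludes the pull window

        fmt.Println(e2e, slo) // 31.670224638s 28.972403663s, matching the tracker entry
    }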
Jul 2 00:03:59.348852 systemd[1]: sshd@15-10.0.0.44:22-10.0.0.1:38840.service: Deactivated successfully. Jul 2 00:03:59.353436 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 00:03:59.356941 systemd-logind[1416]: Session 16 logged out. Waiting for processes to exit. Jul 2 00:03:59.360260 systemd-logind[1416]: Removed session 16. Jul 2 00:03:59.382750 sshd[5252]: Accepted publickey for core from 10.0.0.1 port 38856 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:03:59.384227 sshd[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:03:59.389184 systemd-logind[1416]: New session 17 of user core. Jul 2 00:03:59.403360 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 00:03:59.451329 containerd[1426]: time="2024-07-02T00:03:59.450661672Z" level=info msg="StopPodSandbox for \"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\"" Jul 2 00:03:59.549492 containerd[1426]: 2024-07-02 00:03:59.501 [WARNING][5275] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--hwst2-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"29823086-9a6f-43f9-9bc0-93ad25deb8fe", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028", Pod:"coredns-76f75df574-hwst2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e4b91b9cd2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:03:59.549492 containerd[1426]: 2024-07-02 00:03:59.502 [INFO][5275] k8s.go 608: Cleaning up netns ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Jul 2 00:03:59.549492 containerd[1426]: 2024-07-02 00:03:59.502 [INFO][5275] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" iface="eth0" netns="" Jul 2 00:03:59.549492 containerd[1426]: 2024-07-02 00:03:59.502 [INFO][5275] k8s.go 615: Releasing IP address(es) ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Jul 2 00:03:59.549492 containerd[1426]: 2024-07-02 00:03:59.502 [INFO][5275] utils.go 188: Calico CNI releasing IP address ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Jul 2 00:03:59.549492 containerd[1426]: 2024-07-02 00:03:59.529 [INFO][5288] ipam_plugin.go 411: Releasing address using handleID ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" HandleID="k8s-pod-network.3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Workload="localhost-k8s-coredns--76f75df574--hwst2-eth0" Jul 2 00:03:59.549492 containerd[1426]: 2024-07-02 00:03:59.529 [INFO][5288] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:03:59.549492 containerd[1426]: 2024-07-02 00:03:59.529 [INFO][5288] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:03:59.549492 containerd[1426]: 2024-07-02 00:03:59.543 [WARNING][5288] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" HandleID="k8s-pod-network.3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Workload="localhost-k8s-coredns--76f75df574--hwst2-eth0" Jul 2 00:03:59.549492 containerd[1426]: 2024-07-02 00:03:59.543 [INFO][5288] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" HandleID="k8s-pod-network.3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Workload="localhost-k8s-coredns--76f75df574--hwst2-eth0" Jul 2 00:03:59.549492 containerd[1426]: 2024-07-02 00:03:59.545 [INFO][5288] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:03:59.549492 containerd[1426]: 2024-07-02 00:03:59.547 [INFO][5275] k8s.go 621: Teardown processing complete. ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Jul 2 00:03:59.550032 containerd[1426]: time="2024-07-02T00:03:59.549532878Z" level=info msg="TearDown network for sandbox \"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\" successfully" Jul 2 00:03:59.550032 containerd[1426]: time="2024-07-02T00:03:59.549559478Z" level=info msg="StopPodSandbox for \"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\" returns successfully" Jul 2 00:03:59.551281 containerd[1426]: time="2024-07-02T00:03:59.550712798Z" level=info msg="RemovePodSandbox for \"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\"" Jul 2 00:03:59.552812 containerd[1426]: time="2024-07-02T00:03:59.550757918Z" level=info msg="Forcibly stopping sandbox \"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\"" Jul 2 00:03:59.645103 containerd[1426]: 2024-07-02 00:03:59.595 [WARNING][5310] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--hwst2-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"29823086-9a6f-43f9-9bc0-93ad25deb8fe", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84a294ff33c1f65e327985d6b4ba8d3103cac4faf736dc26039c5387eaf37028", Pod:"coredns-76f75df574-hwst2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3e4b91b9cd2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:03:59.645103 containerd[1426]: 2024-07-02 00:03:59.595 [INFO][5310] k8s.go 608: Cleaning up netns ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Jul 2 00:03:59.645103 containerd[1426]: 2024-07-02 00:03:59.595 [INFO][5310] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" iface="eth0" netns="" Jul 2 00:03:59.645103 containerd[1426]: 2024-07-02 00:03:59.595 [INFO][5310] k8s.go 615: Releasing IP address(es) ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Jul 2 00:03:59.645103 containerd[1426]: 2024-07-02 00:03:59.595 [INFO][5310] utils.go 188: Calico CNI releasing IP address ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Jul 2 00:03:59.645103 containerd[1426]: 2024-07-02 00:03:59.626 [INFO][5318] ipam_plugin.go 411: Releasing address using handleID ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" HandleID="k8s-pod-network.3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Workload="localhost-k8s-coredns--76f75df574--hwst2-eth0" Jul 2 00:03:59.645103 containerd[1426]: 2024-07-02 00:03:59.626 [INFO][5318] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:03:59.645103 containerd[1426]: 2024-07-02 00:03:59.626 [INFO][5318] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:03:59.645103 containerd[1426]: 2024-07-02 00:03:59.637 [WARNING][5318] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" HandleID="k8s-pod-network.3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Workload="localhost-k8s-coredns--76f75df574--hwst2-eth0" Jul 2 00:03:59.645103 containerd[1426]: 2024-07-02 00:03:59.637 [INFO][5318] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" HandleID="k8s-pod-network.3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Workload="localhost-k8s-coredns--76f75df574--hwst2-eth0" Jul 2 00:03:59.645103 containerd[1426]: 2024-07-02 00:03:59.639 [INFO][5318] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:03:59.645103 containerd[1426]: 2024-07-02 00:03:59.643 [INFO][5310] k8s.go 621: Teardown processing complete. ContainerID="3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff" Jul 2 00:03:59.645511 containerd[1426]: time="2024-07-02T00:03:59.645073564Z" level=info msg="TearDown network for sandbox \"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\" successfully" Jul 2 00:03:59.653559 containerd[1426]: time="2024-07-02T00:03:59.653488004Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:03:59.653711 containerd[1426]: time="2024-07-02T00:03:59.653594044Z" level=info msg="RemovePodSandbox \"3bb4407f8e01959e7d5682e1b8f953aac4fdf111ba75e341aa86209edfec25ff\" returns successfully" Jul 2 00:03:59.654272 containerd[1426]: time="2024-07-02T00:03:59.654240644Z" level=info msg="StopPodSandbox for \"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\"" Jul 2 00:03:59.696594 sshd[5252]: pam_unix(sshd:session): session closed for user core Jul 2 00:03:59.712087 systemd[1]: sshd@16-10.0.0.44:22-10.0.0.1:38856.service: Deactivated successfully. Jul 2 00:03:59.715820 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 00:03:59.717701 systemd-logind[1416]: Session 17 logged out. Waiting for processes to exit. Jul 2 00:03:59.725555 systemd[1]: Started sshd@17-10.0.0.44:22-10.0.0.1:38862.service - OpenSSH per-connection server daemon (10.0.0.1:38862). Jul 2 00:03:59.728350 systemd-logind[1416]: Removed session 17. Jul 2 00:03:59.769368 sshd[5352]: Accepted publickey for core from 10.0.0.1 port 38862 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:03:59.770075 sshd[5352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:03:59.771140 containerd[1426]: 2024-07-02 00:03:59.721 [WARNING][5341] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q228g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"deefda5b-5363-476d-b5c8-1f67ee1aea37", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 3, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66", Pod:"csi-node-driver-q228g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib22917b8c11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:03:59.771140 containerd[1426]: 2024-07-02 00:03:59.722 [INFO][5341] k8s.go 608: Cleaning up netns ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Jul 2 00:03:59.771140 containerd[1426]: 2024-07-02 00:03:59.722 [INFO][5341] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" iface="eth0" netns="" Jul 2 00:03:59.771140 containerd[1426]: 2024-07-02 00:03:59.722 [INFO][5341] k8s.go 615: Releasing IP address(es) ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Jul 2 00:03:59.771140 containerd[1426]: 2024-07-02 00:03:59.722 [INFO][5341] utils.go 188: Calico CNI releasing IP address ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Jul 2 00:03:59.771140 containerd[1426]: 2024-07-02 00:03:59.750 [INFO][5353] ipam_plugin.go 411: Releasing address using handleID ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" HandleID="k8s-pod-network.02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Workload="localhost-k8s-csi--node--driver--q228g-eth0" Jul 2 00:03:59.771140 containerd[1426]: 2024-07-02 00:03:59.750 [INFO][5353] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:03:59.771140 containerd[1426]: 2024-07-02 00:03:59.750 [INFO][5353] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:03:59.771140 containerd[1426]: 2024-07-02 00:03:59.764 [WARNING][5353] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" HandleID="k8s-pod-network.02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Workload="localhost-k8s-csi--node--driver--q228g-eth0" Jul 2 00:03:59.771140 containerd[1426]: 2024-07-02 00:03:59.765 [INFO][5353] ipam_plugin.go 439: Releasing address using workloadID ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" HandleID="k8s-pod-network.02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Workload="localhost-k8s-csi--node--driver--q228g-eth0" Jul 2 00:03:59.771140 containerd[1426]: 2024-07-02 00:03:59.766 [INFO][5353] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:03:59.771140 containerd[1426]: 2024-07-02 00:03:59.769 [INFO][5341] k8s.go 621: Teardown processing complete. ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Jul 2 00:03:59.771140 containerd[1426]: time="2024-07-02T00:03:59.771068571Z" level=info msg="TearDown network for sandbox \"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\" successfully" Jul 2 00:03:59.771140 containerd[1426]: time="2024-07-02T00:03:59.771095851Z" level=info msg="StopPodSandbox for \"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\" returns successfully" Jul 2 00:03:59.772497 containerd[1426]: time="2024-07-02T00:03:59.771631531Z" level=info msg="RemovePodSandbox for \"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\"" Jul 2 00:03:59.772497 containerd[1426]: time="2024-07-02T00:03:59.771681331Z" level=info msg="Forcibly stopping sandbox \"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\"" Jul 2 00:03:59.778665 systemd-logind[1416]: New session 18 of user core. Jul 2 00:03:59.785449 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 00:03:59.864812 containerd[1426]: 2024-07-02 00:03:59.819 [WARNING][5378] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--q228g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"deefda5b-5363-476d-b5c8-1f67ee1aea37", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 3, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f30b9c229f4c78deb9146f99bf49207d007bbb6ae24c8267c183ff221c8c9e66", Pod:"csi-node-driver-q228g", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib22917b8c11", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:03:59.864812 containerd[1426]: 2024-07-02 00:03:59.819 [INFO][5378] k8s.go 608: Cleaning up netns ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Jul 2 00:03:59.864812 containerd[1426]: 2024-07-02 00:03:59.819 [INFO][5378] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" iface="eth0" netns="" Jul 2 00:03:59.864812 containerd[1426]: 2024-07-02 00:03:59.819 [INFO][5378] k8s.go 615: Releasing IP address(es) ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Jul 2 00:03:59.864812 containerd[1426]: 2024-07-02 00:03:59.819 [INFO][5378] utils.go 188: Calico CNI releasing IP address ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Jul 2 00:03:59.864812 containerd[1426]: 2024-07-02 00:03:59.843 [INFO][5387] ipam_plugin.go 411: Releasing address using handleID ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" HandleID="k8s-pod-network.02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Workload="localhost-k8s-csi--node--driver--q228g-eth0" Jul 2 00:03:59.864812 containerd[1426]: 2024-07-02 00:03:59.844 [INFO][5387] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:03:59.864812 containerd[1426]: 2024-07-02 00:03:59.844 [INFO][5387] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:03:59.864812 containerd[1426]: 2024-07-02 00:03:59.855 [WARNING][5387] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" HandleID="k8s-pod-network.02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Workload="localhost-k8s-csi--node--driver--q228g-eth0" Jul 2 00:03:59.864812 containerd[1426]: 2024-07-02 00:03:59.855 [INFO][5387] ipam_plugin.go 439: Releasing address using workloadID ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" HandleID="k8s-pod-network.02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Workload="localhost-k8s-csi--node--driver--q228g-eth0" Jul 2 00:03:59.864812 containerd[1426]: 2024-07-02 00:03:59.857 [INFO][5387] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:03:59.864812 containerd[1426]: 2024-07-02 00:03:59.862 [INFO][5378] k8s.go 621: Teardown processing complete. ContainerID="02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3" Jul 2 00:03:59.864812 containerd[1426]: time="2024-07-02T00:03:59.864806937Z" level=info msg="TearDown network for sandbox \"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\" successfully" Jul 2 00:03:59.869015 containerd[1426]: time="2024-07-02T00:03:59.868969457Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:03:59.869123 containerd[1426]: time="2024-07-02T00:03:59.869048777Z" level=info msg="RemovePodSandbox \"02f651a740dc2ea0111be18d87b0251beb68220a2a73897ae74f8276111ac1d3\" returns successfully" Jul 2 00:03:59.869542 containerd[1426]: time="2024-07-02T00:03:59.869503257Z" level=info msg="StopPodSandbox for \"b69241a2590aad54d11032704f36360852adc01ceb8bee21244d6bb2849e816d\"" Jul 2 00:03:59.869642 containerd[1426]: time="2024-07-02T00:03:59.869593977Z" level=info msg="TearDown network for sandbox \"b69241a2590aad54d11032704f36360852adc01ceb8bee21244d6bb2849e816d\" successfully" Jul 2 00:03:59.869682 containerd[1426]: time="2024-07-02T00:03:59.869640937Z" level=info msg="StopPodSandbox for \"b69241a2590aad54d11032704f36360852adc01ceb8bee21244d6bb2849e816d\" returns successfully" Jul 2 00:03:59.870010 containerd[1426]: time="2024-07-02T00:03:59.869983017Z" level=info msg="RemovePodSandbox for \"b69241a2590aad54d11032704f36360852adc01ceb8bee21244d6bb2849e816d\"" Jul 2 00:03:59.870052 containerd[1426]: time="2024-07-02T00:03:59.870012697Z" level=info msg="Forcibly stopping sandbox \"b69241a2590aad54d11032704f36360852adc01ceb8bee21244d6bb2849e816d\"" Jul 2 00:03:59.870387 containerd[1426]: time="2024-07-02T00:03:59.870076697Z" level=info msg="TearDown network for sandbox \"b69241a2590aad54d11032704f36360852adc01ceb8bee21244d6bb2849e816d\" successfully" Jul 2 00:03:59.875809 containerd[1426]: time="2024-07-02T00:03:59.875768897Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b69241a2590aad54d11032704f36360852adc01ceb8bee21244d6bb2849e816d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:03:59.875968 containerd[1426]: time="2024-07-02T00:03:59.875941658Z" level=info msg="RemovePodSandbox \"b69241a2590aad54d11032704f36360852adc01ceb8bee21244d6bb2849e816d\" returns successfully" Jul 2 00:03:59.876362 containerd[1426]: time="2024-07-02T00:03:59.876336538Z" level=info msg="StopPodSandbox for \"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\"" Jul 2 00:03:59.935753 sshd[5352]: pam_unix(sshd:session): session closed for user core Jul 2 00:03:59.940918 systemd[1]: sshd@17-10.0.0.44:22-10.0.0.1:38862.service: Deactivated successfully. Jul 2 00:03:59.943503 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 00:03:59.945720 systemd-logind[1416]: Session 18 logged out. Waiting for processes to exit. Jul 2 00:03:59.946738 systemd-logind[1416]: Removed session 18. Jul 2 00:03:59.964311 containerd[1426]: 2024-07-02 00:03:59.918 [WARNING][5418] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0", GenerateName:"calico-kube-controllers-7fcdbd9947-", Namespace:"calico-system", SelfLink:"", UID:"f4df34b6-4336-4a49-a4ba-110e2697cd8a", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 3, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fcdbd9947", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a", Pod:"calico-kube-controllers-7fcdbd9947-zhk75", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali05f797c0181", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:03:59.964311 containerd[1426]: 2024-07-02 00:03:59.918 [INFO][5418] k8s.go 608: Cleaning up netns ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Jul 2 00:03:59.964311 containerd[1426]: 2024-07-02 00:03:59.918 [INFO][5418] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" iface="eth0" netns="" Jul 2 00:03:59.964311 containerd[1426]: 2024-07-02 00:03:59.918 [INFO][5418] k8s.go 615: Releasing IP address(es) ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Jul 2 00:03:59.964311 containerd[1426]: 2024-07-02 00:03:59.918 [INFO][5418] utils.go 188: Calico CNI releasing IP address ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Jul 2 00:03:59.964311 containerd[1426]: 2024-07-02 00:03:59.947 [INFO][5426] ipam_plugin.go 411: Releasing address using handleID ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" HandleID="k8s-pod-network.74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Workload="localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0" Jul 2 00:03:59.964311 containerd[1426]: 2024-07-02 00:03:59.947 [INFO][5426] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:03:59.964311 containerd[1426]: 2024-07-02 00:03:59.947 [INFO][5426] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:03:59.964311 containerd[1426]: 2024-07-02 00:03:59.958 [WARNING][5426] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" HandleID="k8s-pod-network.74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Workload="localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0" Jul 2 00:03:59.964311 containerd[1426]: 2024-07-02 00:03:59.958 [INFO][5426] ipam_plugin.go 439: Releasing address using workloadID ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" HandleID="k8s-pod-network.74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Workload="localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0" Jul 2 00:03:59.964311 containerd[1426]: 2024-07-02 00:03:59.960 [INFO][5426] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:03:59.964311 containerd[1426]: 2024-07-02 00:03:59.962 [INFO][5418] k8s.go 621: Teardown processing complete. ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Jul 2 00:03:59.964757 containerd[1426]: time="2024-07-02T00:03:59.964339783Z" level=info msg="TearDown network for sandbox \"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\" successfully" Jul 2 00:03:59.964757 containerd[1426]: time="2024-07-02T00:03:59.964365543Z" level=info msg="StopPodSandbox for \"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\" returns successfully" Jul 2 00:03:59.965334 containerd[1426]: time="2024-07-02T00:03:59.964959783Z" level=info msg="RemovePodSandbox for \"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\"" Jul 2 00:03:59.965334 containerd[1426]: time="2024-07-02T00:03:59.965004343Z" level=info msg="Forcibly stopping sandbox \"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\"" Jul 2 00:04:00.044836 containerd[1426]: 2024-07-02 00:04:00.005 [WARNING][5450] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0", GenerateName:"calico-kube-controllers-7fcdbd9947-", Namespace:"calico-system", SelfLink:"", UID:"f4df34b6-4336-4a49-a4ba-110e2697cd8a", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 3, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fcdbd9947", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cfb04c1ff57579d9042a0f2d2dd935e0e73192ef8b60799c713c44d0f85c177a", Pod:"calico-kube-controllers-7fcdbd9947-zhk75", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali05f797c0181", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:04:00.044836 containerd[1426]: 2024-07-02 00:04:00.005 [INFO][5450] k8s.go 608: Cleaning up netns ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Jul 2 00:04:00.044836 containerd[1426]: 2024-07-02 00:04:00.005 [INFO][5450] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" iface="eth0" netns="" Jul 2 00:04:00.044836 containerd[1426]: 2024-07-02 00:04:00.005 [INFO][5450] k8s.go 615: Releasing IP address(es) ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Jul 2 00:04:00.044836 containerd[1426]: 2024-07-02 00:04:00.005 [INFO][5450] utils.go 188: Calico CNI releasing IP address ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Jul 2 00:04:00.044836 containerd[1426]: 2024-07-02 00:04:00.029 [INFO][5458] ipam_plugin.go 411: Releasing address using handleID ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" HandleID="k8s-pod-network.74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Workload="localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0" Jul 2 00:04:00.044836 containerd[1426]: 2024-07-02 00:04:00.029 [INFO][5458] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:04:00.044836 containerd[1426]: 2024-07-02 00:04:00.029 [INFO][5458] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:04:00.044836 containerd[1426]: 2024-07-02 00:04:00.038 [WARNING][5458] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" HandleID="k8s-pod-network.74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Workload="localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0" Jul 2 00:04:00.044836 containerd[1426]: 2024-07-02 00:04:00.038 [INFO][5458] ipam_plugin.go 439: Releasing address using workloadID ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" HandleID="k8s-pod-network.74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Workload="localhost-k8s-calico--kube--controllers--7fcdbd9947--zhk75-eth0" Jul 2 00:04:00.044836 containerd[1426]: 2024-07-02 00:04:00.040 [INFO][5458] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:04:00.044836 containerd[1426]: 2024-07-02 00:04:00.042 [INFO][5450] k8s.go 621: Teardown processing complete. ContainerID="74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37" Jul 2 00:04:00.045293 containerd[1426]: time="2024-07-02T00:04:00.044860588Z" level=info msg="TearDown network for sandbox \"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\" successfully" Jul 2 00:04:00.047674 containerd[1426]: time="2024-07-02T00:04:00.047615508Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:04:00.047737 containerd[1426]: time="2024-07-02T00:04:00.047698388Z" level=info msg="RemovePodSandbox \"74fc208afc214d8620567a07344bf3ab16ff9ceedaa2718734bd844971641c37\" returns successfully" Jul 2 00:04:00.048317 containerd[1426]: time="2024-07-02T00:04:00.048285708Z" level=info msg="StopPodSandbox for \"e952993d41804daa5dff4b407679b7cafe67e1943bd507eb1d89bfcf0506f244\"" Jul 2 00:04:00.048423 containerd[1426]: time="2024-07-02T00:04:00.048368588Z" level=info msg="TearDown network for sandbox \"e952993d41804daa5dff4b407679b7cafe67e1943bd507eb1d89bfcf0506f244\" successfully" Jul 2 00:04:00.048423 containerd[1426]: time="2024-07-02T00:04:00.048412148Z" level=info msg="StopPodSandbox for \"e952993d41804daa5dff4b407679b7cafe67e1943bd507eb1d89bfcf0506f244\" returns successfully" Jul 2 00:04:00.048946 containerd[1426]: time="2024-07-02T00:04:00.048912188Z" level=info msg="RemovePodSandbox for \"e952993d41804daa5dff4b407679b7cafe67e1943bd507eb1d89bfcf0506f244\"" Jul 2 00:04:00.048987 containerd[1426]: time="2024-07-02T00:04:00.048940388Z" level=info msg="Forcibly stopping sandbox \"e952993d41804daa5dff4b407679b7cafe67e1943bd507eb1d89bfcf0506f244\"" Jul 2 00:04:00.049017 containerd[1426]: time="2024-07-02T00:04:00.049008308Z" level=info msg="TearDown network for sandbox \"e952993d41804daa5dff4b407679b7cafe67e1943bd507eb1d89bfcf0506f244\" successfully" Jul 2 00:04:00.051626 containerd[1426]: time="2024-07-02T00:04:00.051581748Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e952993d41804daa5dff4b407679b7cafe67e1943bd507eb1d89bfcf0506f244\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:04:00.051694 containerd[1426]: time="2024-07-02T00:04:00.051649908Z" level=info msg="RemovePodSandbox \"e952993d41804daa5dff4b407679b7cafe67e1943bd507eb1d89bfcf0506f244\" returns successfully" Jul 2 00:04:00.052024 containerd[1426]: time="2024-07-02T00:04:00.051992588Z" level=info msg="StopPodSandbox for \"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\"" Jul 2 00:04:00.135638 containerd[1426]: 2024-07-02 00:04:00.095 [WARNING][5480] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--md95h-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"90c438d1-b083-497d-a344-5fac16fe8bda", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8", Pod:"coredns-76f75df574-md95h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd3b19ed3c8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:04:00.135638 containerd[1426]: 2024-07-02 00:04:00.095 [INFO][5480] k8s.go 608: Cleaning up netns ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Jul 2 00:04:00.135638 containerd[1426]: 2024-07-02 00:04:00.095 [INFO][5480] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" iface="eth0" netns="" Jul 2 00:04:00.135638 containerd[1426]: 2024-07-02 00:04:00.095 [INFO][5480] k8s.go 615: Releasing IP address(es) ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Jul 2 00:04:00.135638 containerd[1426]: 2024-07-02 00:04:00.095 [INFO][5480] utils.go 188: Calico CNI releasing IP address ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Jul 2 00:04:00.135638 containerd[1426]: 2024-07-02 00:04:00.120 [INFO][5488] ipam_plugin.go 411: Releasing address using handleID ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" HandleID="k8s-pod-network.461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Workload="localhost-k8s-coredns--76f75df574--md95h-eth0" Jul 2 00:04:00.135638 containerd[1426]: 2024-07-02 00:04:00.120 [INFO][5488] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:04:00.135638 containerd[1426]: 2024-07-02 00:04:00.120 [INFO][5488] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:04:00.135638 containerd[1426]: 2024-07-02 00:04:00.129 [WARNING][5488] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" HandleID="k8s-pod-network.461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Workload="localhost-k8s-coredns--76f75df574--md95h-eth0" Jul 2 00:04:00.135638 containerd[1426]: 2024-07-02 00:04:00.129 [INFO][5488] ipam_plugin.go 439: Releasing address using workloadID ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" HandleID="k8s-pod-network.461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Workload="localhost-k8s-coredns--76f75df574--md95h-eth0" Jul 2 00:04:00.135638 containerd[1426]: 2024-07-02 00:04:00.131 [INFO][5488] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:04:00.135638 containerd[1426]: 2024-07-02 00:04:00.133 [INFO][5480] k8s.go 621: Teardown processing complete. ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Jul 2 00:04:00.136060 containerd[1426]: time="2024-07-02T00:04:00.135687793Z" level=info msg="TearDown network for sandbox \"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\" successfully" Jul 2 00:04:00.136060 containerd[1426]: time="2024-07-02T00:04:00.135715553Z" level=info msg="StopPodSandbox for \"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\" returns successfully" Jul 2 00:04:00.136214 containerd[1426]: time="2024-07-02T00:04:00.136190313Z" level=info msg="RemovePodSandbox for \"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\"" Jul 2 00:04:00.136268 containerd[1426]: time="2024-07-02T00:04:00.136226993Z" level=info msg="Forcibly stopping sandbox \"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\"" Jul 2 00:04:00.211869 containerd[1426]: 2024-07-02 00:04:00.175 [WARNING][5510] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--md95h-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"90c438d1-b083-497d-a344-5fac16fe8bda", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53c6625fa1d90bb4227ff1f5e922bf850118de5fbb48f884544f73f26625b3a8", Pod:"coredns-76f75df574-md95h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd3b19ed3c8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:04:00.211869 containerd[1426]: 2024-07-02 00:04:00.175 [INFO][5510] k8s.go 608: Cleaning up netns ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Jul 2 00:04:00.211869 containerd[1426]: 2024-07-02 00:04:00.175 [INFO][5510] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" iface="eth0" netns="" Jul 2 00:04:00.211869 containerd[1426]: 2024-07-02 00:04:00.175 [INFO][5510] k8s.go 615: Releasing IP address(es) ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Jul 2 00:04:00.211869 containerd[1426]: 2024-07-02 00:04:00.175 [INFO][5510] utils.go 188: Calico CNI releasing IP address ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Jul 2 00:04:00.211869 containerd[1426]: 2024-07-02 00:04:00.195 [INFO][5518] ipam_plugin.go 411: Releasing address using handleID ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" HandleID="k8s-pod-network.461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Workload="localhost-k8s-coredns--76f75df574--md95h-eth0" Jul 2 00:04:00.211869 containerd[1426]: 2024-07-02 00:04:00.195 [INFO][5518] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:04:00.211869 containerd[1426]: 2024-07-02 00:04:00.195 [INFO][5518] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:04:00.211869 containerd[1426]: 2024-07-02 00:04:00.206 [WARNING][5518] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" HandleID="k8s-pod-network.461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Workload="localhost-k8s-coredns--76f75df574--md95h-eth0" Jul 2 00:04:00.211869 containerd[1426]: 2024-07-02 00:04:00.206 [INFO][5518] ipam_plugin.go 439: Releasing address using workloadID ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" HandleID="k8s-pod-network.461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Workload="localhost-k8s-coredns--76f75df574--md95h-eth0" Jul 2 00:04:00.211869 containerd[1426]: 2024-07-02 00:04:00.208 [INFO][5518] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:04:00.211869 containerd[1426]: 2024-07-02 00:04:00.209 [INFO][5510] k8s.go 621: Teardown processing complete. ContainerID="461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c" Jul 2 00:04:00.211869 containerd[1426]: time="2024-07-02T00:04:00.211833157Z" level=info msg="TearDown network for sandbox \"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\" successfully" Jul 2 00:04:00.216068 containerd[1426]: time="2024-07-02T00:04:00.216023677Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:04:00.216166 containerd[1426]: time="2024-07-02T00:04:00.216095037Z" level=info msg="RemovePodSandbox \"461ac801cec51e98ee23da545549b49f2037ab17a7a24807059bdd4ecfee906c\" returns successfully" Jul 2 00:04:04.952086 systemd[1]: Started sshd@18-10.0.0.44:22-10.0.0.1:58196.service - OpenSSH per-connection server daemon (10.0.0.1:58196). Jul 2 00:04:04.988791 sshd[5547]: Accepted publickey for core from 10.0.0.1 port 58196 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:04:04.990223 sshd[5547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:04:04.994279 systemd-logind[1416]: New session 19 of user core. Jul 2 00:04:05.000346 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 00:04:05.122962 sshd[5547]: pam_unix(sshd:session): session closed for user core Jul 2 00:04:05.126762 systemd[1]: sshd@18-10.0.0.44:22-10.0.0.1:58196.service: Deactivated successfully. Jul 2 00:04:05.129929 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 00:04:05.132195 systemd-logind[1416]: Session 19 logged out. Waiting for processes to exit. Jul 2 00:04:05.133327 systemd-logind[1416]: Removed session 19. 
Jul 2 00:04:08.592770 kubelet[2517]: E0702 00:04:08.592721 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 00:04:08.607530 kubelet[2517]: I0702 00:04:08.606708 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-q228g" podStartSLOduration=44.810046611 podStartE2EDuration="48.606667254s" podCreationTimestamp="2024-07-02 00:03:20 +0000 UTC" firstStartedPulling="2024-07-02 00:03:48.843926792 +0000 UTC m=+49.503112256" lastFinishedPulling="2024-07-02 00:03:52.640547395 +0000 UTC m=+53.299732899" observedRunningTime="2024-07-02 00:03:53.667483528 +0000 UTC m=+54.326669072" watchObservedRunningTime="2024-07-02 00:04:08.606667254 +0000 UTC m=+69.265852718" Jul 2 00:04:10.134176 systemd[1]: Started sshd@19-10.0.0.44:22-10.0.0.1:58204.service - OpenSSH per-connection server daemon (10.0.0.1:58204). Jul 2 00:04:10.171967 sshd[5598]: Accepted publickey for core from 10.0.0.1 port 58204 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:04:10.173344 sshd[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:04:10.177185 systemd-logind[1416]: New session 20 of user core. Jul 2 00:04:10.188432 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 00:04:10.316121 sshd[5598]: pam_unix(sshd:session): session closed for user core Jul 2 00:04:10.319899 systemd[1]: sshd@19-10.0.0.44:22-10.0.0.1:58204.service: Deactivated successfully. Jul 2 00:04:10.322078 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 00:04:10.324320 systemd-logind[1416]: Session 20 logged out. Waiting for processes to exit. Jul 2 00:04:10.325489 systemd-logind[1416]: Removed session 20. Jul 2 00:04:15.333521 systemd[1]: Started sshd@20-10.0.0.44:22-10.0.0.1:34372.service - OpenSSH per-connection server daemon (10.0.0.1:34372). Jul 2 00:04:15.370717 sshd[5615]: Accepted publickey for core from 10.0.0.1 port 34372 ssh2: RSA SHA256:Et/UiMXmFMbY2cyXsriYvaFlh38PhzkKrD1eNEeM82U Jul 2 00:04:15.372171 sshd[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:04:15.376251 systemd-logind[1416]: New session 21 of user core. Jul 2 00:04:15.388370 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 2 00:04:15.510089 sshd[5615]: pam_unix(sshd:session): session closed for user core Jul 2 00:04:15.514284 systemd[1]: sshd@20-10.0.0.44:22-10.0.0.1:34372.service: Deactivated successfully. Jul 2 00:04:15.516113 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 00:04:15.516925 systemd-logind[1416]: Session 21 logged out. Waiting for processes to exit. Jul 2 00:04:15.519106 systemd-logind[1416]: Removed session 21.
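The "Nameserver limits exceeded" warning above means the resolv.conf handed to this pod listed more nameservers than kubelet will pass through; only the first three survive, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A small illustration of that truncation, assuming a limit of three and a purely hypothetical fourth entry; this is not kubelet's actual resolv.conf parser:

package main

import (
    "fmt"
    "strings"
)

// maxNameservers mirrors the limit implied by the log line: three nameservers
// are applied and the rest are omitted.
const maxNameservers = 3

// applyNameserverLimit is an illustration only: it keeps the first
// maxNameservers "nameserver" entries from a resolv.conf body.
func applyNameserverLimit(resolvConf string) (applied, omitted []string) {
    for _, line := range strings.Split(resolvConf, "\n") {
        fields := strings.Fields(line)
        if len(fields) == 2 && fields[0] == "nameserver" {
            if len(applied) < maxNameservers {
                applied = append(applied, fields[1])
            } else {
                omitted = append(omitted, fields[1])
            }
        }
    }
    return applied, omitted
}

func main() {
    // A resolv.conf with four entries would reproduce the warning; the fourth
    // address here is hypothetical.
    conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
    applied, omitted := applyNameserverLimit(conf)
    fmt.Println("applied nameserver line:", strings.Join(applied, " "))
    fmt.Println("omitted:", omitted)
}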
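The pod_startup_latency_tracker line for csi-node-driver-q228g is self-consistent arithmetic: podStartE2EDuration is the observed-running time minus the pod creation timestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the monotonic m=+ offsets on the same line). A quick check in Go using only numbers copied from the log entry:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Values copied from the kubelet pod_startup_latency_tracker entry above.
    created := time.Date(2024, time.July, 2, 0, 3, 20, 0, time.UTC)                // podCreationTimestamp
    observedRunning := time.Date(2024, time.July, 2, 0, 4, 8, 606667254, time.UTC) // watchObservedRunningTime

    // Image-pull window from the monotonic m=+ offsets.
    firstStartedPulling := 49503112256 * time.Nanosecond // m=+49.503112256
    lastFinishedPulling := 53299732899 * time.Nanosecond // m=+53.299732899
    pull := lastFinishedPulling - firstStartedPulling    // 3.796620643s

    e2e := observedRunning.Sub(created) // 48.606667254s, the logged podStartE2EDuration
    slo := e2e - pull                   // 44.810046611s, the logged podStartSLOduration

    fmt.Println("podStartE2EDuration:", e2e)
    fmt.Println("image pull window:  ", pull)
    fmt.Println("podStartSLOduration:", slo)
}

48.606667254s minus the 3.796620643s pull window reproduces the logged 44.810046611s exactly.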