Jul 2 09:07:58.873556 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 2 09:07:58.873576 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024
Jul 2 09:07:58.873586 kernel: KASLR enabled
Jul 2 09:07:58.873592 kernel: efi: EFI v2.7 by EDK II
Jul 2 09:07:58.873598 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 2 09:07:58.873603 kernel: random: crng init done
Jul 2 09:07:58.873610 kernel: ACPI: Early table checksum verification disabled
Jul 2 09:07:58.873616 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 2 09:07:58.873622 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 2 09:07:58.873629 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:07:58.873635 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:07:58.873641 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:07:58.873647 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:07:58.873653 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:07:58.873661 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:07:58.873668 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:07:58.873674 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:07:58.873681 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 09:07:58.873687 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 2 09:07:58.873693 kernel: NUMA: Failed to initialise from firmware
Jul 2 09:07:58.873700 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 09:07:58.873706 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jul 2 09:07:58.873712 kernel: Zone ranges:
Jul 2 09:07:58.873719 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 09:07:58.873725 kernel: DMA32 empty
Jul 2 09:07:58.873732 kernel: Normal empty
Jul 2 09:07:58.873738 kernel: Movable zone start for each node
Jul 2 09:07:58.873745 kernel: Early memory node ranges
Jul 2 09:07:58.873751 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 2 09:07:58.873757 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 2 09:07:58.873763 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 2 09:07:58.873770 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 2 09:07:58.873776 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 2 09:07:58.873782 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 2 09:07:58.873788 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 2 09:07:58.873794 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 09:07:58.873801 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 2 09:07:58.873808 kernel: psci: probing for conduit method from ACPI.
Jul 2 09:07:58.873814 kernel: psci: PSCIv1.1 detected in firmware.
Jul 2 09:07:58.873821 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 09:07:58.873830 kernel: psci: Trusted OS migration not required
Jul 2 09:07:58.873836 kernel: psci: SMC Calling Convention v1.1
Jul 2 09:07:58.873843 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 2 09:07:58.873851 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Jul 2 09:07:58.873858 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Jul 2 09:07:58.873865 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 2 09:07:58.873871 kernel: Detected PIPT I-cache on CPU0
Jul 2 09:07:58.873878 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 09:07:58.873885 kernel: CPU features: detected: Hardware dirty bit management
Jul 2 09:07:58.873891 kernel: CPU features: detected: Spectre-v4
Jul 2 09:07:58.873898 kernel: CPU features: detected: Spectre-BHB
Jul 2 09:07:58.873905 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 2 09:07:58.873912 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 2 09:07:58.873920 kernel: CPU features: detected: ARM erratum 1418040
Jul 2 09:07:58.873926 kernel: alternatives: applying boot alternatives
Jul 2 09:07:58.873934 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930
Jul 2 09:07:58.873941 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 09:07:58.873948 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 09:07:58.873955 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 09:07:58.873962 kernel: Fallback order for Node 0: 0
Jul 2 09:07:58.873968 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 2 09:07:58.873975 kernel: Policy zone: DMA
Jul 2 09:07:58.873981 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 09:07:58.873988 kernel: software IO TLB: area num 4.
Jul 2 09:07:58.873996 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 2 09:07:58.874003 kernel: Memory: 2386852K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185436K reserved, 0K cma-reserved)
Jul 2 09:07:58.874010 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 2 09:07:58.874017 kernel: trace event string verifier disabled
Jul 2 09:07:58.874023 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 09:07:58.874030 kernel: rcu: RCU event tracing is enabled.
Jul 2 09:07:58.874037 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 2 09:07:58.874044 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 09:07:58.874062 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 09:07:58.874069 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 09:07:58.874076 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 2 09:07:58.874083 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 09:07:58.874091 kernel: GICv3: 256 SPIs implemented
Jul 2 09:07:58.874098 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 09:07:58.874104 kernel: Root IRQ handler: gic_handle_irq
Jul 2 09:07:58.874111 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 2 09:07:58.874118 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 2 09:07:58.874125 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 2 09:07:58.874131 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Jul 2 09:07:58.874138 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Jul 2 09:07:58.874145 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 2 09:07:58.874152 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 2 09:07:58.874190 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 09:07:58.874199 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:07:58.874206 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 2 09:07:58.874213 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 2 09:07:58.874220 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 2 09:07:58.874226 kernel: arm-pv: using stolen time PV
Jul 2 09:07:58.874233 kernel: Console: colour dummy device 80x25
Jul 2 09:07:58.874240 kernel: ACPI: Core revision 20230628
Jul 2 09:07:58.874247 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 2 09:07:58.874254 kernel: pid_max: default: 32768 minimum: 301
Jul 2 09:07:58.874261 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 09:07:58.874269 kernel: SELinux: Initializing.
Jul 2 09:07:58.874276 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 09:07:58.874283 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 09:07:58.874290 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 09:07:58.874297 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 09:07:58.874304 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 09:07:58.874311 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 09:07:58.874318 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 2 09:07:58.874324 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 2 09:07:58.874332 kernel: Remapping and enabling EFI services.
Jul 2 09:07:58.874339 kernel: smp: Bringing up secondary CPUs ...
Jul 2 09:07:58.874346 kernel: Detected PIPT I-cache on CPU1
Jul 2 09:07:58.874353 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 2 09:07:58.874360 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 2 09:07:58.874367 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:07:58.874374 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 2 09:07:58.874380 kernel: Detected PIPT I-cache on CPU2
Jul 2 09:07:58.874387 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 2 09:07:58.874394 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 2 09:07:58.874403 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:07:58.874409 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 2 09:07:58.874421 kernel: Detected PIPT I-cache on CPU3
Jul 2 09:07:58.874429 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 2 09:07:58.874436 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 2 09:07:58.874444 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 09:07:58.874451 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 2 09:07:58.874458 kernel: smp: Brought up 1 node, 4 CPUs
Jul 2 09:07:58.874465 kernel: SMP: Total of 4 processors activated.
Jul 2 09:07:58.874474 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 09:07:58.874481 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 2 09:07:58.874495 kernel: CPU features: detected: Common not Private translations
Jul 2 09:07:58.874503 kernel: CPU features: detected: CRC32 instructions
Jul 2 09:07:58.874510 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 2 09:07:58.874517 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 2 09:07:58.874525 kernel: CPU features: detected: LSE atomic instructions
Jul 2 09:07:58.874532 kernel: CPU features: detected: Privileged Access Never
Jul 2 09:07:58.874541 kernel: CPU features: detected: RAS Extension Support
Jul 2 09:07:58.874549 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 2 09:07:58.874556 kernel: CPU: All CPU(s) started at EL1
Jul 2 09:07:58.874563 kernel: alternatives: applying system-wide alternatives
Jul 2 09:07:58.874570 kernel: devtmpfs: initialized
Jul 2 09:07:58.874577 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 09:07:58.874585 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 2 09:07:58.874592 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 09:07:58.874599 kernel: SMBIOS 3.0.0 present.
Jul 2 09:07:58.874608 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 2 09:07:58.874615 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 09:07:58.874623 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 09:07:58.874630 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 09:07:58.874637 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 09:07:58.874644 kernel: audit: initializing netlink subsys (disabled)
Jul 2 09:07:58.874652 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
Jul 2 09:07:58.874659 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 09:07:58.874666 kernel: cpuidle: using governor menu
Jul 2 09:07:58.874675 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 09:07:58.874682 kernel: ASID allocator initialised with 32768 entries
Jul 2 09:07:58.874689 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 09:07:58.874696 kernel: Serial: AMBA PL011 UART driver
Jul 2 09:07:58.874704 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 2 09:07:58.874711 kernel: Modules: 0 pages in range for non-PLT usage
Jul 2 09:07:58.874718 kernel: Modules: 509120 pages in range for PLT usage
Jul 2 09:07:58.874725 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 09:07:58.874732 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 09:07:58.874741 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 09:07:58.874749 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 2 09:07:58.874756 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 09:07:58.874763 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 09:07:58.874770 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 09:07:58.874777 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 2 09:07:58.874784 kernel: ACPI: Added _OSI(Module Device)
Jul 2 09:07:58.874791 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 09:07:58.874799 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 09:07:58.874807 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 09:07:58.874814 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 09:07:58.874822 kernel: ACPI: Interpreter enabled
Jul 2 09:07:58.874829 kernel: ACPI: Using GIC for interrupt routing
Jul 2 09:07:58.874836 kernel: ACPI: MCFG table detected, 1 entries
Jul 2 09:07:58.874843 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 2 09:07:58.874850 kernel: printk: console [ttyAMA0] enabled
Jul 2 09:07:58.874858 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 09:07:58.874988 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 09:07:58.875075 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 2 09:07:58.875142 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 2 09:07:58.875205 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 2 09:07:58.875267 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 2 09:07:58.875277 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 2 09:07:58.875285 kernel: PCI host bridge to bus 0000:00
Jul 2 09:07:58.875351 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 2 09:07:58.875428 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 2 09:07:58.875546 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 2 09:07:58.875608 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 09:07:58.875689 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 2 09:07:58.875768 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 09:07:58.875835 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 2 09:07:58.875904 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 2 09:07:58.875968 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 09:07:58.876033 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 09:07:58.876129 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 2 09:07:58.876194 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 2 09:07:58.876252 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 2 09:07:58.876309 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 2 09:07:58.876369 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 2 09:07:58.876379 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 2 09:07:58.876387 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 2 09:07:58.876394 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 2 09:07:58.876401 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 2 09:07:58.876408 kernel: iommu: Default domain type: Translated
Jul 2 09:07:58.876416 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 09:07:58.876423 kernel: efivars: Registered efivars operations
Jul 2 09:07:58.876430 kernel: vgaarb: loaded
Jul 2 09:07:58.876439 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 09:07:58.876447 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 09:07:58.876454 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 09:07:58.876461 kernel: pnp: PnP ACPI init
Jul 2 09:07:58.876541 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 2 09:07:58.876553 kernel: pnp: PnP ACPI: found 1 devices
Jul 2 09:07:58.876560 kernel: NET: Registered PF_INET protocol family
Jul 2 09:07:58.876568 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 09:07:58.876577 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 09:07:58.876585 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 09:07:58.876592 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 09:07:58.876599 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 2 09:07:58.876607 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 09:07:58.876614 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 09:07:58.876621 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 09:07:58.876629 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 09:07:58.876636 kernel: PCI: CLS 0 bytes, default 64
Jul 2 09:07:58.876645 kernel: kvm [1]: HYP mode not available
Jul 2 09:07:58.876652 kernel: Initialise system trusted keyrings
Jul 2 09:07:58.876659 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 09:07:58.876667 kernel: Key type asymmetric registered
Jul 2 09:07:58.876674 kernel: Asymmetric key parser 'x509' registered
Jul 2 09:07:58.876681 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 2 09:07:58.876689 kernel: io scheduler mq-deadline registered
Jul 2 09:07:58.876696 kernel: io scheduler kyber registered
Jul 2 09:07:58.876703 kernel: io scheduler bfq registered
Jul 2 09:07:58.876711 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 2 09:07:58.876719 kernel: ACPI: button: Power Button [PWRB]
Jul 2 09:07:58.876727 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 2 09:07:58.876793 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 2 09:07:58.876804 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 09:07:58.876811 kernel: thunder_xcv, ver 1.0
Jul 2 09:07:58.876818 kernel: thunder_bgx, ver 1.0
Jul 2 09:07:58.876825 kernel: nicpf, ver 1.0
Jul 2 09:07:58.876833 kernel: nicvf, ver 1.0
Jul 2 09:07:58.876905 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 09:07:58.876967 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T09:07:58 UTC (1719911278)
Jul 2 09:07:58.876977 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 09:07:58.876984 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 2 09:07:58.876992 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 2 09:07:58.876999 kernel: watchdog: Hard watchdog permanently disabled
Jul 2 09:07:58.877006 kernel: NET: Registered PF_INET6 protocol family
Jul 2 09:07:58.877013 kernel: Segment Routing with IPv6
Jul 2 09:07:58.877022 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 09:07:58.877030 kernel: NET: Registered PF_PACKET protocol family
Jul 2 09:07:58.877037 kernel: Key type dns_resolver registered
Jul 2 09:07:58.877044 kernel: registered taskstats version 1
Jul 2 09:07:58.877062 kernel: Loading compiled-in X.509 certificates
Jul 2 09:07:58.877070 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf'
Jul 2 09:07:58.877077 kernel: Key type .fscrypt registered
Jul 2 09:07:58.877085 kernel: Key type fscrypt-provisioning registered
Jul 2 09:07:58.877092 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 09:07:58.877101 kernel: ima: Allocated hash algorithm: sha1
Jul 2 09:07:58.877108 kernel: ima: No architecture policies found
Jul 2 09:07:58.877116 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 09:07:58.877123 kernel: clk: Disabling unused clocks
Jul 2 09:07:58.877130 kernel: Freeing unused kernel memory: 39040K
Jul 2 09:07:58.877137 kernel: Run /init as init process
Jul 2 09:07:58.877144 kernel: with arguments:
Jul 2 09:07:58.877151 kernel: /init
Jul 2 09:07:58.877158 kernel: with environment:
Jul 2 09:07:58.877167 kernel: HOME=/
Jul 2 09:07:58.877174 kernel: TERM=linux
Jul 2 09:07:58.877182 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 09:07:58.877190 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 09:07:58.877200 systemd[1]: Detected virtualization kvm.
Jul 2 09:07:58.877208 systemd[1]: Detected architecture arm64.
Jul 2 09:07:58.877215 systemd[1]: Running in initrd.
Jul 2 09:07:58.877223 systemd[1]: No hostname configured, using default hostname.
Jul 2 09:07:58.877232 systemd[1]: Hostname set to .
Jul 2 09:07:58.877240 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 09:07:58.877247 systemd[1]: Queued start job for default target initrd.target.
Jul 2 09:07:58.877255 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 09:07:58.877263 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 09:07:58.877271 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 09:07:58.877279 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 09:07:58.877288 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 09:07:58.877296 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 09:07:58.877306 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 09:07:58.877314 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 09:07:58.877321 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 09:07:58.877329 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 09:07:58.877337 systemd[1]: Reached target paths.target - Path Units.
Jul 2 09:07:58.877346 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 09:07:58.877354 systemd[1]: Reached target swap.target - Swaps.
Jul 2 09:07:58.877362 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 09:07:58.877369 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 09:07:58.877377 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 09:07:58.877385 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 09:07:58.877393 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 09:07:58.877401 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 09:07:58.877408 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 09:07:58.877417 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 09:07:58.877425 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 09:07:58.877433 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 09:07:58.877441 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 09:07:58.877449 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 09:07:58.877457 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 09:07:58.877464 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 09:07:58.877472 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 09:07:58.877480 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 09:07:58.877495 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 09:07:58.877503 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 09:07:58.877511 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 09:07:58.877519 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 09:07:58.877545 systemd-journald[238]: Collecting audit messages is disabled.
Jul 2 09:07:58.877564 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:07:58.877573 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 09:07:58.877581 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 09:07:58.877591 systemd-journald[238]: Journal started
Jul 2 09:07:58.877609 systemd-journald[238]: Runtime Journal (/run/log/journal/ebee371501b1411b8c39a897b4ada27b) is 5.9M, max 47.3M, 41.4M free.
Jul 2 09:07:58.869333 systemd-modules-load[239]: Inserted module 'overlay'
Jul 2 09:07:58.883073 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 09:07:58.883107 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 09:07:58.884081 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 09:07:58.888054 kernel: Bridge firewalling registered
Jul 2 09:07:58.885997 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 2 09:07:58.886989 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 09:07:58.890325 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 09:07:58.898189 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 09:07:58.899429 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 09:07:58.900745 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 09:07:58.903907 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 09:07:58.909832 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 09:07:58.910881 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 09:07:58.915203 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 09:07:58.918348 dracut-cmdline[271]: dracut-dracut-053
Jul 2 09:07:58.920776 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=339cf548fbb7b0074109371a653774e9fabae27ff3a90e4c67dbbb2f78376930
Jul 2 09:07:58.947884 systemd-resolved[282]: Positive Trust Anchors:
Jul 2 09:07:58.948786 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 09:07:58.948818 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 09:07:58.953358 systemd-resolved[282]: Defaulting to hostname 'linux'.
Jul 2 09:07:58.956718 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 09:07:58.957861 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 09:07:58.986083 kernel: SCSI subsystem initialized
Jul 2 09:07:58.991074 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 09:07:58.999078 kernel: iscsi: registered transport (tcp)
Jul 2 09:07:59.012076 kernel: iscsi: registered transport (qla4xxx)
Jul 2 09:07:59.012090 kernel: QLogic iSCSI HBA Driver
Jul 2 09:07:59.056090 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 09:07:59.066220 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 09:07:59.082076 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 09:07:59.082118 kernel: device-mapper: uevent: version 1.0.3
Jul 2 09:07:59.083074 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 09:07:59.130089 kernel: raid6: neonx8 gen() 15684 MB/s
Jul 2 09:07:59.147073 kernel: raid6: neonx4 gen() 15637 MB/s
Jul 2 09:07:59.164073 kernel: raid6: neonx2 gen() 13214 MB/s
Jul 2 09:07:59.181074 kernel: raid6: neonx1 gen() 10453 MB/s
Jul 2 09:07:59.198065 kernel: raid6: int64x8 gen() 6142 MB/s
Jul 2 09:07:59.215064 kernel: raid6: int64x4 gen() 7333 MB/s
Jul 2 09:07:59.232071 kernel: raid6: int64x2 gen() 6123 MB/s
Jul 2 09:07:59.249064 kernel: raid6: int64x1 gen() 5050 MB/s
Jul 2 09:07:59.249083 kernel: raid6: using algorithm neonx8 gen() 15684 MB/s
Jul 2 09:07:59.266076 kernel: raid6: .... xor() 11910 MB/s, rmw enabled
Jul 2 09:07:59.266088 kernel: raid6: using neon recovery algorithm
Jul 2 09:07:59.271397 kernel: xor: measuring software checksum speed
Jul 2 09:07:59.271413 kernel: 8regs : 19859 MB/sec
Jul 2 09:07:59.272272 kernel: 32regs : 19697 MB/sec
Jul 2 09:07:59.273442 kernel: arm64_neon : 27197 MB/sec
Jul 2 09:07:59.273459 kernel: xor: using function: arm64_neon (27197 MB/sec)
Jul 2 09:07:59.326085 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 09:07:59.336116 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 09:07:59.346217 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 09:07:59.356710 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jul 2 09:07:59.359775 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 09:07:59.362026 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 09:07:59.376719 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Jul 2 09:07:59.402479 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 09:07:59.417232 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 09:07:59.456404 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 09:07:59.463675 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 09:07:59.478084 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 09:07:59.479304 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 09:07:59.480481 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 09:07:59.481941 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 09:07:59.493165 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 09:07:59.500156 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 09:07:59.504253 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 2 09:07:59.513194 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 2 09:07:59.513304 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 09:07:59.513315 kernel: GPT:9289727 != 19775487
Jul 2 09:07:59.513324 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 09:07:59.513334 kernel: GPT:9289727 != 19775487
Jul 2 09:07:59.513342 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 09:07:59.513354 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 09:07:59.512347 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 09:07:59.512447 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 09:07:59.514773 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 09:07:59.517539 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 09:07:59.517695 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:07:59.520183 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 09:07:59.529069 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (518)
Jul 2 09:07:59.531076 kernel: BTRFS: device fsid ad4b0605-c88d-4cc1-aa96-32e9393058b1 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (514)
Jul 2 09:07:59.533405 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 09:07:59.546974 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 2 09:07:59.548404 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 09:07:59.557206 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 2 09:07:59.561881 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 09:07:59.565858 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 2 09:07:59.567080 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 2 09:07:59.581221 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 09:07:59.583099 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 09:07:59.588775 disk-uuid[549]: Primary Header is updated.
Jul 2 09:07:59.588775 disk-uuid[549]: Secondary Entries is updated.
Jul 2 09:07:59.588775 disk-uuid[549]: Secondary Header is updated.
Jul 2 09:07:59.592339 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 09:07:59.602317 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 09:08:00.607106 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 2 09:08:00.608290 disk-uuid[552]: The operation has completed successfully.
Jul 2 09:08:00.628154 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 09:08:00.628253 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 09:08:00.647207 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 09:08:00.650984 sh[571]: Success
Jul 2 09:08:00.665096 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 2 09:08:00.701450 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 09:08:00.702984 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 09:08:00.703870 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 09:08:00.714674 kernel: BTRFS info (device dm-0): first mount of filesystem ad4b0605-c88d-4cc1-aa96-32e9393058b1
Jul 2 09:08:00.714712 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:08:00.714730 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 09:08:00.714740 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 09:08:00.716063 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 09:08:00.718698 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 09:08:00.719810 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 09:08:00.728220 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 09:08:00.729495 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 09:08:00.737341 kernel: BTRFS info (device vda6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:08:00.737379 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:08:00.737390 kernel: BTRFS info (device vda6): using free space tree
Jul 2 09:08:00.739065 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 09:08:00.746623 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 09:08:00.748078 kernel: BTRFS info (device vda6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:08:00.753708 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 09:08:00.762210 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 09:08:00.817113 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 09:08:00.826198 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 09:08:00.850961 systemd-networkd[764]: lo: Link UP
Jul 2 09:08:00.850972 systemd-networkd[764]: lo: Gained carrier
Jul 2 09:08:00.851730 systemd-networkd[764]: Enumeration completed
Jul 2 09:08:00.852403 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 09:08:00.852681 ignition[667]: Ignition 2.18.0
Jul 2 09:08:00.853470 systemd[1]: Reached target network.target - Network.
Jul 2 09:08:00.852687 ignition[667]: Stage: fetch-offline
Jul 2 09:08:00.853827 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 09:08:00.852716 ignition[667]: no configs at "/usr/lib/ignition/base.d"
Jul 2 09:08:00.853831 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 09:08:00.852723 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:08:00.854581 systemd-networkd[764]: eth0: Link UP
Jul 2 09:08:00.852802 ignition[667]: parsed url from cmdline: ""
Jul 2 09:08:00.854586 systemd-networkd[764]: eth0: Gained carrier
Jul 2 09:08:00.852805 ignition[667]: no config URL provided
Jul 2 09:08:00.854593 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 09:08:00.852809 ignition[667]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 09:08:00.852815 ignition[667]: no config at "/usr/lib/ignition/user.ign"
Jul 2 09:08:00.852837 ignition[667]: op(1): [started] loading QEMU firmware config module
Jul 2 09:08:00.852841 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 2 09:08:00.873195 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 09:08:00.868110 ignition[667]: op(1): [finished] loading QEMU firmware config module
Jul 2 09:08:00.910272 ignition[667]: parsing config with SHA512: a7ef6ab850f38ee646e0ab911812f9625f00efa867beba15b7640c5be606756d566ee596c1c3110e42ec4c3fac1e504b989f48498a094b0a527747a7333116de
Jul 2 09:08:00.914478 unknown[667]: fetched base config from "system"
Jul 2 09:08:00.914492 unknown[667]: fetched user config from "qemu"
Jul 2 09:08:00.914918 ignition[667]: fetch-offline: fetch-offline passed
Jul 2 09:08:00.916925 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 09:08:00.914974 ignition[667]: Ignition finished successfully
Jul 2 09:08:00.917958 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 2 09:08:00.924221 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 09:08:00.934493 ignition[772]: Ignition 2.18.0
Jul 2 09:08:00.934524 ignition[772]: Stage: kargs
Jul 2 09:08:00.934691 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jul 2 09:08:00.934701 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:08:00.935532 ignition[772]: kargs: kargs passed
Jul 2 09:08:00.935575 ignition[772]: Ignition finished successfully
Jul 2 09:08:00.939639 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 09:08:00.955272 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 09:08:00.964705 ignition[781]: Ignition 2.18.0
Jul 2 09:08:00.964714 ignition[781]: Stage: disks
Jul 2 09:08:00.964850 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jul 2 09:08:00.967372 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 09:08:00.964858 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:08:00.968683 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 09:08:00.965651 ignition[781]: disks: disks passed
Jul 2 09:08:00.970042 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 09:08:00.965693 ignition[781]: Ignition finished successfully
Jul 2 09:08:00.971831 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 09:08:00.973440 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 09:08:00.974687 systemd[1]: Reached target basic.target - Basic System.
Jul 2 09:08:00.982232 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 09:08:00.993150 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 2 09:08:00.996818 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 09:08:01.000299 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 09:08:01.044939 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 09:08:01.046093 kernel: EXT4-fs (vda9): mounted filesystem c1692a6b-74d8-4bda-be0c-9d706985f1ed r/w with ordered data mode. Quota mode: none.
Jul 2 09:08:01.045990 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 09:08:01.062165 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 09:08:01.063608 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 09:08:01.064593 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 2 09:08:01.064663 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 09:08:01.064707 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 09:08:01.072084 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801)
Jul 2 09:08:01.072108 kernel: BTRFS info (device vda6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:08:01.072119 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:08:01.072129 kernel: BTRFS info (device vda6): using free space tree
Jul 2 09:08:01.070672 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 09:08:01.073997 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 09:08:01.076066 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 09:08:01.077624 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 09:08:01.118637 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 09:08:01.122760 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Jul 2 09:08:01.126592 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 09:08:01.130373 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 09:08:01.198765 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 09:08:01.210169 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 09:08:01.212406 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 09:08:01.217063 kernel: BTRFS info (device vda6): last unmount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:08:01.232080 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 09:08:01.235113 ignition[914]: INFO : Ignition 2.18.0
Jul 2 09:08:01.236127 ignition[914]: INFO : Stage: mount
Jul 2 09:08:01.236127 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 09:08:01.236127 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:08:01.238011 ignition[914]: INFO : mount: mount passed
Jul 2 09:08:01.238011 ignition[914]: INFO : Ignition finished successfully
Jul 2 09:08:01.238969 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 09:08:01.246161 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 09:08:01.713322 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 09:08:01.723234 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 09:08:01.729114 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (928)
Jul 2 09:08:01.729142 kernel: BTRFS info (device vda6): first mount of filesystem d4c1a64e-1f65-4195-ac94-8abb45f4a96e
Jul 2 09:08:01.729153 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 09:08:01.730256 kernel: BTRFS info (device vda6): using free space tree
Jul 2 09:08:01.732069 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 2 09:08:01.733226 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 09:08:01.748792 ignition[945]: INFO : Ignition 2.18.0
Jul 2 09:08:01.748792 ignition[945]: INFO : Stage: files
Jul 2 09:08:01.749965 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 09:08:01.749965 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 09:08:01.749965 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 09:08:01.752535 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 09:08:01.752535 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 09:08:01.752535 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 09:08:01.755517 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 09:08:01.755517 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 09:08:01.755517 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 09:08:01.755517 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 2 09:08:01.752963 unknown[945]: wrote ssh authorized keys file for user: core
Jul 2 09:08:01.790582 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 09:08:01.826285 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 09:08:01.827830 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 09:08:01.827830 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 09:08:01.827830 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 09:08:01.827830 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 09:08:01.827830 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 09:08:01.827830 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 09:08:01.827830 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 09:08:01.827830 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 09:08:01.827830 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 09:08:01.827830 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 09:08:01.827830 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 09:08:01.827830 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 09:08:01.827830 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 09:08:01.827830 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Jul 2 09:08:02.138433 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 2 09:08:02.263268 systemd-networkd[764]: eth0: Gained IPv6LL
Jul 2 09:08:02.371682 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Jul 2 09:08:02.371682 ignition[945]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 2 09:08:02.375289 ignition[945]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 09:08:02.375289 ignition[945]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 09:08:02.375289 ignition[945]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 2 09:08:02.375289 ignition[945]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 2 09:08:02.375289 ignition[945]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 09:08:02.375289 ignition[945]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 09:08:02.375289 ignition[945]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 2 09:08:02.375289 ignition[945]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 2 09:08:02.395232 ignition[945]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 09:08:02.398673 ignition[945]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 09:08:02.400169 ignition[945]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 2 09:08:02.400169 ignition[945]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 09:08:02.400169 ignition[945]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 09:08:02.400169 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 09:08:02.400169 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 09:08:02.400169 ignition[945]: INFO : files: files passed
Jul 2 09:08:02.400169 ignition[945]: INFO : Ignition finished successfully
Jul 2 09:08:02.402143 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 09:08:02.411214 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 09:08:02.413333 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 09:08:02.416122 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 09:08:02.416220 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 09:08:02.419941 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 2 09:08:02.423270 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 09:08:02.423270 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 09:08:02.426179 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 09:08:02.425689 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 09:08:02.427450 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 09:08:02.442188 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 09:08:02.461486 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 09:08:02.461635 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 09:08:02.463807 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 09:08:02.464886 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 09:08:02.466911 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 09:08:02.467667 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 09:08:02.483589 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 09:08:02.491203 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 09:08:02.498598 systemd[1]: Stopped target network.target - Network.
Jul 2 09:08:02.499561 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 09:08:02.501254 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 09:08:02.503228 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 09:08:02.504983 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 09:08:02.505116 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 09:08:02.507651 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 09:08:02.508721 systemd[1]: Stopped target basic.target - Basic System. Jul 2 09:08:02.510589 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 09:08:02.512381 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 09:08:02.514167 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 09:08:02.516165 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 09:08:02.518116 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 09:08:02.520186 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 09:08:02.521995 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 09:08:02.523989 systemd[1]: Stopped target swap.target - Swaps. Jul 2 09:08:02.525592 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 09:08:02.525710 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 09:08:02.528047 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 09:08:02.529981 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 09:08:02.531926 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 2 09:08:02.535114 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 09:08:02.536373 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 09:08:02.536490 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 2 09:08:02.539277 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 09:08:02.539393 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 09:08:02.541397 systemd[1]: Stopped target paths.target - Path Units. Jul 2 09:08:02.542971 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 09:08:02.546112 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 09:08:02.547402 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 09:08:02.549490 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 09:08:02.551037 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 09:08:02.551145 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 09:08:02.552722 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 09:08:02.552800 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 09:08:02.554426 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 09:08:02.554540 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 09:08:02.556290 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 09:08:02.556386 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 09:08:02.564261 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 09:08:02.565142 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 09:08:02.565271 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 09:08:02.569788 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
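Aside: every Ignition operation in the transcript above carries an op(N) id and is bracketed by [started]/[finished] entries, so completeness is mechanically checkable. A minimal Python sketch, assuming the raw journal text is piped on stdin (an illustration of the log format, not part of the boot flow):

    import re
    import sys

    # Matches e.g.: op(a): [started] writing file "/sysroot/..."
    OP = re.compile(r'op\(([0-9a-f]+)\): \[(started|finished)\]')

    open_ops = {}  # op id -> the entry that started it
    for line in sys.stdin:
        for op_id, state in OP.findall(line):
            if state == 'started':
                open_ops.setdefault(op_id, line.strip())
            else:
                open_ops.pop(op_id, None)

    # Anything left here logged [started] but never [finished].
    for op_id, entry in open_ops.items():
        print(f'op({op_id}) did not finish: {entry[:100]}')

On the transcript above this prints nothing: ops 6 through 12, including the nested op(b)/op(c) and op(d)/op(e) unit writes, all close before "files passed".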
Jul 2 09:08:02.570856 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 09:08:02.574441 ignition[999]: INFO : Ignition 2.18.0 Jul 2 09:08:02.574441 ignition[999]: INFO : Stage: umount Jul 2 09:08:02.579097 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 09:08:02.579097 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 09:08:02.579097 ignition[999]: INFO : umount: umount passed Jul 2 09:08:02.579097 ignition[999]: INFO : Ignition finished successfully Jul 2 09:08:02.574943 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 09:08:02.575906 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 09:08:02.576043 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 09:08:02.577336 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 09:08:02.577433 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 09:08:02.578114 systemd-networkd[764]: eth0: DHCPv6 lease lost Jul 2 09:08:02.585524 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 09:08:02.585624 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 09:08:02.587486 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 09:08:02.587609 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 09:08:02.590327 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 09:08:02.590461 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 09:08:02.592979 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 09:08:02.594101 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 2 09:08:02.595982 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 09:08:02.598469 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 09:08:02.598503 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 09:08:02.600086 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 09:08:02.600136 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 09:08:02.601913 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 09:08:02.601957 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 09:08:02.603402 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 09:08:02.603440 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 09:08:02.604833 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 2 09:08:02.604874 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 2 09:08:02.616165 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 09:08:02.616833 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 09:08:02.616889 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 09:08:02.618646 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 09:08:02.618689 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 09:08:02.620312 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 09:08:02.620354 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 09:08:02.622149 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jul 2 09:08:02.622189 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 09:08:02.623808 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 09:08:02.633438 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 09:08:02.633557 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 09:08:02.643820 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 09:08:02.643964 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 09:08:02.645719 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 09:08:02.645758 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 09:08:02.646974 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 09:08:02.647003 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 09:08:02.648339 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 09:08:02.648384 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 09:08:02.650354 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 09:08:02.650395 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 09:08:02.652280 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 09:08:02.652324 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 09:08:02.665214 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 09:08:02.665973 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 09:08:02.666022 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 09:08:02.667588 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 09:08:02.667625 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 09:08:02.669269 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 09:08:02.669370 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 09:08:02.670581 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 09:08:02.670667 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 09:08:02.672587 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 09:08:02.673342 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 09:08:02.673401 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 09:08:02.675511 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 09:08:02.685910 systemd[1]: Switching root. Jul 2 09:08:02.711926 systemd-journald[238]: Journal stopped Jul 2 09:08:03.415562 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
Jul 2 09:08:03.415619 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 09:08:03.415632 kernel: SELinux: policy capability open_perms=1 Jul 2 09:08:03.415642 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 09:08:03.415656 kernel: SELinux: policy capability always_check_network=0 Jul 2 09:08:03.415666 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 09:08:03.415676 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 09:08:03.415688 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 09:08:03.415698 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 09:08:03.415707 kernel: audit: type=1403 audit(1719911282.862:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 09:08:03.415718 systemd[1]: Successfully loaded SELinux policy in 29.773ms. Jul 2 09:08:03.415735 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.006ms. Jul 2 09:08:03.415747 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 09:08:03.415758 systemd[1]: Detected virtualization kvm. Jul 2 09:08:03.415768 systemd[1]: Detected architecture arm64. Jul 2 09:08:03.415781 systemd[1]: Detected first boot. Jul 2 09:08:03.415792 systemd[1]: Initializing machine ID from VM UUID. Jul 2 09:08:03.415803 zram_generator::config[1045]: No configuration found. Jul 2 09:08:03.415814 systemd[1]: Populated /etc with preset unit settings. Jul 2 09:08:03.415824 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 09:08:03.415837 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 2 09:08:03.415848 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 09:08:03.415858 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 09:08:03.415869 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 09:08:03.415882 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 09:08:03.415893 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 09:08:03.415904 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 09:08:03.415914 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 09:08:03.415925 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 09:08:03.415935 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 09:08:03.415946 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 09:08:03.415957 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 09:08:03.415970 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 09:08:03.415981 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 09:08:03.415991 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 09:08:03.416002 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
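The systemd 255 feature string above records the build options as +/- flags, and several later messages trace straight back to it: this build carries -ACL, which is consistent with systemd-tmpfiles repeatedly reporting "ACLs are not supported, ignoring" further down. A small sketch splitting the string into enabled and disabled sets (the flag list is copied verbatim from the entry above):

    FLAGS = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
             "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
             "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
             "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT")

    enabled = {f[1:] for f in FLAGS.split() if f[0] == '+'}
    disabled = {f[1:] for f in FLAGS.split() if f[0] == '-'}
    print('compiled out:', ' '.join(sorted(disabled)))
    # -> compiled out: ACL APPARMOR BPF_FRAMEWORK FIDO2 GNUTLS IDN P11KIT
    #    PWQUALITY QRENCODE SYSVINIT XKBCOMMON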
Jul 2 09:08:03.416012 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 2 09:08:03.416023 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 09:08:03.416034 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 2 09:08:03.416044 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 2 09:08:03.416234 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 2 09:08:03.416259 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 09:08:03.416270 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 09:08:03.416281 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 09:08:03.416291 systemd[1]: Reached target slices.target - Slice Units. Jul 2 09:08:03.416302 systemd[1]: Reached target swap.target - Swaps. Jul 2 09:08:03.416312 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 09:08:03.416322 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 09:08:03.416337 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 09:08:03.416350 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 09:08:03.416361 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 09:08:03.416372 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 09:08:03.416383 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 09:08:03.416393 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 09:08:03.416403 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 09:08:03.416414 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 09:08:03.416424 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 09:08:03.416434 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 2 09:08:03.416447 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 09:08:03.416458 systemd[1]: Reached target machines.target - Containers. Jul 2 09:08:03.416468 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 09:08:03.416478 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 09:08:03.416490 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 09:08:03.416500 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 09:08:03.416510 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 09:08:03.416527 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 09:08:03.416541 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 09:08:03.416551 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 09:08:03.416561 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 09:08:03.416572 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jul 2 09:08:03.416582 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 09:08:03.416593 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 2 09:08:03.416603 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 09:08:03.416612 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 09:08:03.416624 kernel: fuse: init (API version 7.39) Jul 2 09:08:03.416636 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 09:08:03.416647 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 09:08:03.416656 kernel: ACPI: bus type drm_connector registered Jul 2 09:08:03.416667 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 09:08:03.416676 kernel: loop: module loaded Jul 2 09:08:03.416686 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 09:08:03.416696 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 09:08:03.416726 systemd-journald[1109]: Collecting audit messages is disabled. Jul 2 09:08:03.416768 systemd-journald[1109]: Journal started Jul 2 09:08:03.416790 systemd-journald[1109]: Runtime Journal (/run/log/journal/ebee371501b1411b8c39a897b4ada27b) is 5.9M, max 47.3M, 41.4M free. Jul 2 09:08:03.227863 systemd[1]: Queued start job for default target multi-user.target. Jul 2 09:08:03.248441 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 2 09:08:03.248853 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 09:08:03.419462 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 09:08:03.419496 systemd[1]: Stopped verity-setup.service. Jul 2 09:08:03.423348 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 09:08:03.423975 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 09:08:03.425186 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 09:08:03.426465 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 09:08:03.427648 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 09:08:03.428899 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 09:08:03.430175 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 09:08:03.432159 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 09:08:03.433325 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 09:08:03.434532 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 09:08:03.434691 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 09:08:03.435825 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 09:08:03.435964 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 09:08:03.437126 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 09:08:03.437264 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 09:08:03.438506 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 09:08:03.438663 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 09:08:03.439805 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 09:08:03.439945 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jul 2 09:08:03.441156 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 09:08:03.441284 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 09:08:03.442326 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 09:08:03.443452 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 09:08:03.444796 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 09:08:03.456722 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 09:08:03.469187 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 2 09:08:03.471264 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 09:08:03.472115 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 09:08:03.472155 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 09:08:03.473906 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 2 09:08:03.475854 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 2 09:08:03.477736 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 09:08:03.478614 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 09:08:03.481252 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 09:08:03.482947 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 09:08:03.483836 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 09:08:03.487209 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 2 09:08:03.490067 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 09:08:03.493307 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 09:08:03.495490 systemd-journald[1109]: Time spent on flushing to /var/log/journal/ebee371501b1411b8c39a897b4ada27b is 19.935ms for 851 entries. Jul 2 09:08:03.495490 systemd-journald[1109]: System Journal (/var/log/journal/ebee371501b1411b8c39a897b4ada27b) is 8.0M, max 195.6M, 187.6M free. Jul 2 09:08:03.536282 systemd-journald[1109]: Received client request to flush runtime journal. Jul 2 09:08:03.536328 kernel: loop0: detected capacity change from 0 to 113672 Jul 2 09:08:03.536342 kernel: block loop0: the capability attribute has been deprecated. Jul 2 09:08:03.496492 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 09:08:03.500363 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 2 09:08:03.503796 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 09:08:03.504899 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 09:08:03.506070 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 09:08:03.507165 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 2 09:08:03.508312 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
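The journal mixes timing units: "in 29.773ms" and "in 9.006ms" earlier, "Reloading finished in 203 ms" and "booted in 0.046602s" later, plus the flush statistic just above (19.935 ms for 851 entries, i.e. roughly 23 µs per entry). A throwaway sketch, again assuming journal text on stdin, that normalizes such figures to milliseconds for comparison:

    import re
    import sys

    # e.g. "in 29.773ms", "in 203 ms", "booted in 0.046602s"
    DURATION = re.compile(r'\bin (\d+(?:\.\d+)?) ?(ms|s)\b')
    TO_MS = {'ms': 1.0, 's': 1000.0}

    for line in sys.stdin:
        for value, unit in DURATION.findall(line):
            print(f'{float(value) * TO_MS[unit]:10.3f} ms  <- {line.strip()[:80]}')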
Jul 2 09:08:03.513124 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 09:08:03.521287 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 2 09:08:03.523280 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 09:08:03.542642 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 09:08:03.543098 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 09:08:03.544993 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 09:08:03.549963 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 09:08:03.558848 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 09:08:03.560170 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 09:08:03.566885 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 09:08:03.583315 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 09:08:03.586263 kernel: loop1: detected capacity change from 0 to 194512 Jul 2 09:08:03.605121 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jul 2 09:08:03.605136 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Jul 2 09:08:03.610300 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 09:08:03.623078 kernel: loop2: detected capacity change from 0 to 59672 Jul 2 09:08:03.662085 kernel: loop3: detected capacity change from 0 to 113672 Jul 2 09:08:03.667073 kernel: loop4: detected capacity change from 0 to 194512 Jul 2 09:08:03.674084 kernel: loop5: detected capacity change from 0 to 59672 Jul 2 09:08:03.677448 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 2 09:08:03.677923 (sd-merge)[1179]: Merged extensions into '/usr'. Jul 2 09:08:03.683329 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... Jul 2 09:08:03.683344 systemd[1]: Reloading... Jul 2 09:08:03.732081 zram_generator::config[1201]: No configuration found. Jul 2 09:08:03.804643 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 09:08:03.848550 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 09:08:03.887446 systemd[1]: Reloading finished in 203 ms. Jul 2 09:08:03.918462 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 09:08:03.919722 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 09:08:03.935544 systemd[1]: Starting ensure-sysext.service... Jul 2 09:08:03.937282 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 09:08:03.950399 systemd[1]: Reloading requested from client PID 1238 ('systemctl') (unit ensure-sysext.service)... Jul 2 09:08:03.950418 systemd[1]: Reloading... Jul 2 09:08:03.957003 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jul 2 09:08:03.957393 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 09:08:03.958023 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 09:08:03.958251 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jul 2 09:08:03.958302 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Jul 2 09:08:03.960478 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 09:08:03.960490 systemd-tmpfiles[1240]: Skipping /boot Jul 2 09:08:03.966967 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 09:08:03.966987 systemd-tmpfiles[1240]: Skipping /boot Jul 2 09:08:03.995135 zram_generator::config[1262]: No configuration found. Jul 2 09:08:04.075864 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 09:08:04.113898 systemd[1]: Reloading finished in 163 ms. Jul 2 09:08:04.129045 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 09:08:04.148518 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 09:08:04.154295 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 09:08:04.156600 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 09:08:04.158518 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 09:08:04.162380 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 09:08:04.168636 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 09:08:04.171043 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 09:08:04.175928 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 09:08:04.178383 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 09:08:04.183872 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 09:08:04.193240 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 09:08:04.194160 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 09:08:04.195284 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 09:08:04.197176 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 09:08:04.199091 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 09:08:04.200283 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 09:08:04.200430 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 09:08:04.202182 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 09:08:04.202308 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 09:08:04.210608 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 09:08:04.218339 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
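A pattern worth noting a few entries back: the loop devices come up in matching size pairs (loop0/loop3 at 113672, loop1/loop4 at 194512, loop2/loop5 at 59672), consistent with the same three sysext images that sd-merge lists (containerd-flatcar, docker-flatcar, kubernetes) being attached once before and once during the merge into /usr. Assuming the kernel reports loop capacity in its usual 512-byte sectors, the image sizes work out as below; which name goes with which size is not stated in the log:

    # capacity figures copied from the "detected capacity change" entries above
    for sectors in (113672, 194512, 59672):
        print(f'{sectors:7d} sectors = {sectors * 512 / 2**20:5.1f} MiB')
    # -> 55.5 MiB, 95.0 MiB, 29.1 MiB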
Jul 2 09:08:04.219909 systemd-udevd[1312]: Using default interface naming scheme 'v255'. Jul 2 09:08:04.220505 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 09:08:04.223202 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 09:08:04.224153 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 09:08:04.226224 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 09:08:04.231595 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 09:08:04.233818 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 09:08:04.235509 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 09:08:04.237155 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 09:08:04.238574 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 09:08:04.238751 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 09:08:04.240449 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 09:08:04.240596 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 09:08:04.246097 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 09:08:04.254323 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 09:08:04.256023 augenrules[1338]: No rules Jul 2 09:08:04.263416 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 09:08:04.265736 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 09:08:04.273916 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 09:08:04.276935 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 09:08:04.278935 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 09:08:04.279658 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 09:08:04.281254 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 09:08:04.283861 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 09:08:04.286585 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 09:08:04.287958 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 09:08:04.288973 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 09:08:04.290907 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 09:08:04.291029 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 09:08:04.292997 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 09:08:04.294124 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 09:08:04.295517 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 09:08:04.295649 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 09:08:04.300921 systemd[1]: Finished ensure-sysext.service. 
Jul 2 09:08:04.320786 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1363) Jul 2 09:08:04.320878 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1365) Jul 2 09:08:04.317915 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 2 09:08:04.337303 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 09:08:04.339392 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 09:08:04.339469 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 09:08:04.342790 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 2 09:08:04.347205 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 09:08:04.362786 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 2 09:08:04.367269 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 09:08:04.383832 systemd-resolved[1306]: Positive Trust Anchors: Jul 2 09:08:04.383843 systemd-resolved[1306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 09:08:04.383874 systemd-resolved[1306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 09:08:04.394239 systemd-resolved[1306]: Defaulting to hostname 'linux'. Jul 2 09:08:04.396698 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 09:08:04.397746 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 09:08:04.416012 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 09:08:04.421505 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 2 09:08:04.424899 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 09:08:04.426503 systemd-networkd[1381]: lo: Link UP Jul 2 09:08:04.426759 systemd-networkd[1381]: lo: Gained carrier Jul 2 09:08:04.427641 systemd-networkd[1381]: Enumeration completed Jul 2 09:08:04.437324 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 09:08:04.438265 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 09:08:04.439459 systemd[1]: Reached target network.target - Network. Jul 2 09:08:04.441478 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 09:08:04.449549 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Jul 2 09:08:04.449818 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 09:08:04.449916 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 09:08:04.450830 systemd-networkd[1381]: eth0: Link UP Jul 2 09:08:04.450906 systemd-networkd[1381]: eth0: Gained carrier Jul 2 09:08:04.450999 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 09:08:04.455146 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 09:08:04.474243 systemd-networkd[1381]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 09:08:04.475825 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Jul 2 09:08:04.477187 systemd-timesyncd[1384]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 2 09:08:04.477320 systemd-timesyncd[1384]: Initial clock synchronization to Tue 2024-07-02 09:08:04.711304 UTC. Jul 2 09:08:04.485986 lvm[1398]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 09:08:04.493126 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 09:08:04.526143 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 09:08:04.527383 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 09:08:04.528337 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 09:08:04.529226 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 09:08:04.530265 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 09:08:04.531400 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 09:08:04.532737 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 09:08:04.533756 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 09:08:04.534741 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 09:08:04.534776 systemd[1]: Reached target paths.target - Path Units. Jul 2 09:08:04.535538 systemd[1]: Reached target timers.target - Timer Units. Jul 2 09:08:04.536982 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 09:08:04.539120 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 09:08:04.548072 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 09:08:04.550150 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 09:08:04.551480 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 09:08:04.552475 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 09:08:04.553252 systemd[1]: Reached target basic.target - Basic System. Jul 2 09:08:04.554066 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 09:08:04.554100 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 09:08:04.555070 systemd[1]: Starting containerd.service - containerd container runtime... 
Jul 2 09:08:04.558231 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 09:08:04.559258 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 09:08:04.560920 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 09:08:04.569322 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 09:08:04.570595 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 09:08:04.571136 jq[1409]: false Jul 2 09:08:04.571771 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 09:08:04.577266 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 09:08:04.579555 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 09:08:04.582864 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 09:08:04.589692 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 09:08:04.590841 dbus-daemon[1408]: [system] SELinux support is enabled Jul 2 09:08:04.593411 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 09:08:04.593816 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 09:08:04.595702 extend-filesystems[1410]: Found loop3 Jul 2 09:08:04.595702 extend-filesystems[1410]: Found loop4 Jul 2 09:08:04.595702 extend-filesystems[1410]: Found loop5 Jul 2 09:08:04.595702 extend-filesystems[1410]: Found vda Jul 2 09:08:04.595702 extend-filesystems[1410]: Found vda1 Jul 2 09:08:04.595702 extend-filesystems[1410]: Found vda2 Jul 2 09:08:04.595702 extend-filesystems[1410]: Found vda3 Jul 2 09:08:04.595702 extend-filesystems[1410]: Found usr Jul 2 09:08:04.595702 extend-filesystems[1410]: Found vda4 Jul 2 09:08:04.595702 extend-filesystems[1410]: Found vda6 Jul 2 09:08:04.595702 extend-filesystems[1410]: Found vda7 Jul 2 09:08:04.595702 extend-filesystems[1410]: Found vda9 Jul 2 09:08:04.595702 extend-filesystems[1410]: Checking size of /dev/vda9 Jul 2 09:08:04.617460 extend-filesystems[1410]: Resized partition /dev/vda9 Jul 2 09:08:04.600239 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 09:08:04.602770 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 09:08:04.604696 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 09:08:04.618824 jq[1428]: true Jul 2 09:08:04.609292 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 09:08:04.612380 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 09:08:04.612524 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 09:08:04.612777 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 09:08:04.612910 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 09:08:04.624062 extend-filesystems[1433]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 09:08:04.630790 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 2 09:08:04.625979 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
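The EXT4 message just above grows /dev/vda9 from 553472 to 1864699 blocks, and the resize2fs output that follows confirms 4 KiB blocks, so the root filesystem is expanded on-line from about 2.1 GiB to about 7.1 GiB:

    old_blocks, new_blocks, block_size = 553472, 1864699, 4096
    print(old_blocks * block_size / 2**30)   # ~2.11 GiB before the resize
    print(new_blocks * block_size / 2**30)   # ~7.11 GiB after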
Jul 2 09:08:04.626450 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 09:08:04.644959 update_engine[1422]: I0702 09:08:04.641558 1422 main.cc:92] Flatcar Update Engine starting Jul 2 09:08:04.651510 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1363) Jul 2 09:08:04.643136 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 09:08:04.643167 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 09:08:04.644621 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 09:08:04.644641 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 09:08:04.652273 update_engine[1422]: I0702 09:08:04.652226 1422 update_check_scheduler.cc:74] Next update check in 11m3s Jul 2 09:08:04.653387 jq[1434]: true Jul 2 09:08:04.654817 (ntainerd)[1436]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 09:08:04.671744 tar[1432]: linux-arm64/helm Jul 2 09:08:04.674289 systemd[1]: Started update-engine.service - Update Engine. Jul 2 09:08:04.674677 systemd-logind[1421]: Watching system buttons on /dev/input/event0 (Power Button) Jul 2 09:08:04.676632 systemd-logind[1421]: New seat seat0. Jul 2 09:08:04.679173 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 09:08:04.681193 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 09:08:04.687116 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 2 09:08:04.702289 extend-filesystems[1433]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 09:08:04.702289 extend-filesystems[1433]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 09:08:04.702289 extend-filesystems[1433]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 2 09:08:04.707544 extend-filesystems[1410]: Resized filesystem in /dev/vda9 Jul 2 09:08:04.709955 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 09:08:04.711109 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 09:08:04.738542 bash[1462]: Updated "/home/core/.ssh/authorized_keys" Jul 2 09:08:04.742100 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 09:08:04.744116 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 2 09:08:04.752746 locksmithd[1448]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 09:08:04.858094 containerd[1436]: time="2024-07-02T09:08:04.856437000Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 09:08:04.885703 containerd[1436]: time="2024-07-02T09:08:04.885516560Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 09:08:04.885703 containerd[1436]: time="2024-07-02T09:08:04.885568920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jul 2 09:08:04.887557 containerd[1436]: time="2024-07-02T09:08:04.887502120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 09:08:04.887557 containerd[1436]: time="2024-07-02T09:08:04.887551120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 09:08:04.887900 containerd[1436]: time="2024-07-02T09:08:04.887863360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 09:08:04.887900 containerd[1436]: time="2024-07-02T09:08:04.887895240Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 09:08:04.888096 containerd[1436]: time="2024-07-02T09:08:04.888077840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 09:08:04.888163 containerd[1436]: time="2024-07-02T09:08:04.888146880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 09:08:04.888183 containerd[1436]: time="2024-07-02T09:08:04.888164640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 09:08:04.888323 containerd[1436]: time="2024-07-02T09:08:04.888305120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 09:08:04.888673 containerd[1436]: time="2024-07-02T09:08:04.888638800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 09:08:04.888705 containerd[1436]: time="2024-07-02T09:08:04.888671920Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 09:08:04.888705 containerd[1436]: time="2024-07-02T09:08:04.888683440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 09:08:04.888867 containerd[1436]: time="2024-07-02T09:08:04.888844680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 09:08:04.888893 containerd[1436]: time="2024-07-02T09:08:04.888868000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 09:08:04.889004 containerd[1436]: time="2024-07-02T09:08:04.888983560Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 09:08:04.889024 containerd[1436]: time="2024-07-02T09:08:04.889004840Z" level=info msg="metadata content store policy set" policy=shared Jul 2 09:08:04.892833 containerd[1436]: time="2024-07-02T09:08:04.892796160Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jul 2 09:08:04.892884 containerd[1436]: time="2024-07-02T09:08:04.892842960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 09:08:04.892884 containerd[1436]: time="2024-07-02T09:08:04.892856880Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 09:08:04.892970 containerd[1436]: time="2024-07-02T09:08:04.892939400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 09:08:04.892970 containerd[1436]: time="2024-07-02T09:08:04.892966760Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 09:08:04.893056 containerd[1436]: time="2024-07-02T09:08:04.892978720Z" level=info msg="NRI interface is disabled by configuration." Jul 2 09:08:04.893082 containerd[1436]: time="2024-07-02T09:08:04.893064920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 09:08:04.893345 containerd[1436]: time="2024-07-02T09:08:04.893312160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 09:08:04.893383 containerd[1436]: time="2024-07-02T09:08:04.893353040Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 09:08:04.893383 containerd[1436]: time="2024-07-02T09:08:04.893369440Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 09:08:04.893421 containerd[1436]: time="2024-07-02T09:08:04.893383640Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 09:08:04.893421 containerd[1436]: time="2024-07-02T09:08:04.893397680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 09:08:04.893457 containerd[1436]: time="2024-07-02T09:08:04.893422800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 09:08:04.893457 containerd[1436]: time="2024-07-02T09:08:04.893437960Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 09:08:04.893457 containerd[1436]: time="2024-07-02T09:08:04.893451040Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 09:08:04.893511 containerd[1436]: time="2024-07-02T09:08:04.893472080Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 09:08:04.893572 containerd[1436]: time="2024-07-02T09:08:04.893488080Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 09:08:04.893596 containerd[1436]: time="2024-07-02T09:08:04.893577400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 09:08:04.893596 containerd[1436]: time="2024-07-02T09:08:04.893592200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 09:08:04.893797 containerd[1436]: time="2024-07-02T09:08:04.893765760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jul 2 09:08:04.894257 containerd[1436]: time="2024-07-02T09:08:04.894187280Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 09:08:04.894334 containerd[1436]: time="2024-07-02T09:08:04.894316400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 09:08:04.894363 containerd[1436]: time="2024-07-02T09:08:04.894340360Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 09:08:04.894384 containerd[1436]: time="2024-07-02T09:08:04.894363720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 09:08:04.894718 containerd[1436]: time="2024-07-02T09:08:04.894686720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 09:08:04.894802 containerd[1436]: time="2024-07-02T09:08:04.894786640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 09:08:04.894822 containerd[1436]: time="2024-07-02T09:08:04.894807760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 09:08:04.894841 containerd[1436]: time="2024-07-02T09:08:04.894821520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 09:08:04.894895 containerd[1436]: time="2024-07-02T09:08:04.894834120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 09:08:04.894915 containerd[1436]: time="2024-07-02T09:08:04.894904000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 09:08:04.894938 containerd[1436]: time="2024-07-02T09:08:04.894918080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 09:08:04.894938 containerd[1436]: time="2024-07-02T09:08:04.894929920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 09:08:04.894983 containerd[1436]: time="2024-07-02T09:08:04.894966840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 09:08:04.895408 containerd[1436]: time="2024-07-02T09:08:04.895375000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 09:08:04.895431 containerd[1436]: time="2024-07-02T09:08:04.895411360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 09:08:04.895431 containerd[1436]: time="2024-07-02T09:08:04.895425360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 09:08:04.895465 containerd[1436]: time="2024-07-02T09:08:04.895439760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 09:08:04.895523 containerd[1436]: time="2024-07-02T09:08:04.895503040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 09:08:04.895549 containerd[1436]: time="2024-07-02T09:08:04.895537120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 09:08:04.895569 containerd[1436]: time="2024-07-02T09:08:04.895552080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jul 2 09:08:04.895569 containerd[1436]: time="2024-07-02T09:08:04.895563400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 09:08:04.896026 containerd[1436]: time="2024-07-02T09:08:04.895967400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 09:08:04.896146 containerd[1436]: time="2024-07-02T09:08:04.896036440Z" level=info msg="Connect containerd service" Jul 2 09:08:04.896146 containerd[1436]: time="2024-07-02T09:08:04.896082240Z" level=info msg="using legacy CRI server" Jul 2 09:08:04.896146 containerd[1436]: time="2024-07-02T09:08:04.896091720Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 09:08:04.896252 containerd[1436]: time="2024-07-02T09:08:04.896234920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 09:08:04.897312 containerd[1436]: time="2024-07-02T09:08:04.897273760Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up 
network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 09:08:04.897418 containerd[1436]: time="2024-07-02T09:08:04.897392280Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 09:08:04.897565 containerd[1436]: time="2024-07-02T09:08:04.897421200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 09:08:04.897620 containerd[1436]: time="2024-07-02T09:08:04.897603880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 09:08:04.897641 containerd[1436]: time="2024-07-02T09:08:04.897626200Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 09:08:04.897767 containerd[1436]: time="2024-07-02T09:08:04.897518000Z" level=info msg="Start subscribing containerd event" Jul 2 09:08:04.897790 containerd[1436]: time="2024-07-02T09:08:04.897775240Z" level=info msg="Start recovering state" Jul 2 09:08:04.897850 containerd[1436]: time="2024-07-02T09:08:04.897837440Z" level=info msg="Start event monitor" Jul 2 09:08:04.897870 containerd[1436]: time="2024-07-02T09:08:04.897852880Z" level=info msg="Start snapshots syncer" Jul 2 09:08:04.897870 containerd[1436]: time="2024-07-02T09:08:04.897861600Z" level=info msg="Start cni network conf syncer for default" Jul 2 09:08:04.897870 containerd[1436]: time="2024-07-02T09:08:04.897868440Z" level=info msg="Start streaming server" Jul 2 09:08:04.898718 containerd[1436]: time="2024-07-02T09:08:04.898692960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 09:08:04.898765 containerd[1436]: time="2024-07-02T09:08:04.898749400Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 09:08:04.898923 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 09:08:04.901154 containerd[1436]: time="2024-07-02T09:08:04.901115120Z" level=info msg="containerd successfully booted in 0.046602s" Jul 2 09:08:05.032258 tar[1432]: linux-arm64/LICENSE Jul 2 09:08:05.032354 tar[1432]: linux-arm64/README.md Jul 2 09:08:05.045129 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 09:08:05.223122 sshd_keygen[1427]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 09:08:05.242922 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 09:08:05.261290 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 09:08:05.267436 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 09:08:05.269110 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 09:08:05.271874 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 09:08:05.285146 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 09:08:05.288285 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 09:08:05.290544 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 2 09:08:05.291979 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 09:08:05.529131 systemd-networkd[1381]: eth0: Gained IPv6LL Jul 2 09:08:05.532109 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
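The "failed to load cni during init" error above is expected on a first boot: the CRI plugin's CniConfig (dumped at 09:08:04) points at /etc/cni/net.d, and nothing has written a network config there yet; the "cni network conf syncer" started just afterwards keeps watching until a CNI addon installs one. For illustration only, a minimal bridge conflist of the kind such an addon would drop in (file name and subnet are hypothetical, not from this host):

    # hypothetical /etc/cni/net.d/10-containerd-net.conflist; real clusters
    # get this file from their CNI addon (flannel, Calico, ...)
    cat <<'EOF' >/etc/cni/net.d/10-containerd-net.conflist
    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF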
Jul 2 09:08:05.533509 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 09:08:05.545348 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 2 09:08:05.547680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:08:05.549569 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 09:08:05.564128 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 2 09:08:05.564308 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 2 09:08:05.566017 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 09:08:05.567334 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 09:08:06.148134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:08:06.149371 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 09:08:06.152575 (kubelet)[1520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 09:08:06.153191 systemd[1]: Startup finished in 531ms (kernel) + 4.159s (initrd) + 3.325s (userspace) = 8.017s. Jul 2 09:08:06.710856 kubelet[1520]: E0702 09:08:06.710324 1520 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 09:08:06.713879 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 09:08:06.714033 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 09:08:11.628677 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 09:08:11.629726 systemd[1]: Started sshd@0-10.0.0.65:22-10.0.0.1:36778.service - OpenSSH per-connection server daemon (10.0.0.1:36778). Jul 2 09:08:11.684201 sshd[1534]: Accepted publickey for core from 10.0.0.1 port 36778 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:08:11.685802 sshd[1534]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:08:11.697338 systemd-logind[1421]: New session 1 of user core. Jul 2 09:08:11.698266 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 09:08:11.707353 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 09:08:11.716165 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 09:08:11.718153 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 09:08:11.724022 (systemd)[1538]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:08:11.794264 systemd[1538]: Queued start job for default target default.target. Jul 2 09:08:11.804954 systemd[1538]: Created slice app.slice - User Application Slice. Jul 2 09:08:11.804981 systemd[1538]: Reached target paths.target - Paths. Jul 2 09:08:11.804993 systemd[1538]: Reached target timers.target - Timers. Jul 2 09:08:11.806079 systemd[1538]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 09:08:11.815046 systemd[1538]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 09:08:11.815114 systemd[1538]: Reached target sockets.target - Sockets. 
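The kubelet exit above is the expected first-boot failure mode: /var/lib/kubelet/config.yaml is only written when kubeadm init or kubeadm join runs, so until then every start dies on the missing file and systemd reschedules it (the restart jobs appear below). A hypothetical minimal KubeletConfiguration of the kind kubeadm generates there, shown only to make the error concrete:

    # sketch only; kubeadm init/join writes the real /var/lib/kubelet/config.yaml
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                     # matches SystemdCgroup:true in the containerd config above
    staticPodPath: /etc/kubernetes/manifests  # where static pods are read from (see below)
    EOF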
Jul 2 09:08:11.815126 systemd[1538]: Reached target basic.target - Basic System. Jul 2 09:08:11.815160 systemd[1538]: Reached target default.target - Main User Target. Jul 2 09:08:11.815184 systemd[1538]: Startup finished in 86ms. Jul 2 09:08:11.815409 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 09:08:11.816638 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 09:08:11.874452 systemd[1]: Started sshd@1-10.0.0.65:22-10.0.0.1:36780.service - OpenSSH per-connection server daemon (10.0.0.1:36780). Jul 2 09:08:11.920295 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 36780 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:08:11.921420 sshd[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:08:11.925120 systemd-logind[1421]: New session 2 of user core. Jul 2 09:08:11.938217 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 09:08:11.988781 sshd[1549]: pam_unix(sshd:session): session closed for user core Jul 2 09:08:11.999214 systemd[1]: sshd@1-10.0.0.65:22-10.0.0.1:36780.service: Deactivated successfully. Jul 2 09:08:12.000906 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 09:08:12.002182 systemd-logind[1421]: Session 2 logged out. Waiting for processes to exit. Jul 2 09:08:12.003711 systemd[1]: Started sshd@2-10.0.0.65:22-10.0.0.1:36786.service - OpenSSH per-connection server daemon (10.0.0.1:36786). Jul 2 09:08:12.004774 systemd-logind[1421]: Removed session 2. Jul 2 09:08:12.038873 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 36786 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:08:12.039957 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:08:12.043254 systemd-logind[1421]: New session 3 of user core. Jul 2 09:08:12.058212 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 09:08:12.105959 sshd[1556]: pam_unix(sshd:session): session closed for user core Jul 2 09:08:12.116206 systemd[1]: sshd@2-10.0.0.65:22-10.0.0.1:36786.service: Deactivated successfully. Jul 2 09:08:12.117543 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 09:08:12.120206 systemd-logind[1421]: Session 3 logged out. Waiting for processes to exit. Jul 2 09:08:12.121282 systemd[1]: Started sshd@3-10.0.0.65:22-10.0.0.1:36802.service - OpenSSH per-connection server daemon (10.0.0.1:36802). Jul 2 09:08:12.121871 systemd-logind[1421]: Removed session 3. Jul 2 09:08:12.156767 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 36802 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:08:12.157824 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:08:12.160936 systemd-logind[1421]: New session 4 of user core. Jul 2 09:08:12.167260 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 09:08:12.218345 sshd[1563]: pam_unix(sshd:session): session closed for user core Jul 2 09:08:12.233455 systemd[1]: sshd@3-10.0.0.65:22-10.0.0.1:36802.service: Deactivated successfully. Jul 2 09:08:12.234901 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 09:08:12.236112 systemd-logind[1421]: Session 4 logged out. Waiting for processes to exit. Jul 2 09:08:12.237384 systemd[1]: Started sshd@4-10.0.0.65:22-10.0.0.1:36806.service - OpenSSH per-connection server daemon (10.0.0.1:36806). Jul 2 09:08:12.238083 systemd-logind[1421]: Removed session 4. 
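Each login above runs as its own transient unit (sshd@1-10.0.0.65:22-10.0.0.1:36780.service and so on): sshd here is socket-activated per connection rather than one long-lived daemon, which is why every disconnect deactivates a whole service. A sketch of the template-unit pattern behind this, with field values assumed from standard systemd/OpenSSH packaging rather than read off this host:

    # sshd.socket (sketch): Accept=yes spawns one sshd@.service per connection
    [Socket]
    ListenStream=22
    Accept=yes

    # sshd@.service (sketch): sshd runs in inetd mode on the accepted socket
    [Service]
    ExecStart=-/usr/sbin/sshd -i
    StandardInput=socket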
Jul 2 09:08:12.273353 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 36806 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:08:12.274531 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:08:12.278273 systemd-logind[1421]: New session 5 of user core. Jul 2 09:08:12.288199 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 09:08:12.354962 sudo[1573]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 09:08:12.355213 sudo[1573]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 09:08:12.369804 sudo[1573]: pam_unix(sudo:session): session closed for user root Jul 2 09:08:12.371556 sshd[1570]: pam_unix(sshd:session): session closed for user core Jul 2 09:08:12.378362 systemd[1]: sshd@4-10.0.0.65:22-10.0.0.1:36806.service: Deactivated successfully. Jul 2 09:08:12.380399 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 09:08:12.381680 systemd-logind[1421]: Session 5 logged out. Waiting for processes to exit. Jul 2 09:08:12.382814 systemd[1]: Started sshd@5-10.0.0.65:22-10.0.0.1:36816.service - OpenSSH per-connection server daemon (10.0.0.1:36816). Jul 2 09:08:12.383519 systemd-logind[1421]: Removed session 5. Jul 2 09:08:12.419708 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 36816 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:08:12.420951 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:08:12.425047 systemd-logind[1421]: New session 6 of user core. Jul 2 09:08:12.432213 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 09:08:12.483359 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 09:08:12.483596 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 09:08:12.486865 sudo[1582]: pam_unix(sudo:session): session closed for user root Jul 2 09:08:12.491299 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 09:08:12.491522 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 09:08:12.509438 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 09:08:12.510572 auditctl[1585]: No rules Jul 2 09:08:12.511420 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 09:08:12.511649 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 09:08:12.513374 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 09:08:12.535478 augenrules[1603]: No rules Jul 2 09:08:12.536662 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 09:08:12.537775 sudo[1581]: pam_unix(sudo:session): session closed for user root Jul 2 09:08:12.539453 sshd[1578]: pam_unix(sshd:session): session closed for user core Jul 2 09:08:12.547191 systemd[1]: sshd@5-10.0.0.65:22-10.0.0.1:36816.service: Deactivated successfully. Jul 2 09:08:12.548513 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 09:08:12.549740 systemd-logind[1421]: Session 6 logged out. Waiting for processes to exit. Jul 2 09:08:12.550889 systemd[1]: Started sshd@6-10.0.0.65:22-10.0.0.1:36818.service - OpenSSH per-connection server daemon (10.0.0.1:36818). Jul 2 09:08:12.551647 systemd-logind[1421]: Removed session 6. 
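The sudo sequence above is the installer pruning audit rules: once the two files under /etc/audit/rules.d/ are removed, restarting audit-rules re-runs the rule loader, which merges whatever remains in that directory, hence "No rules" from both auditctl and augenrules. A rough manual equivalent, assuming the stock auditd tooling:

    # sketch of what the logged sudo commands amount to
    rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    augenrules --load   # merge /etc/audit/rules.d/*.rules and load the (now empty) set
    auditctl -l         # prints "No rules" when nothing is loaded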
Jul 2 09:08:12.587520 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 36818 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:08:12.588795 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:08:12.593022 systemd-logind[1421]: New session 7 of user core. Jul 2 09:08:12.601236 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 09:08:12.653018 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 09:08:12.653584 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 09:08:12.756280 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 09:08:12.756442 (dockerd)[1624]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 09:08:13.007221 dockerd[1624]: time="2024-07-02T09:08:13.007100686Z" level=info msg="Starting up" Jul 2 09:08:13.097291 dockerd[1624]: time="2024-07-02T09:08:13.097251779Z" level=info msg="Loading containers: start." Jul 2 09:08:13.182100 kernel: Initializing XFRM netlink socket Jul 2 09:08:13.238762 systemd-networkd[1381]: docker0: Link UP Jul 2 09:08:13.246647 dockerd[1624]: time="2024-07-02T09:08:13.246608707Z" level=info msg="Loading containers: done." Jul 2 09:08:13.298184 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2072586061-merged.mount: Deactivated successfully. Jul 2 09:08:13.299782 dockerd[1624]: time="2024-07-02T09:08:13.299278280Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 09:08:13.299782 dockerd[1624]: time="2024-07-02T09:08:13.299466395Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 09:08:13.299782 dockerd[1624]: time="2024-07-02T09:08:13.299573381Z" level=info msg="Daemon has completed initialization" Jul 2 09:08:13.326259 dockerd[1624]: time="2024-07-02T09:08:13.326204884Z" level=info msg="API listen on /run/docker.sock" Jul 2 09:08:13.326361 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 09:08:13.882390 containerd[1436]: time="2024-07-02T09:08:13.882350360Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jul 2 09:08:14.535192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1486099618.mount: Deactivated successfully. 
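The PullImage lines that start here come from containerd's CRI plugin, not from Docker: the control-plane images are being pre-pulled over the CRI socket (most likely by the kubeadm-driven install.sh, though the log does not say). A manual equivalent using crictl, assuming it is installed:

    # hypothetical manual equivalent of the CRI pull logged above
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.29.6
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images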
Jul 2 09:08:15.703760 containerd[1436]: time="2024-07-02T09:08:15.703700518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:15.704748 containerd[1436]: time="2024-07-02T09:08:15.704538285Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=32256349" Jul 2 09:08:15.705493 containerd[1436]: time="2024-07-02T09:08:15.705427930Z" level=info msg="ImageCreate event name:\"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:15.708356 containerd[1436]: time="2024-07-02T09:08:15.708325411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:15.709561 containerd[1436]: time="2024-07-02T09:08:15.709511241Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"32253147\" in 1.826748875s" Jul 2 09:08:15.709561 containerd[1436]: time="2024-07-02T09:08:15.709548486Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:46bfddf397d499c68edd3a505a02ab6b7a77acc6cbab684122699693c44fdc8a\"" Jul 2 09:08:15.729155 containerd[1436]: time="2024-07-02T09:08:15.729123525Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jul 2 09:08:16.964454 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 09:08:16.973236 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:08:17.109475 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 09:08:17.112793 (kubelet)[1837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 09:08:17.313766 containerd[1436]: time="2024-07-02T09:08:17.313649761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:17.314704 containerd[1436]: time="2024-07-02T09:08:17.314457741Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=29228086" Jul 2 09:08:17.315537 containerd[1436]: time="2024-07-02T09:08:17.315500901Z" level=info msg="ImageCreate event name:\"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:17.320534 containerd[1436]: time="2024-07-02T09:08:17.320488319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:17.323170 containerd[1436]: time="2024-07-02T09:08:17.323129815Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"30685210\" in 1.593969427s" Jul 2 09:08:17.323170 containerd[1436]: time="2024-07-02T09:08:17.323165545Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:9df0eeeacdd8f3cd9f3c3a08fbdfd665da4283115b53bf8b5d434382c02230a8\"" Jul 2 09:08:17.325139 kubelet[1837]: E0702 09:08:17.325095 1837 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 09:08:17.328851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 09:08:17.328986 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
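This is the same failure as at 09:08:06, one restart interval later: the timestamps (failures at 09:08:06 and 09:08:17, restarts scheduled at 09:08:16 and 09:08:27) suggest the unit runs with Restart=always and a 10-second delay, which matches stock kubeadm packaging; the loop is harmless and ends once the config file exists. A hypothetical drop-in expressing that policy, for illustration only:

    # sketch; kubeadm's real drop-in (10-kubeadm.conf) carries similar settings
    mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' >/etc/systemd/system/kubelet.service.d/10-restart.conf
    [Service]
    Restart=always
    RestartSec=10
    EOF
    systemctl daemon-reload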
Jul 2 09:08:17.342653 containerd[1436]: time="2024-07-02T09:08:17.342624901Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jul 2 09:08:18.322161 containerd[1436]: time="2024-07-02T09:08:18.322107308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:18.322670 containerd[1436]: time="2024-07-02T09:08:18.322636266Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=15578350" Jul 2 09:08:18.323332 containerd[1436]: time="2024-07-02T09:08:18.323298387Z" level=info msg="ImageCreate event name:\"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:18.328098 containerd[1436]: time="2024-07-02T09:08:18.326689084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:18.328098 containerd[1436]: time="2024-07-02T09:08:18.327829221Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"17035492\" in 985.170046ms" Jul 2 09:08:18.328098 containerd[1436]: time="2024-07-02T09:08:18.327858813Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:4d823a436d04c2aac5c8e0dd5a83efa81f1917a3c017feabc4917150cb90fa29\"" Jul 2 09:08:18.347323 containerd[1436]: time="2024-07-02T09:08:18.347287335Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jul 2 09:08:20.679824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount404685313.mount: Deactivated successfully. 
Jul 2 09:08:21.075260 containerd[1436]: time="2024-07-02T09:08:21.074803320Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:21.076024 containerd[1436]: time="2024-07-02T09:08:21.075793213Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=25052712" Jul 2 09:08:21.077165 containerd[1436]: time="2024-07-02T09:08:21.077055283Z" level=info msg="ImageCreate event name:\"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:21.080876 containerd[1436]: time="2024-07-02T09:08:21.080840612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:21.082302 containerd[1436]: time="2024-07-02T09:08:21.082256372Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"25051729\" in 2.734927726s" Jul 2 09:08:21.082393 containerd[1436]: time="2024-07-02T09:08:21.082305501Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:a75156450625cf630b7b9b1e8b7d881969131638181257d0d67db0876a25b32f\"" Jul 2 09:08:21.110601 containerd[1436]: time="2024-07-02T09:08:21.110559015Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 09:08:21.683956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3055822298.mount: Deactivated successfully. 
Jul 2 09:08:22.245629 containerd[1436]: time="2024-07-02T09:08:22.245430404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:22.246417 containerd[1436]: time="2024-07-02T09:08:22.246123737Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jul 2 09:08:22.247071 containerd[1436]: time="2024-07-02T09:08:22.246965436Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:22.250017 containerd[1436]: time="2024-07-02T09:08:22.249989000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:22.252763 containerd[1436]: time="2024-07-02T09:08:22.252341619Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.141738096s" Jul 2 09:08:22.252763 containerd[1436]: time="2024-07-02T09:08:22.252382422Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jul 2 09:08:22.271316 containerd[1436]: time="2024-07-02T09:08:22.271281491Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 09:08:22.690903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2292388843.mount: Deactivated successfully. 
Jul 2 09:08:22.695350 containerd[1436]: time="2024-07-02T09:08:22.695294890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:22.695811 containerd[1436]: time="2024-07-02T09:08:22.695759372Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jul 2 09:08:22.696717 containerd[1436]: time="2024-07-02T09:08:22.696683481Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:22.703783 containerd[1436]: time="2024-07-02T09:08:22.703733394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:22.704585 containerd[1436]: time="2024-07-02T09:08:22.704545284Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 433.226681ms" Jul 2 09:08:22.704585 containerd[1436]: time="2024-07-02T09:08:22.704578103Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jul 2 09:08:22.723479 containerd[1436]: time="2024-07-02T09:08:22.723444834Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 09:08:23.245625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount558744482.mount: Deactivated successfully. Jul 2 09:08:25.985890 containerd[1436]: time="2024-07-02T09:08:25.985843065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:25.986807 containerd[1436]: time="2024-07-02T09:08:25.986665088Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788" Jul 2 09:08:25.987416 containerd[1436]: time="2024-07-02T09:08:25.987390115Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:25.991388 containerd[1436]: time="2024-07-02T09:08:25.991341311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:25.992507 containerd[1436]: time="2024-07-02T09:08:25.992452519Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.268970943s" Jul 2 09:08:25.992507 containerd[1436]: time="2024-07-02T09:08:25.992484985Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jul 2 09:08:27.579454 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
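Note the version skew around the pause image: containerd's CRI config (dumped at 09:08:04) declares SandboxImage registry.k8s.io/pause:3.8, while the pre-pull above fetched pause:3.9 (the kubeadm default for this release), and pause:3.8 is pulled again at 09:08:32 for the control-plane sandboxes, so both tags end up on the node. The usual cleanup is to point containerd at the same tag kubeadm uses, roughly:

    # /etc/containerd/config.toml (sketch, containerd 1.x CRI plugin)
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.9"

followed by a systemctl restart containerd.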
Jul 2 09:08:27.589303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:08:27.671152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:08:27.674885 (kubelet)[2063]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 09:08:27.713063 kubelet[2063]: E0702 09:08:27.712998 2063 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 09:08:27.715949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 09:08:27.716112 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 09:08:30.470398 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:08:30.481250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:08:30.497028 systemd[1]: Reloading requested from client PID 2079 ('systemctl') (unit session-7.scope)... Jul 2 09:08:30.497042 systemd[1]: Reloading... Jul 2 09:08:30.566094 zram_generator::config[2117]: No configuration found. Jul 2 09:08:30.642865 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 09:08:30.696018 systemd[1]: Reloading finished in 198 ms. Jul 2 09:08:30.736649 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:08:30.738921 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 09:08:30.739208 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:08:30.740553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:08:30.830252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:08:30.834233 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 09:08:30.870771 kubelet[2163]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 09:08:30.870771 kubelet[2163]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 09:08:30.870771 kubelet[2163]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
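The three deprecation warnings are the kubelet asking for CLI flags to move into its config file; all but --pod-infra-container-image have KubeletConfiguration equivalents (and, per the server.go message below, the sandbox image now comes from the CRI anyway). The docker.socket warning during the reload is the same genre of fix: pointing ListenStream= at /run/docker.sock instead of the legacy /var/run path. A sketch of the config-file equivalents, field names per KubeletConfiguration v1beta1, endpoint value assumed from the containerd socket logged earlier:

    # sketch: config-file equivalents of the deprecated flags
    cat <<'EOF' >>/var/lib/kubelet/config.yaml
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF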
Jul 2 09:08:30.871090 kubelet[2163]: I0702 09:08:30.870809 2163 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 09:08:31.474111 kubelet[2163]: I0702 09:08:31.473215 2163 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 09:08:31.474111 kubelet[2163]: I0702 09:08:31.473247 2163 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 09:08:31.474111 kubelet[2163]: I0702 09:08:31.473445 2163 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 09:08:31.507328 kubelet[2163]: E0702 09:08:31.507259 2163 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.65:6443: connect: connection refused Jul 2 09:08:31.507328 kubelet[2163]: I0702 09:08:31.507299 2163 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 09:08:31.519325 kubelet[2163]: I0702 09:08:31.519296 2163 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 09:08:31.519571 kubelet[2163]: I0702 09:08:31.519546 2163 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 09:08:31.519720 kubelet[2163]: I0702 09:08:31.519706 2163 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 09:08:31.519790 kubelet[2163]: I0702 09:08:31.519729 2163 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 09:08:31.519790 kubelet[2163]: I0702 09:08:31.519737 2163 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 09:08:31.519890 kubelet[2163]: I0702 09:08:31.519831 2163 state_mem.go:36] "Initialized new in-memory state store" Jul 2 09:08:31.523746 kubelet[2163]: I0702 09:08:31.523722 2163 kubelet.go:396] "Attempting to sync node with API server" Jul 2 09:08:31.523746 kubelet[2163]: I0702 
09:08:31.523745 2163 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 09:08:31.523811 kubelet[2163]: I0702 09:08:31.523764 2163 kubelet.go:312] "Adding apiserver pod source" Jul 2 09:08:31.523811 kubelet[2163]: I0702 09:08:31.523777 2163 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 09:08:31.524210 kubelet[2163]: W0702 09:08:31.524165 2163 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 2 09:08:31.524271 kubelet[2163]: E0702 09:08:31.524217 2163 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 2 09:08:31.525077 kubelet[2163]: W0702 09:08:31.524358 2163 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 2 09:08:31.525077 kubelet[2163]: E0702 09:08:31.524391 2163 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 2 09:08:31.528086 kubelet[2163]: I0702 09:08:31.527767 2163 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 09:08:31.531916 kubelet[2163]: I0702 09:08:31.531894 2163 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 09:08:31.532026 kubelet[2163]: W0702 09:08:31.532006 2163 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
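The run of "connection refused" errors against https://10.0.0.65:6443 is the kubelet racing its own API server: client rotation is on, so it keeps retrying the bootstrap CertificateSigningRequest, and the node/service informers keep re-listing, until the static-pod kube-apiserver created below starts answering on 6443. Once it does, progress can be checked from any working kubeconfig, e.g.:

    # after the apiserver comes up: the bootstrap CSR appears and the node registers
    kubectl get csr
    kubectl get node localhost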
Jul 2 09:08:31.532857 kubelet[2163]: I0702 09:08:31.532747 2163 server.go:1256] "Started kubelet" Jul 2 09:08:31.532921 kubelet[2163]: I0702 09:08:31.532906 2163 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 09:08:31.533715 kubelet[2163]: I0702 09:08:31.533686 2163 server.go:461] "Adding debug handlers to kubelet server" Jul 2 09:08:31.535043 kubelet[2163]: I0702 09:08:31.534620 2163 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 09:08:31.535043 kubelet[2163]: I0702 09:08:31.534884 2163 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 09:08:31.536175 kubelet[2163]: I0702 09:08:31.536022 2163 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 09:08:31.536842 kubelet[2163]: E0702 09:08:31.536806 2163 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 09:08:31.536842 kubelet[2163]: I0702 09:08:31.536836 2163 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 09:08:31.536996 kubelet[2163]: I0702 09:08:31.536945 2163 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 09:08:31.537020 kubelet[2163]: I0702 09:08:31.537003 2163 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 09:08:31.537791 kubelet[2163]: W0702 09:08:31.537258 2163 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 2 09:08:31.537791 kubelet[2163]: E0702 09:08:31.537298 2163 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 2 09:08:31.538096 kubelet[2163]: E0702 09:08:31.537997 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="200ms" Jul 2 09:08:31.538392 kubelet[2163]: E0702 09:08:31.538371 2163 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 09:08:31.539280 kubelet[2163]: E0702 09:08:31.539257 2163 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.65:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.65:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17de5a39b11565a5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-07-02 09:08:31.532713381 +0000 UTC m=+0.695108176,LastTimestamp:2024-07-02 09:08:31.532713381 +0000 UTC m=+0.695108176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 2 09:08:31.539412 kubelet[2163]: I0702 09:08:31.539310 2163 factory.go:221] Registration of the systemd container factory successfully Jul 2 09:08:31.539552 kubelet[2163]: I0702 09:08:31.539531 2163 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 09:08:31.540497 kubelet[2163]: I0702 09:08:31.540478 2163 factory.go:221] Registration of the containerd container factory successfully Jul 2 09:08:31.548754 kubelet[2163]: I0702 09:08:31.548714 2163 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 09:08:31.550219 kubelet[2163]: I0702 09:08:31.550192 2163 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 09:08:31.550219 kubelet[2163]: I0702 09:08:31.550215 2163 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 09:08:31.550294 kubelet[2163]: I0702 09:08:31.550229 2163 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 09:08:31.550294 kubelet[2163]: E0702 09:08:31.550276 2163 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 09:08:31.553471 kubelet[2163]: W0702 09:08:31.553431 2163 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 2 09:08:31.553471 kubelet[2163]: E0702 09:08:31.553474 2163 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 2 09:08:31.553780 kubelet[2163]: I0702 09:08:31.553756 2163 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 09:08:31.553780 kubelet[2163]: I0702 09:08:31.553772 2163 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 09:08:31.553834 kubelet[2163]: I0702 09:08:31.553787 2163 state_mem.go:36] "Initialized new in-memory state store" Jul 2 09:08:31.555295 kubelet[2163]: I0702 09:08:31.555266 2163 policy_none.go:49] "None policy: Start" Jul 2 09:08:31.555752 kubelet[2163]: I0702 09:08:31.555725 2163 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 09:08:31.555786 kubelet[2163]: I0702 
09:08:31.555769 2163 state_mem.go:35] "Initializing new in-memory state store" Jul 2 09:08:31.564988 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 09:08:31.578735 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 09:08:31.581310 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 2 09:08:31.596779 kubelet[2163]: I0702 09:08:31.596690 2163 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 09:08:31.597042 kubelet[2163]: I0702 09:08:31.596916 2163 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 09:08:31.598072 kubelet[2163]: E0702 09:08:31.597984 2163 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 09:08:31.638355 kubelet[2163]: I0702 09:08:31.638329 2163 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 09:08:31.638741 kubelet[2163]: E0702 09:08:31.638720 2163 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Jul 2 09:08:31.651041 kubelet[2163]: I0702 09:08:31.651004 2163 topology_manager.go:215] "Topology Admit Handler" podUID="f0c584b3c33aee5868506dfb297c9b5b" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 09:08:31.651864 kubelet[2163]: I0702 09:08:31.651836 2163 topology_manager.go:215] "Topology Admit Handler" podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 09:08:31.652683 kubelet[2163]: I0702 09:08:31.652660 2163 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 09:08:31.657580 systemd[1]: Created slice kubepods-burstable-podf0c584b3c33aee5868506dfb297c9b5b.slice - libcontainer container kubepods-burstable-podf0c584b3c33aee5868506dfb297c9b5b.slice. Jul 2 09:08:31.679906 systemd[1]: Created slice kubepods-burstable-pod42b008e702ec2a5b396aebedf13804b4.slice - libcontainer container kubepods-burstable-pod42b008e702ec2a5b396aebedf13804b4.slice. Jul 2 09:08:31.683510 systemd[1]: Created slice kubepods-burstable-pod593d08bacb1d5de22dcb8f5224a99e3c.slice - libcontainer container kubepods-burstable-pod593d08bacb1d5de22dcb8f5224a99e3c.slice. 
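The three "Topology Admit Handler" entries are the control-plane static pods: with no API server reachable, the kubelet reads them straight from the static pod path added to its sync loop at 09:08:31.523 (/etc/kubernetes/manifests) and creates the per-pod kubepods-burstable-pod<uid>.slice cgroups seen above. Static pod manifests are ordinary Pod specs; a skeleton in the usual kubeadm layout, with contents assumed rather than read from this host:

    # /etc/kubernetes/manifests/kube-apiserver.yaml (skeleton, kubeadm-style)
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.29.6
        command: ["kube-apiserver", "--advertise-address=10.0.0.65", "..."]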
Jul 2 09:08:31.739253 kubelet[2163]: E0702 09:08:31.739179 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="400ms" Jul 2 09:08:31.838821 kubelet[2163]: I0702 09:08:31.838724 2163 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f0c584b3c33aee5868506dfb297c9b5b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f0c584b3c33aee5868506dfb297c9b5b\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:08:31.838821 kubelet[2163]: I0702 09:08:31.838760 2163 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f0c584b3c33aee5868506dfb297c9b5b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f0c584b3c33aee5868506dfb297c9b5b\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:08:31.838821 kubelet[2163]: I0702 09:08:31.838793 2163 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:08:31.838821 kubelet[2163]: I0702 09:08:31.838814 2163 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:08:31.838821 kubelet[2163]: I0702 09:08:31.838834 2163 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:08:31.839019 kubelet[2163]: I0702 09:08:31.838868 2163 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f0c584b3c33aee5868506dfb297c9b5b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f0c584b3c33aee5868506dfb297c9b5b\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:08:31.839019 kubelet[2163]: I0702 09:08:31.838915 2163 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:08:31.839019 kubelet[2163]: I0702 09:08:31.838951 2163 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:08:31.839019 
kubelet[2163]: I0702 09:08:31.838976 2163 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost" Jul 2 09:08:31.839759 kubelet[2163]: I0702 09:08:31.839666 2163 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 09:08:31.839973 kubelet[2163]: E0702 09:08:31.839955 2163 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Jul 2 09:08:31.978669 kubelet[2163]: E0702 09:08:31.978634 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:31.981117 containerd[1436]: time="2024-07-02T09:08:31.981074857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f0c584b3c33aee5868506dfb297c9b5b,Namespace:kube-system,Attempt:0,}" Jul 2 09:08:31.982309 kubelet[2163]: E0702 09:08:31.982275 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:31.983471 containerd[1436]: time="2024-07-02T09:08:31.983183854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,}" Jul 2 09:08:31.985210 kubelet[2163]: E0702 09:08:31.985183 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:31.985942 containerd[1436]: time="2024-07-02T09:08:31.985872258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,}" Jul 2 09:08:32.139759 kubelet[2163]: E0702 09:08:32.139660 2163 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="800ms" Jul 2 09:08:32.241062 kubelet[2163]: I0702 09:08:32.241022 2163 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 09:08:32.241380 kubelet[2163]: E0702 09:08:32.241363 2163 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Jul 2 09:08:32.395637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3443327402.mount: Deactivated successfully. 
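The "Nameserver limits exceeded" warnings fire because the host's resolv.conf lists more nameservers than the resolver limit of three, so the kubelet truncates what it hands to pods down to the three shown. Trimming the upstream file to three entries silences the warning; the applied result is effectively:

    # effective nameserver set after truncation (glibc and kubelet cap at 3)
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8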
Jul 2 09:08:32.400483 containerd[1436]: time="2024-07-02T09:08:32.400434865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:08:32.401623 containerd[1436]: time="2024-07-02T09:08:32.401578975Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 09:08:32.402194 containerd[1436]: time="2024-07-02T09:08:32.402170886Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:08:32.403165 containerd[1436]: time="2024-07-02T09:08:32.403116678Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:08:32.404153 containerd[1436]: time="2024-07-02T09:08:32.404125121Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:08:32.404710 containerd[1436]: time="2024-07-02T09:08:32.404597016Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 09:08:32.405218 containerd[1436]: time="2024-07-02T09:08:32.405189167Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 2 09:08:32.407758 containerd[1436]: time="2024-07-02T09:08:32.407703928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 09:08:32.409759 containerd[1436]: time="2024-07-02T09:08:32.409735424Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 428.557955ms" Jul 2 09:08:32.410423 containerd[1436]: time="2024-07-02T09:08:32.410394108Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 424.417876ms" Jul 2 09:08:32.411107 containerd[1436]: time="2024-07-02T09:08:32.411037741Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 427.767489ms" Jul 2 09:08:32.461671 kubelet[2163]: W0702 09:08:32.461621 2163 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 2 09:08:32.461801 kubelet[2163]: E0702 
09:08:32.461789 2163 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 2 09:08:32.486020 kubelet[2163]: W0702 09:08:32.485971 2163 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 2 09:08:32.486160 kubelet[2163]: E0702 09:08:32.486149 2163 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 2 09:08:32.504614 kubelet[2163]: W0702 09:08:32.504566 2163 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 2 09:08:32.504614 kubelet[2163]: E0702 09:08:32.504618 2163 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 2 09:08:32.574298 containerd[1436]: time="2024-07-02T09:08:32.574155439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:08:32.574298 containerd[1436]: time="2024-07-02T09:08:32.574223093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:08:32.574298 containerd[1436]: time="2024-07-02T09:08:32.574236583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:08:32.574298 containerd[1436]: time="2024-07-02T09:08:32.574246391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:08:32.575395 containerd[1436]: time="2024-07-02T09:08:32.575248589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:08:32.575395 containerd[1436]: time="2024-07-02T09:08:32.575301310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:08:32.575395 containerd[1436]: time="2024-07-02T09:08:32.575315001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:08:32.575395 containerd[1436]: time="2024-07-02T09:08:32.575324329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:08:32.579573 containerd[1436]: time="2024-07-02T09:08:32.579373190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:08:32.579726 containerd[1436]: time="2024-07-02T09:08:32.579655054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:08:32.579726 containerd[1436]: time="2024-07-02T09:08:32.579686279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:08:32.579887 containerd[1436]: time="2024-07-02T09:08:32.579701451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:08:32.594197 systemd[1]: Started cri-containerd-235fe16a3ec0620766c5b0d6a68cda42775ba928ddacadcd253999ad5908290f.scope - libcontainer container 235fe16a3ec0620766c5b0d6a68cda42775ba928ddacadcd253999ad5908290f. Jul 2 09:08:32.595205 systemd[1]: Started cri-containerd-5085d29e1908f95457f3c0abccd504348853558582b819fd185cd7d1c36b5c86.scope - libcontainer container 5085d29e1908f95457f3c0abccd504348853558582b819fd185cd7d1c36b5c86. Jul 2 09:08:32.597438 systemd[1]: Started cri-containerd-468890650d089b35c70c054f23dcc01fb52a2e88c9e9fc725f55fab81a39df63.scope - libcontainer container 468890650d089b35c70c054f23dcc01fb52a2e88c9e9fc725f55fab81a39df63. Jul 2 09:08:32.627251 containerd[1436]: time="2024-07-02T09:08:32.627211491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"235fe16a3ec0620766c5b0d6a68cda42775ba928ddacadcd253999ad5908290f\"" Jul 2 09:08:32.628241 kubelet[2163]: E0702 09:08:32.628123 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:32.631733 containerd[1436]: time="2024-07-02T09:08:32.631341937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5085d29e1908f95457f3c0abccd504348853558582b819fd185cd7d1c36b5c86\"" Jul 2 09:08:32.631733 containerd[1436]: time="2024-07-02T09:08:32.631637012Z" level=info msg="CreateContainer within sandbox \"235fe16a3ec0620766c5b0d6a68cda42775ba928ddacadcd253999ad5908290f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 09:08:32.631829 kubelet[2163]: E0702 09:08:32.631743 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:32.633661 containerd[1436]: time="2024-07-02T09:08:32.633513625Z" level=info msg="CreateContainer within sandbox \"5085d29e1908f95457f3c0abccd504348853558582b819fd185cd7d1c36b5c86\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 09:08:32.633922 containerd[1436]: time="2024-07-02T09:08:32.633762183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f0c584b3c33aee5868506dfb297c9b5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"468890650d089b35c70c054f23dcc01fb52a2e88c9e9fc725f55fab81a39df63\"" Jul 2 09:08:32.634522 kubelet[2163]: E0702 09:08:32.634504 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 
09:08:32.636406 containerd[1436]: time="2024-07-02T09:08:32.636374461Z" level=info msg="CreateContainer within sandbox \"468890650d089b35c70c054f23dcc01fb52a2e88c9e9fc725f55fab81a39df63\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 09:08:32.648997 containerd[1436]: time="2024-07-02T09:08:32.648898585Z" level=info msg="CreateContainer within sandbox \"235fe16a3ec0620766c5b0d6a68cda42775ba928ddacadcd253999ad5908290f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f73d29a01e719d57806aa82f276569dda0afd15fddba5b838c8693d704427d7f\"" Jul 2 09:08:32.649590 containerd[1436]: time="2024-07-02T09:08:32.649566957Z" level=info msg="StartContainer for \"f73d29a01e719d57806aa82f276569dda0afd15fddba5b838c8693d704427d7f\"" Jul 2 09:08:32.652895 containerd[1436]: time="2024-07-02T09:08:32.652857575Z" level=info msg="CreateContainer within sandbox \"468890650d089b35c70c054f23dcc01fb52a2e88c9e9fc725f55fab81a39df63\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"312d2b30781a6689d7d53d5d01e7b5a306de95372399af13eb04d86c510c0f92\"" Jul 2 09:08:32.653284 containerd[1436]: time="2024-07-02T09:08:32.653259495Z" level=info msg="StartContainer for \"312d2b30781a6689d7d53d5d01e7b5a306de95372399af13eb04d86c510c0f92\"" Jul 2 09:08:32.653443 containerd[1436]: time="2024-07-02T09:08:32.653418061Z" level=info msg="CreateContainer within sandbox \"5085d29e1908f95457f3c0abccd504348853558582b819fd185cd7d1c36b5c86\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e10ea8bddcd2522ebfd41bbbec37a3e7f50d7ed1399be6f37a6f86a617106ceb\"" Jul 2 09:08:32.653800 containerd[1436]: time="2024-07-02T09:08:32.653775465Z" level=info msg="StartContainer for \"e10ea8bddcd2522ebfd41bbbec37a3e7f50d7ed1399be6f37a6f86a617106ceb\"" Jul 2 09:08:32.676200 systemd[1]: Started cri-containerd-f73d29a01e719d57806aa82f276569dda0afd15fddba5b838c8693d704427d7f.scope - libcontainer container f73d29a01e719d57806aa82f276569dda0afd15fddba5b838c8693d704427d7f. Jul 2 09:08:32.679554 systemd[1]: Started cri-containerd-e10ea8bddcd2522ebfd41bbbec37a3e7f50d7ed1399be6f37a6f86a617106ceb.scope - libcontainer container e10ea8bddcd2522ebfd41bbbec37a3e7f50d7ed1399be6f37a6f86a617106ceb. Jul 2 09:08:32.683767 systemd[1]: Started cri-containerd-312d2b30781a6689d7d53d5d01e7b5a306de95372399af13eb04d86c510c0f92.scope - libcontainer container 312d2b30781a6689d7d53d5d01e7b5a306de95372399af13eb04d86c510c0f92. 
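The containerd/systemd interleaving above is the standard CRI sequence for each static control-plane pod: RunPodSandbox returns a sandbox id, CreateContainer places the real container inside that sandbox, StartContainer launches it, and systemd tracks each as a cri-containerd-<id>.scope transient unit. A rough Go sketch of the same three calls against the CRI v1 API (socket path, image tag, and metadata are illustrative, not taken from kubelet source):

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-apiserver-localhost",
                Namespace: "kube-system",
                Uid:       "f0c584b3c33aee5868506dfb297c9b5b",
            },
        }

        // 1. RunPodSandbox -> the long id systemd shows as cri-containerd-<id>.scope.
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            panic(err)
        }

        // 2. CreateContainer within that sandbox (image tag assumed here).
        cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-apiserver"},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-apiserver:v1.29.2"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            panic(err)
        }

        // 3. StartContainer -> "StartContainer for <id> returns successfully".
        if _, err := rt.StartContainer(ctx,
            &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
            panic(err)
        }
        fmt.Println("started", cc.ContainerId)
    }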
Jul 2 09:08:32.733941 containerd[1436]: time="2024-07-02T09:08:32.729963081Z" level=info msg="StartContainer for \"f73d29a01e719d57806aa82f276569dda0afd15fddba5b838c8693d704427d7f\" returns successfully" Jul 2 09:08:32.733941 containerd[1436]: time="2024-07-02T09:08:32.730124450Z" level=info msg="StartContainer for \"e10ea8bddcd2522ebfd41bbbec37a3e7f50d7ed1399be6f37a6f86a617106ceb\" returns successfully" Jul 2 09:08:32.733941 containerd[1436]: time="2024-07-02T09:08:32.730157156Z" level=info msg="StartContainer for \"312d2b30781a6689d7d53d5d01e7b5a306de95372399af13eb04d86c510c0f92\" returns successfully" Jul 2 09:08:32.911173 kubelet[2163]: W0702 09:08:32.911022 2163 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 2 09:08:32.911173 kubelet[2163]: E0702 09:08:32.911105 2163 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 2 09:08:33.042908 kubelet[2163]: I0702 09:08:33.042862 2163 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 09:08:33.560196 kubelet[2163]: E0702 09:08:33.560162 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:33.563401 kubelet[2163]: E0702 09:08:33.563380 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:33.564015 kubelet[2163]: E0702 09:08:33.563994 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:34.377585 kubelet[2163]: E0702 09:08:34.377543 2163 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 09:08:34.467436 kubelet[2163]: I0702 09:08:34.466327 2163 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 09:08:34.475721 kubelet[2163]: E0702 09:08:34.475685 2163 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 09:08:34.565129 kubelet[2163]: E0702 09:08:34.565009 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:34.565129 kubelet[2163]: E0702 09:08:34.565011 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:34.576225 kubelet[2163]: E0702 09:08:34.576198 2163 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 09:08:34.676915 kubelet[2163]: E0702 09:08:34.676611 2163 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 09:08:34.777138 kubelet[2163]: E0702 09:08:34.777096 2163 kubelet_node_status.go:462] "Error getting the 
current node from lister" err="node \"localhost\" not found" Jul 2 09:08:34.877745 kubelet[2163]: E0702 09:08:34.877701 2163 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 09:08:35.393659 kubelet[2163]: E0702 09:08:35.393633 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:35.526356 kubelet[2163]: I0702 09:08:35.526259 2163 apiserver.go:52] "Watching apiserver" Jul 2 09:08:35.537456 kubelet[2163]: I0702 09:08:35.537423 2163 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 09:08:35.565727 kubelet[2163]: E0702 09:08:35.565707 2163 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:36.971100 systemd[1]: Reloading requested from client PID 2438 ('systemctl') (unit session-7.scope)... Jul 2 09:08:36.971118 systemd[1]: Reloading... Jul 2 09:08:37.035101 zram_generator::config[2478]: No configuration found. Jul 2 09:08:37.113568 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 09:08:37.178168 systemd[1]: Reloading finished in 206 ms. Jul 2 09:08:37.208192 kubelet[2163]: I0702 09:08:37.208165 2163 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 09:08:37.208315 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:08:37.213506 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 09:08:37.214560 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:08:37.227427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 09:08:37.313446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 09:08:37.317512 (kubelet)[2517]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 09:08:37.357370 kubelet[2517]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 09:08:37.357370 kubelet[2517]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 09:08:37.357370 kubelet[2517]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
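The kubelet restart at 09:08:37 surfaces three deprecated flags. Two of them (--container-runtime-endpoint, --volume-plugin-dir) should, per the messages, move into the file passed via --config; the third, --pod-infra-container-image, is slated for removal since the image garbage collector now asks the CRI for the sandbox image. As a hedged sketch of that migration, this builds a minimal KubeletConfiguration with the published Go types and prints it as YAML (the field choice is an assumption based on the deprecation notices, not this host's actual config):

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        cfg := kubeletv1beta1.KubeletConfiguration{
            TypeMeta: metav1.TypeMeta{
                APIVersion: "kubelet.config.k8s.io/v1beta1",
                Kind:       "KubeletConfiguration",
            },
            // Assumed replacement for --container-runtime-endpoint.
            ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
            // Assumed replacement for --volume-plugin-dir (flexvolume drivers).
            VolumePluginDir: "/var/lib/kubelet/volumeplugins",
        }
        out, err := yaml.Marshal(&cfg) // emits defaults for unset fields too
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }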
Jul 2 09:08:37.357693 kubelet[2517]: I0702 09:08:37.357414 2517 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 09:08:37.361395 kubelet[2517]: I0702 09:08:37.361273 2517 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 09:08:37.361395 kubelet[2517]: I0702 09:08:37.361301 2517 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 09:08:37.361596 kubelet[2517]: I0702 09:08:37.361576 2517 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 09:08:37.363096 kubelet[2517]: I0702 09:08:37.363068 2517 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 09:08:37.366993 kubelet[2517]: I0702 09:08:37.366539 2517 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 09:08:37.374161 kubelet[2517]: I0702 09:08:37.373462 2517 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 09:08:37.374161 kubelet[2517]: I0702 09:08:37.373676 2517 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 09:08:37.374161 kubelet[2517]: I0702 09:08:37.373835 2517 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 09:08:37.374161 kubelet[2517]: I0702 09:08:37.373851 2517 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 09:08:37.374161 kubelet[2517]: I0702 09:08:37.373863 2517 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 09:08:37.374161 kubelet[2517]: I0702 09:08:37.373904 2517 state_mem.go:36] "Initialized new in-memory state store" Jul 2 09:08:37.374405 kubelet[2517]: I0702 09:08:37.373998 2517 kubelet.go:396] "Attempting to sync node with API server" Jul 2 09:08:37.374405 kubelet[2517]: I0702 09:08:37.374011 2517 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 09:08:37.374405 kubelet[2517]: I0702 09:08:37.374032 2517 kubelet.go:312] "Adding apiserver pod source" Jul 2 09:08:37.374405 
kubelet[2517]: I0702 09:08:37.374045 2517 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 09:08:37.381388 kubelet[2517]: I0702 09:08:37.375276 2517 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 09:08:37.381388 kubelet[2517]: I0702 09:08:37.375432 2517 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 09:08:37.381388 kubelet[2517]: I0702 09:08:37.375812 2517 server.go:1256] "Started kubelet" Jul 2 09:08:37.381388 kubelet[2517]: I0702 09:08:37.376692 2517 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 09:08:37.381388 kubelet[2517]: I0702 09:08:37.377271 2517 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 09:08:37.381388 kubelet[2517]: I0702 09:08:37.377424 2517 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 09:08:37.381388 kubelet[2517]: I0702 09:08:37.377590 2517 server.go:461] "Adding debug handlers to kubelet server" Jul 2 09:08:37.384041 kubelet[2517]: I0702 09:08:37.384000 2517 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 09:08:37.392057 kubelet[2517]: E0702 09:08:37.389305 2517 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 09:08:37.392057 kubelet[2517]: I0702 09:08:37.389343 2517 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 09:08:37.392057 kubelet[2517]: I0702 09:08:37.389445 2517 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 09:08:37.392057 kubelet[2517]: I0702 09:08:37.389568 2517 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 09:08:37.398783 kubelet[2517]: E0702 09:08:37.398171 2517 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 09:08:37.399698 kubelet[2517]: I0702 09:08:37.399653 2517 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 09:08:37.401760 kubelet[2517]: I0702 09:08:37.401722 2517 factory.go:221] Registration of the containerd container factory successfully Jul 2 09:08:37.403194 kubelet[2517]: I0702 09:08:37.401738 2517 factory.go:221] Registration of the systemd container factory successfully Jul 2 09:08:37.404222 kubelet[2517]: I0702 09:08:37.404109 2517 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 09:08:37.405212 kubelet[2517]: I0702 09:08:37.404944 2517 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 09:08:37.405212 kubelet[2517]: I0702 09:08:37.404963 2517 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 09:08:37.405212 kubelet[2517]: I0702 09:08:37.404979 2517 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 09:08:37.405212 kubelet[2517]: E0702 09:08:37.405023 2517 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 09:08:37.437946 kubelet[2517]: I0702 09:08:37.437916 2517 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 09:08:37.437946 kubelet[2517]: I0702 09:08:37.437936 2517 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 09:08:37.437946 kubelet[2517]: I0702 09:08:37.437953 2517 state_mem.go:36] "Initialized new in-memory state store" Jul 2 09:08:37.438117 kubelet[2517]: I0702 09:08:37.438110 2517 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 09:08:37.438142 kubelet[2517]: I0702 09:08:37.438130 2517 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 09:08:37.438142 kubelet[2517]: I0702 09:08:37.438137 2517 policy_none.go:49] "None policy: Start" Jul 2 09:08:37.438729 kubelet[2517]: I0702 09:08:37.438668 2517 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 09:08:37.438729 kubelet[2517]: I0702 09:08:37.438694 2517 state_mem.go:35] "Initializing new in-memory state store" Jul 2 09:08:37.438879 kubelet[2517]: I0702 09:08:37.438863 2517 state_mem.go:75] "Updated machine memory state" Jul 2 09:08:37.442781 kubelet[2517]: I0702 09:08:37.442764 2517 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 09:08:37.443381 kubelet[2517]: I0702 09:08:37.442975 2517 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 09:08:37.493737 kubelet[2517]: I0702 09:08:37.493594 2517 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 09:08:37.499927 kubelet[2517]: I0702 09:08:37.499864 2517 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jul 2 09:08:37.500025 kubelet[2517]: I0702 09:08:37.499948 2517 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 09:08:37.505618 kubelet[2517]: I0702 09:08:37.505587 2517 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 09:08:37.505737 kubelet[2517]: I0702 09:08:37.505668 2517 topology_manager.go:215] "Topology Admit Handler" podUID="f0c584b3c33aee5868506dfb297c9b5b" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 09:08:37.505737 kubelet[2517]: I0702 09:08:37.505740 2517 topology_manager.go:215] "Topology Admit Handler" podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 09:08:37.511207 kubelet[2517]: E0702 09:08:37.511146 2517 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 2 09:08:37.592085 kubelet[2517]: I0702 09:08:37.592041 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 2 09:08:37.592085 kubelet[2517]: I0702 09:08:37.592094 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:08:37.592217 kubelet[2517]: I0702 09:08:37.592117 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost" Jul 2 09:08:37.592217 kubelet[2517]: I0702 09:08:37.592136 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f0c584b3c33aee5868506dfb297c9b5b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f0c584b3c33aee5868506dfb297c9b5b\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:08:37.592217 kubelet[2517]: I0702 09:08:37.592165 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f0c584b3c33aee5868506dfb297c9b5b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f0c584b3c33aee5868506dfb297c9b5b\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:08:37.592217 kubelet[2517]: I0702 09:08:37.592183 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:08:37.592217 kubelet[2517]: I0702 09:08:37.592204 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f0c584b3c33aee5868506dfb297c9b5b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f0c584b3c33aee5868506dfb297c9b5b\") " pod="kube-system/kube-apiserver-localhost" Jul 2 09:08:37.592346 kubelet[2517]: I0702 09:08:37.592225 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:08:37.592346 kubelet[2517]: I0702 09:08:37.592250 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 09:08:37.813128 kubelet[2517]: E0702 09:08:37.812750 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:37.814396 kubelet[2517]: E0702 09:08:37.813466 2517 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:37.814396 kubelet[2517]: E0702 09:08:37.813834 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:38.375570 kubelet[2517]: I0702 09:08:38.375305 2517 apiserver.go:52] "Watching apiserver" Jul 2 09:08:38.389784 kubelet[2517]: I0702 09:08:38.389743 2517 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 09:08:38.419859 kubelet[2517]: E0702 09:08:38.419810 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:38.427628 kubelet[2517]: E0702 09:08:38.427060 2517 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 2 09:08:38.427628 kubelet[2517]: E0702 09:08:38.427357 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:38.430917 kubelet[2517]: E0702 09:08:38.430651 2517 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 09:08:38.431265 kubelet[2517]: E0702 09:08:38.431248 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:38.444505 kubelet[2517]: I0702 09:08:38.444478 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.444425181 podStartE2EDuration="3.444425181s" podCreationTimestamp="2024-07-02 09:08:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:08:38.44442394 +0000 UTC m=+1.123795917" watchObservedRunningTime="2024-07-02 09:08:38.444425181 +0000 UTC m=+1.123797158" Jul 2 09:08:38.462910 kubelet[2517]: I0702 09:08:38.462849 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.462816371 podStartE2EDuration="1.462816371s" podCreationTimestamp="2024-07-02 09:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:08:38.462522408 +0000 UTC m=+1.141894385" watchObservedRunningTime="2024-07-02 09:08:38.462816371 +0000 UTC m=+1.142188348" Jul 2 09:08:38.463071 kubelet[2517]: I0702 09:08:38.462935 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.462918894 podStartE2EDuration="1.462918894s" podCreationTimestamp="2024-07-02 09:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:08:38.456070983 +0000 UTC m=+1.135442960" watchObservedRunningTime="2024-07-02 09:08:38.462918894 +0000 UTC m=+1.142290871" Jul 2 09:08:39.421104 kubelet[2517]: E0702 09:08:39.420997 2517 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:39.421104 kubelet[2517]: E0702 09:08:39.421004 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:40.628980 kubelet[2517]: E0702 09:08:40.628943 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:41.387984 sudo[1614]: pam_unix(sudo:session): session closed for user root Jul 2 09:08:41.389634 sshd[1611]: pam_unix(sshd:session): session closed for user core Jul 2 09:08:41.392767 systemd[1]: sshd@6-10.0.0.65:22-10.0.0.1:36818.service: Deactivated successfully. Jul 2 09:08:41.394537 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 09:08:41.394829 systemd[1]: session-7.scope: Consumed 6.511s CPU time, 137.3M memory peak, 0B memory swap peak. Jul 2 09:08:41.395957 systemd-logind[1421]: Session 7 logged out. Waiting for processes to exit. Jul 2 09:08:41.396917 systemd-logind[1421]: Removed session 7. Jul 2 09:08:42.719594 kubelet[2517]: E0702 09:08:42.719494 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:43.428175 kubelet[2517]: E0702 09:08:43.427925 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:45.338153 kubelet[2517]: E0702 09:08:45.338118 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:45.431175 kubelet[2517]: E0702 09:08:45.431098 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:50.141084 update_engine[1422]: I0702 09:08:50.140489 1422 update_attempter.cc:509] Updating boot flags... Jul 2 09:08:50.161115 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2617) Jul 2 09:08:50.195321 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2615) Jul 2 09:08:50.636353 kubelet[2517]: E0702 09:08:50.636252 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:51.941713 kubelet[2517]: I0702 09:08:51.941556 2517 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 09:08:51.942120 kubelet[2517]: I0702 09:08:51.942042 2517 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 09:08:51.942150 containerd[1436]: time="2024-07-02T09:08:51.941864033Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 09:08:52.918972 kubelet[2517]: I0702 09:08:52.918930 2517 topology_manager.go:215] "Topology Admit Handler" podUID="e834f16a-e537-491d-aaa6-36885172f428" podNamespace="kube-system" podName="kube-proxy-649zf" Jul 2 09:08:52.933352 systemd[1]: Created slice kubepods-besteffort-pode834f16a_e537_491d_aaa6_36885172f428.slice - libcontainer container kubepods-besteffort-pode834f16a_e537_491d_aaa6_36885172f428.slice. Jul 2 09:08:52.998896 kubelet[2517]: I0702 09:08:52.998850 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e834f16a-e537-491d-aaa6-36885172f428-lib-modules\") pod \"kube-proxy-649zf\" (UID: \"e834f16a-e537-491d-aaa6-36885172f428\") " pod="kube-system/kube-proxy-649zf" Jul 2 09:08:52.998896 kubelet[2517]: I0702 09:08:52.998898 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e834f16a-e537-491d-aaa6-36885172f428-kube-proxy\") pod \"kube-proxy-649zf\" (UID: \"e834f16a-e537-491d-aaa6-36885172f428\") " pod="kube-system/kube-proxy-649zf" Jul 2 09:08:52.999288 kubelet[2517]: I0702 09:08:52.998929 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e834f16a-e537-491d-aaa6-36885172f428-xtables-lock\") pod \"kube-proxy-649zf\" (UID: \"e834f16a-e537-491d-aaa6-36885172f428\") " pod="kube-system/kube-proxy-649zf" Jul 2 09:08:52.999288 kubelet[2517]: I0702 09:08:52.998951 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-528q7\" (UniqueName: \"kubernetes.io/projected/e834f16a-e537-491d-aaa6-36885172f428-kube-api-access-528q7\") pod \"kube-proxy-649zf\" (UID: \"e834f16a-e537-491d-aaa6-36885172f428\") " pod="kube-system/kube-proxy-649zf" Jul 2 09:08:53.086623 kubelet[2517]: I0702 09:08:53.086589 2517 topology_manager.go:215] "Topology Admit Handler" podUID="e45aa07e-b704-4261-8c52-1b2e1dc7d517" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-889s4" Jul 2 09:08:53.095661 systemd[1]: Created slice kubepods-besteffort-pode45aa07e_b704_4261_8c52_1b2e1dc7d517.slice - libcontainer container kubepods-besteffort-pode45aa07e_b704_4261_8c52_1b2e1dc7d517.slice. 
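For each admitted pod, the kubelet's systemd cgroup driver creates a slice named after the QoS class plus the pod UID, with the UID's dashes escaped to underscores (since "-" separates slice levels in systemd unit names). That is why UID e834f16a-e537-491d-aaa6-36885172f428 appears above as kubepods-besteffort-pode834f16a_e537_491d_aaa6_36885172f428.slice. A stdlib-only sketch of that naming rule, reconstructed from the log lines rather than from kubelet source:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceForPod reproduces the naming visible in this log: QoS class plus
    // the pod UID with "-" escaped to "_".
    func sliceForPod(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(sliceForPod("besteffort", "e834f16a-e537-491d-aaa6-36885172f428"))
        // kubepods-besteffort-pode834f16a_e537_491d_aaa6_36885172f428.slice
    }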
Jul 2 09:08:53.099241 kubelet[2517]: I0702 09:08:53.099207 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr2r7\" (UniqueName: \"kubernetes.io/projected/e45aa07e-b704-4261-8c52-1b2e1dc7d517-kube-api-access-qr2r7\") pod \"tigera-operator-76c4974c85-889s4\" (UID: \"e45aa07e-b704-4261-8c52-1b2e1dc7d517\") " pod="tigera-operator/tigera-operator-76c4974c85-889s4" Jul 2 09:08:53.099320 kubelet[2517]: I0702 09:08:53.099266 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e45aa07e-b704-4261-8c52-1b2e1dc7d517-var-lib-calico\") pod \"tigera-operator-76c4974c85-889s4\" (UID: \"e45aa07e-b704-4261-8c52-1b2e1dc7d517\") " pod="tigera-operator/tigera-operator-76c4974c85-889s4" Jul 2 09:08:53.249110 kubelet[2517]: E0702 09:08:53.248984 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:53.249650 containerd[1436]: time="2024-07-02T09:08:53.249593920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-649zf,Uid:e834f16a-e537-491d-aaa6-36885172f428,Namespace:kube-system,Attempt:0,}" Jul 2 09:08:53.276961 containerd[1436]: time="2024-07-02T09:08:53.276417853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:08:53.276961 containerd[1436]: time="2024-07-02T09:08:53.276809527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:08:53.276961 containerd[1436]: time="2024-07-02T09:08:53.276828811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:08:53.276961 containerd[1436]: time="2024-07-02T09:08:53.276852175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:08:53.297228 systemd[1]: Started cri-containerd-2a9f74fa6212a2f3452c985e1efbc09d3e9f7554e987dfb1e0454f904f5acba1.scope - libcontainer container 2a9f74fa6212a2f3452c985e1efbc09d3e9f7554e987dfb1e0454f904f5acba1. 
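The dns.go:153 error recurring through these entries is the kubelet clamping the host's resolv.conf: the resolver supports only a limited number of nameservers (three, matching glibc's MAXNS), so extra entries are dropped and the applied line becomes "1.1.1.1 1.0.0.1 8.8.8.8". A rough stdlib sketch of that clamping; the limit constant and the parsing here are assumptions based on the message, not the kubelet's actual code:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc MAXNS; assumed to match the kubelet's limit

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("Nameserver limits exceeded, omitting %d entries\n",
                len(servers)-maxNameservers)
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }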
Jul 2 09:08:53.314972 containerd[1436]: time="2024-07-02T09:08:53.314929525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-649zf,Uid:e834f16a-e537-491d-aaa6-36885172f428,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a9f74fa6212a2f3452c985e1efbc09d3e9f7554e987dfb1e0454f904f5acba1\"" Jul 2 09:08:53.315676 kubelet[2517]: E0702 09:08:53.315650 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:53.317579 containerd[1436]: time="2024-07-02T09:08:53.317549783Z" level=info msg="CreateContainer within sandbox \"2a9f74fa6212a2f3452c985e1efbc09d3e9f7554e987dfb1e0454f904f5acba1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 09:08:53.332780 containerd[1436]: time="2024-07-02T09:08:53.332654010Z" level=info msg="CreateContainer within sandbox \"2a9f74fa6212a2f3452c985e1efbc09d3e9f7554e987dfb1e0454f904f5acba1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"80043308caa5fc78a496d8472eb618e3c99a14d04336c83eb11e8bdc41defbb0\"" Jul 2 09:08:53.333399 containerd[1436]: time="2024-07-02T09:08:53.333375707Z" level=info msg="StartContainer for \"80043308caa5fc78a496d8472eb618e3c99a14d04336c83eb11e8bdc41defbb0\"" Jul 2 09:08:53.359234 systemd[1]: Started cri-containerd-80043308caa5fc78a496d8472eb618e3c99a14d04336c83eb11e8bdc41defbb0.scope - libcontainer container 80043308caa5fc78a496d8472eb618e3c99a14d04336c83eb11e8bdc41defbb0. Jul 2 09:08:53.381511 containerd[1436]: time="2024-07-02T09:08:53.381459797Z" level=info msg="StartContainer for \"80043308caa5fc78a496d8472eb618e3c99a14d04336c83eb11e8bdc41defbb0\" returns successfully" Jul 2 09:08:53.398864 containerd[1436]: time="2024-07-02T09:08:53.398822053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-889s4,Uid:e45aa07e-b704-4261-8c52-1b2e1dc7d517,Namespace:tigera-operator,Attempt:0,}" Jul 2 09:08:53.423713 containerd[1436]: time="2024-07-02T09:08:53.423503860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:08:53.423713 containerd[1436]: time="2024-07-02T09:08:53.423565951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:08:53.423713 containerd[1436]: time="2024-07-02T09:08:53.423590356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:08:53.423713 containerd[1436]: time="2024-07-02T09:08:53.423606199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:08:53.443646 kubelet[2517]: E0702 09:08:53.443207 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:08:53.443711 systemd[1]: Started cri-containerd-36f42a80f2d1a384c8f99407c0851b7679d8ff9c986f5740e0d559e3dc59edf1.scope - libcontainer container 36f42a80f2d1a384c8f99407c0851b7679d8ff9c986f5740e0d559e3dc59edf1. 
Jul 2 09:08:53.486355 containerd[1436]: time="2024-07-02T09:08:53.482113628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-889s4,Uid:e45aa07e-b704-4261-8c52-1b2e1dc7d517,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"36f42a80f2d1a384c8f99407c0851b7679d8ff9c986f5740e0d559e3dc59edf1\"" Jul 2 09:08:53.492405 containerd[1436]: time="2024-07-02T09:08:53.492321846Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jul 2 09:08:54.528490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3851866749.mount: Deactivated successfully. Jul 2 09:08:55.296771 containerd[1436]: time="2024-07-02T09:08:55.296186521Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:55.296771 containerd[1436]: time="2024-07-02T09:08:55.296578188Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473662" Jul 2 09:08:55.297411 containerd[1436]: time="2024-07-02T09:08:55.297361644Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:55.299839 containerd[1436]: time="2024-07-02T09:08:55.299804746Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:08:55.300612 containerd[1436]: time="2024-07-02T09:08:55.300495145Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 1.80812277s" Jul 2 09:08:55.300612 containerd[1436]: time="2024-07-02T09:08:55.300529511Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jul 2 09:08:55.303171 containerd[1436]: time="2024-07-02T09:08:55.303121199Z" level=info msg="CreateContainer within sandbox \"36f42a80f2d1a384c8f99407c0851b7679d8ff9c986f5740e0d559e3dc59edf1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 2 09:08:55.314160 containerd[1436]: time="2024-07-02T09:08:55.314121861Z" level=info msg="CreateContainer within sandbox \"36f42a80f2d1a384c8f99407c0851b7679d8ff9c986f5740e0d559e3dc59edf1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"36538df4548ab47d06c2561fa3542094dcbd30b2745c90b6e160d90c5934ea93\"" Jul 2 09:08:55.314826 containerd[1436]: time="2024-07-02T09:08:55.314798178Z" level=info msg="StartContainer for \"36538df4548ab47d06c2561fa3542094dcbd30b2745c90b6e160d90c5934ea93\"" Jul 2 09:08:55.348259 systemd[1]: Started cri-containerd-36538df4548ab47d06c2561fa3542094dcbd30b2745c90b6e160d90c5934ea93.scope - libcontainer container 36538df4548ab47d06c2561fa3542094dcbd30b2745c90b6e160d90c5934ea93. 
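The tigera-operator pull recorded above reads roughly 19.5 MB and completes in about 1.81 s, with containerd emitting ImageCreate events for both the tag and the digest. For completeness, pulling through the CRI image service (rather than letting the kubelet trigger it) would look like this sketch, again illustrative and with the same socket assumption as before:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        img := runtimeapi.NewImageServiceClient(conn)

        start := time.Now()
        resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
            Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.34.0"},
        })
        if err != nil {
            panic(err)
        }
        // The log's "Pulled image ... in 1.80812277s" is the same wall-clock delta.
        fmt.Printf("pulled %s in %v\n", resp.ImageRef, time.Since(start))
    }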
Jul 2 09:08:55.372893 containerd[1436]: time="2024-07-02T09:08:55.371362995Z" level=info msg="StartContainer for \"36538df4548ab47d06c2561fa3542094dcbd30b2745c90b6e160d90c5934ea93\" returns successfully" Jul 2 09:08:55.457088 kubelet[2517]: I0702 09:08:55.456804 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-649zf" podStartSLOduration=3.456765837 podStartE2EDuration="3.456765837s" podCreationTimestamp="2024-07-02 09:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:08:53.455765585 +0000 UTC m=+16.135137562" watchObservedRunningTime="2024-07-02 09:08:55.456765837 +0000 UTC m=+18.136137774" Jul 2 09:08:57.416627 kubelet[2517]: I0702 09:08:57.416572 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-889s4" podStartSLOduration=2.6073639330000002 podStartE2EDuration="4.416507571s" podCreationTimestamp="2024-07-02 09:08:53 +0000 UTC" firstStartedPulling="2024-07-02 09:08:53.491689126 +0000 UTC m=+16.171061103" lastFinishedPulling="2024-07-02 09:08:55.300832764 +0000 UTC m=+17.980204741" observedRunningTime="2024-07-02 09:08:55.45759046 +0000 UTC m=+18.136962437" watchObservedRunningTime="2024-07-02 09:08:57.416507571 +0000 UTC m=+20.095879548" Jul 2 09:09:00.107329 kubelet[2517]: I0702 09:09:00.106922 2517 topology_manager.go:215] "Topology Admit Handler" podUID="68b36038-b199-47ed-8ec6-ec431621bf31" podNamespace="calico-system" podName="calico-typha-8464fb4567-q7dmk" Jul 2 09:09:00.124276 systemd[1]: Created slice kubepods-besteffort-pod68b36038_b199_47ed_8ec6_ec431621bf31.slice - libcontainer container kubepods-besteffort-pod68b36038_b199_47ed_8ec6_ec431621bf31.slice. Jul 2 09:09:00.152648 kubelet[2517]: I0702 09:09:00.152522 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/68b36038-b199-47ed-8ec6-ec431621bf31-typha-certs\") pod \"calico-typha-8464fb4567-q7dmk\" (UID: \"68b36038-b199-47ed-8ec6-ec431621bf31\") " pod="calico-system/calico-typha-8464fb4567-q7dmk" Jul 2 09:09:00.152648 kubelet[2517]: I0702 09:09:00.152570 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76mvz\" (UniqueName: \"kubernetes.io/projected/68b36038-b199-47ed-8ec6-ec431621bf31-kube-api-access-76mvz\") pod \"calico-typha-8464fb4567-q7dmk\" (UID: \"68b36038-b199-47ed-8ec6-ec431621bf31\") " pod="calico-system/calico-typha-8464fb4567-q7dmk" Jul 2 09:09:00.152648 kubelet[2517]: I0702 09:09:00.152598 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68b36038-b199-47ed-8ec6-ec431621bf31-tigera-ca-bundle\") pod \"calico-typha-8464fb4567-q7dmk\" (UID: \"68b36038-b199-47ed-8ec6-ec431621bf31\") " pod="calico-system/calico-typha-8464fb4567-q7dmk" Jul 2 09:09:00.157696 kubelet[2517]: I0702 09:09:00.157663 2517 topology_manager.go:215] "Topology Admit Handler" podUID="c7abbfa5-4e20-4f97-9581-b1f4a0f39897" podNamespace="calico-system" podName="calico-node-nc9dw" Jul 2 09:09:00.166446 systemd[1]: Created slice kubepods-besteffort-podc7abbfa5_4e20_4f97_9581_b1f4a0f39897.slice - libcontainer container kubepods-besteffort-podc7abbfa5_4e20_4f97_9581_b1f4a0f39897.slice. 
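The pod_startup_latency_tracker entries above separate the SLO duration from the end-to-end duration by subtracting image pull time: for tigera-operator, the 4.416507571s between podCreationTimestamp and watchObservedRunningTime, minus the 1.809143638s between firstStartedPulling and lastFinishedPulling, leaves exactly the reported podStartSLOduration of 2.607363933s (pods that pulled nothing, like kube-proxy, report identical SLO and E2E values with zeroed pull timestamps). A few lines of Go reproduce that arithmetic from the values copied out of the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        // Values from the tigera-operator-76c4974c85-889s4 entry above.
        created := parse("2024-07-02 09:08:53 +0000 UTC")
        running := parse("2024-07-02 09:08:57.416507571 +0000 UTC")
        firstPull := parse("2024-07-02 09:08:53.491689126 +0000 UTC")
        lastPull := parse("2024-07-02 09:08:55.300832764 +0000 UTC")

        e2e := running.Sub(created)          // 4.416507571s
        slo := e2e - lastPull.Sub(firstPull) // 2.607363933s: E2E minus pull time
        fmt.Println("podStartE2EDuration:", e2e, "podStartSLOduration:", slo)
    }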
Jul 2 09:09:00.253599 kubelet[2517]: I0702 09:09:00.253493 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c7abbfa5-4e20-4f97-9581-b1f4a0f39897-policysync\") pod \"calico-node-nc9dw\" (UID: \"c7abbfa5-4e20-4f97-9581-b1f4a0f39897\") " pod="calico-system/calico-node-nc9dw"
Jul 2 09:09:00.253599 kubelet[2517]: I0702 09:09:00.253538 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7abbfa5-4e20-4f97-9581-b1f4a0f39897-xtables-lock\") pod \"calico-node-nc9dw\" (UID: \"c7abbfa5-4e20-4f97-9581-b1f4a0f39897\") " pod="calico-system/calico-node-nc9dw"
Jul 2 09:09:00.253599 kubelet[2517]: I0702 09:09:00.253578 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c7abbfa5-4e20-4f97-9581-b1f4a0f39897-var-lib-calico\") pod \"calico-node-nc9dw\" (UID: \"c7abbfa5-4e20-4f97-9581-b1f4a0f39897\") " pod="calico-system/calico-node-nc9dw"
Jul 2 09:09:00.254381 kubelet[2517]: I0702 09:09:00.253665 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c7abbfa5-4e20-4f97-9581-b1f4a0f39897-flexvol-driver-host\") pod \"calico-node-nc9dw\" (UID: \"c7abbfa5-4e20-4f97-9581-b1f4a0f39897\") " pod="calico-system/calico-node-nc9dw"
Jul 2 09:09:00.254381 kubelet[2517]: I0702 09:09:00.253725 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c7abbfa5-4e20-4f97-9581-b1f4a0f39897-cni-bin-dir\") pod \"calico-node-nc9dw\" (UID: \"c7abbfa5-4e20-4f97-9581-b1f4a0f39897\") " pod="calico-system/calico-node-nc9dw"
Jul 2 09:09:00.254381 kubelet[2517]: I0702 09:09:00.253787 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7abbfa5-4e20-4f97-9581-b1f4a0f39897-lib-modules\") pod \"calico-node-nc9dw\" (UID: \"c7abbfa5-4e20-4f97-9581-b1f4a0f39897\") " pod="calico-system/calico-node-nc9dw"
Jul 2 09:09:00.254381 kubelet[2517]: I0702 09:09:00.254287 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c7abbfa5-4e20-4f97-9581-b1f4a0f39897-var-run-calico\") pod \"calico-node-nc9dw\" (UID: \"c7abbfa5-4e20-4f97-9581-b1f4a0f39897\") " pod="calico-system/calico-node-nc9dw"
Jul 2 09:09:00.254381 kubelet[2517]: I0702 09:09:00.254315 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c7abbfa5-4e20-4f97-9581-b1f4a0f39897-cni-log-dir\") pod \"calico-node-nc9dw\" (UID: \"c7abbfa5-4e20-4f97-9581-b1f4a0f39897\") " pod="calico-system/calico-node-nc9dw"
Jul 2 09:09:00.254492 kubelet[2517]: I0702 09:09:00.254336 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lnzc\" (UniqueName: \"kubernetes.io/projected/c7abbfa5-4e20-4f97-9581-b1f4a0f39897-kube-api-access-7lnzc\") pod \"calico-node-nc9dw\" (UID: \"c7abbfa5-4e20-4f97-9581-b1f4a0f39897\") " pod="calico-system/calico-node-nc9dw"
Jul 2 09:09:00.254492 kubelet[2517]: I0702 09:09:00.254359 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7abbfa5-4e20-4f97-9581-b1f4a0f39897-tigera-ca-bundle\") pod \"calico-node-nc9dw\" (UID: \"c7abbfa5-4e20-4f97-9581-b1f4a0f39897\") " pod="calico-system/calico-node-nc9dw"
Jul 2 09:09:00.254492 kubelet[2517]: I0702 09:09:00.254379 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c7abbfa5-4e20-4f97-9581-b1f4a0f39897-node-certs\") pod \"calico-node-nc9dw\" (UID: \"c7abbfa5-4e20-4f97-9581-b1f4a0f39897\") " pod="calico-system/calico-node-nc9dw"
Jul 2 09:09:00.254492 kubelet[2517]: I0702 09:09:00.254402 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c7abbfa5-4e20-4f97-9581-b1f4a0f39897-cni-net-dir\") pod \"calico-node-nc9dw\" (UID: \"c7abbfa5-4e20-4f97-9581-b1f4a0f39897\") " pod="calico-system/calico-node-nc9dw"
Jul 2 09:09:00.275448 kubelet[2517]: I0702 09:09:00.275394 2517 topology_manager.go:215] "Topology Admit Handler" podUID="31c1037b-708e-482e-8198-19d0b4cbcaf3" podNamespace="calico-system" podName="csi-node-driver-dvp65"
Jul 2 09:09:00.276132 kubelet[2517]: E0702 09:09:00.276100 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dvp65" podUID="31c1037b-708e-482e-8198-19d0b4cbcaf3"
Jul 2 09:09:00.354865 kubelet[2517]: I0702 09:09:00.354827 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/31c1037b-708e-482e-8198-19d0b4cbcaf3-kubelet-dir\") pod \"csi-node-driver-dvp65\" (UID: \"31c1037b-708e-482e-8198-19d0b4cbcaf3\") " pod="calico-system/csi-node-driver-dvp65"
Jul 2 09:09:00.355024 kubelet[2517]: I0702 09:09:00.354879 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/31c1037b-708e-482e-8198-19d0b4cbcaf3-registration-dir\") pod \"csi-node-driver-dvp65\" (UID: \"31c1037b-708e-482e-8198-19d0b4cbcaf3\") " pod="calico-system/csi-node-driver-dvp65"
Jul 2 09:09:00.355024 kubelet[2517]: I0702 09:09:00.354930 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72sxk\" (UniqueName: \"kubernetes.io/projected/31c1037b-708e-482e-8198-19d0b4cbcaf3-kube-api-access-72sxk\") pod \"csi-node-driver-dvp65\" (UID: \"31c1037b-708e-482e-8198-19d0b4cbcaf3\") " pod="calico-system/csi-node-driver-dvp65"
Jul 2 09:09:00.355024 kubelet[2517]: I0702 09:09:00.354975 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/31c1037b-708e-482e-8198-19d0b4cbcaf3-varrun\") pod \"csi-node-driver-dvp65\" (UID: \"31c1037b-708e-482e-8198-19d0b4cbcaf3\") " pod="calico-system/csi-node-driver-dvp65"
Jul 2 09:09:00.355024 kubelet[2517]: I0702 09:09:00.355000 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/31c1037b-708e-482e-8198-19d0b4cbcaf3-socket-dir\") pod \"csi-node-driver-dvp65\" (UID: \"31c1037b-708e-482e-8198-19d0b4cbcaf3\") " pod="calico-system/csi-node-driver-dvp65"
Jul 2 09:09:00.357379 kubelet[2517]: E0702 09:09:00.357230 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.357379 kubelet[2517]: W0702 09:09:00.357264 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.357379 kubelet[2517]: E0702 09:09:00.357283 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.357714 kubelet[2517]: E0702 09:09:00.357624 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.357714 kubelet[2517]: W0702 09:09:00.357636 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.357714 kubelet[2517]: E0702 09:09:00.357649 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.357908 kubelet[2517]: E0702 09:09:00.357886 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.358151 kubelet[2517]: W0702 09:09:00.357996 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.358151 kubelet[2517]: E0702 09:09:00.358018 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.358363 kubelet[2517]: E0702 09:09:00.358351 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.358435 kubelet[2517]: W0702 09:09:00.358424 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.358492 kubelet[2517]: E0702 09:09:00.358477 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.362473 kubelet[2517]: E0702 09:09:00.362448 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.362532 kubelet[2517]: W0702 09:09:00.362466 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.362532 kubelet[2517]: E0702 09:09:00.362498 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.369892 kubelet[2517]: E0702 09:09:00.369870 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.369892 kubelet[2517]: W0702 09:09:00.369887 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.369992 kubelet[2517]: E0702 09:09:00.369903 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.428356 kubelet[2517]: E0702 09:09:00.428319 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:09:00.430648 containerd[1436]: time="2024-07-02T09:09:00.430422167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8464fb4567-q7dmk,Uid:68b36038-b199-47ed-8ec6-ec431621bf31,Namespace:calico-system,Attempt:0,}"
Jul 2 09:09:00.453141 containerd[1436]: time="2024-07-02T09:09:00.453036266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:09:00.453529 containerd[1436]: time="2024-07-02T09:09:00.453132119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:09:00.453529 containerd[1436]: time="2024-07-02T09:09:00.453154922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:09:00.453529 containerd[1436]: time="2024-07-02T09:09:00.453168244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:09:00.456756 kubelet[2517]: E0702 09:09:00.456589 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.456756 kubelet[2517]: W0702 09:09:00.456641 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.456756 kubelet[2517]: E0702 09:09:00.456671 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.460290 kubelet[2517]: E0702 09:09:00.458436 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.460290 kubelet[2517]: W0702 09:09:00.458447 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.460290 kubelet[2517]: E0702 09:09:00.458473 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.460290 kubelet[2517]: E0702 09:09:00.458716 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.460290 kubelet[2517]: W0702 09:09:00.458727 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.460290 kubelet[2517]: E0702 09:09:00.458741 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.460290 kubelet[2517]: E0702 09:09:00.458957 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.460290 kubelet[2517]: W0702 09:09:00.458969 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.460290 kubelet[2517]: E0702 09:09:00.458982 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.460290 kubelet[2517]: E0702 09:09:00.459184 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.461366 kubelet[2517]: W0702 09:09:00.459196 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.461366 kubelet[2517]: E0702 09:09:00.459216 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.461366 kubelet[2517]: E0702 09:09:00.459352 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.461366 kubelet[2517]: W0702 09:09:00.459359 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.461366 kubelet[2517]: E0702 09:09:00.459414 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.461366 kubelet[2517]: E0702 09:09:00.459529 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.461366 kubelet[2517]: W0702 09:09:00.459538 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.461366 kubelet[2517]: E0702 09:09:00.459580 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.461366 kubelet[2517]: E0702 09:09:00.459687 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.461366 kubelet[2517]: W0702 09:09:00.459694 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.461559 kubelet[2517]: E0702 09:09:00.459707 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.461559 kubelet[2517]: E0702 09:09:00.459854 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.461559 kubelet[2517]: W0702 09:09:00.459862 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.461559 kubelet[2517]: E0702 09:09:00.459874 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.461559 kubelet[2517]: E0702 09:09:00.460022 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.461559 kubelet[2517]: W0702 09:09:00.460030 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.461559 kubelet[2517]: E0702 09:09:00.460041 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.461559 kubelet[2517]: E0702 09:09:00.460250 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.461559 kubelet[2517]: W0702 09:09:00.460260 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.461559 kubelet[2517]: E0702 09:09:00.460294 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.461744 kubelet[2517]: E0702 09:09:00.461184 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.461744 kubelet[2517]: W0702 09:09:00.461197 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.461744 kubelet[2517]: E0702 09:09:00.461241 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.464358 kubelet[2517]: E0702 09:09:00.464333 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.464358 kubelet[2517]: W0702 09:09:00.464349 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.464710 kubelet[2517]: E0702 09:09:00.464406 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.464710 kubelet[2517]: E0702 09:09:00.464556 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.464710 kubelet[2517]: W0702 09:09:00.464565 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.464710 kubelet[2517]: E0702 09:09:00.464607 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.465088 kubelet[2517]: E0702 09:09:00.464751 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.465088 kubelet[2517]: W0702 09:09:00.464763 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.465088 kubelet[2517]: E0702 09:09:00.464806 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.465088 kubelet[2517]: E0702 09:09:00.464906 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.465088 kubelet[2517]: W0702 09:09:00.464914 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.465088 kubelet[2517]: E0702 09:09:00.464945 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.465088 kubelet[2517]: E0702 09:09:00.465086 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.465088 kubelet[2517]: W0702 09:09:00.465096 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.465389 kubelet[2517]: E0702 09:09:00.465114 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.465389 kubelet[2517]: E0702 09:09:00.465257 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.465389 kubelet[2517]: W0702 09:09:00.465265 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.465389 kubelet[2517]: E0702 09:09:00.465342 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.465816 kubelet[2517]: E0702 09:09:00.465798 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.465816 kubelet[2517]: W0702 09:09:00.465812 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.465915 kubelet[2517]: E0702 09:09:00.465829 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.466223 kubelet[2517]: E0702 09:09:00.466069 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.466223 kubelet[2517]: W0702 09:09:00.466090 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.466223 kubelet[2517]: E0702 09:09:00.466108 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.466346 kubelet[2517]: E0702 09:09:00.466309 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.466346 kubelet[2517]: W0702 09:09:00.466319 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.466346 kubelet[2517]: E0702 09:09:00.466331 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.466780 kubelet[2517]: E0702 09:09:00.466735 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.466780 kubelet[2517]: W0702 09:09:00.466749 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.466861 kubelet[2517]: E0702 09:09:00.466783 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.467120 kubelet[2517]: E0702 09:09:00.466930 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.467120 kubelet[2517]: W0702 09:09:00.466948 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.467120 kubelet[2517]: E0702 09:09:00.466959 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.467798 kubelet[2517]: E0702 09:09:00.467262 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.467798 kubelet[2517]: W0702 09:09:00.467277 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.467798 kubelet[2517]: E0702 09:09:00.467290 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.467965 kubelet[2517]: E0702 09:09:00.467865 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.467965 kubelet[2517]: W0702 09:09:00.467877 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.467965 kubelet[2517]: E0702 09:09:00.467890 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.469819 kubelet[2517]: E0702 09:09:00.469799 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:09:00.471398 containerd[1436]: time="2024-07-02T09:09:00.470290380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nc9dw,Uid:c7abbfa5-4e20-4f97-9581-b1f4a0f39897,Namespace:calico-system,Attempt:0,}"
Jul 2 09:09:00.474078 kubelet[2517]: E0702 09:09:00.473988 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:00.474180 kubelet[2517]: W0702 09:09:00.474083 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:00.474180 kubelet[2517]: E0702 09:09:00.474103 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:00.481228 systemd[1]: Started cri-containerd-c8ba41efd2f65c0e834bd6cc8dd37f7e455104ec961938c3e0fee3f052e2d000.scope - libcontainer container c8ba41efd2f65c0e834bd6cc8dd37f7e455104ec961938c3e0fee3f052e2d000.
Jul 2 09:09:00.504578 containerd[1436]: time="2024-07-02T09:09:00.504473445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:09:00.504578 containerd[1436]: time="2024-07-02T09:09:00.504537694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:09:00.504766 containerd[1436]: time="2024-07-02T09:09:00.504557296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:09:00.504766 containerd[1436]: time="2024-07-02T09:09:00.504576859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:09:00.524553 containerd[1436]: time="2024-07-02T09:09:00.523654627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8464fb4567-q7dmk,Uid:68b36038-b199-47ed-8ec6-ec431621bf31,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8ba41efd2f65c0e834bd6cc8dd37f7e455104ec961938c3e0fee3f052e2d000\""
Jul 2 09:09:00.524685 kubelet[2517]: E0702 09:09:00.524218 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:09:00.525696 containerd[1436]: time="2024-07-02T09:09:00.525506204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\""
Jul 2 09:09:00.528233 systemd[1]: Started cri-containerd-66863acd55dec5359662542e4fe81af4e10d217f7e7122c8907b4447e092559a.scope - libcontainer container 66863acd55dec5359662542e4fe81af4e10d217f7e7122c8907b4447e092559a.
Jul 2 09:09:00.550654 containerd[1436]: time="2024-07-02T09:09:00.550599206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nc9dw,Uid:c7abbfa5-4e20-4f97-9581-b1f4a0f39897,Namespace:calico-system,Attempt:0,} returns sandbox id \"66863acd55dec5359662542e4fe81af4e10d217f7e7122c8907b4447e092559a\""
Jul 2 09:09:00.551267 kubelet[2517]: E0702 09:09:00.551247 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:09:01.405537 kubelet[2517]: E0702 09:09:01.405489 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dvp65" podUID="31c1037b-708e-482e-8198-19d0b4cbcaf3"
Jul 2 09:09:03.079463 containerd[1436]: time="2024-07-02T09:09:03.079088163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:09:03.079834 containerd[1436]: time="2024-07-02T09:09:03.079527697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513"
Jul 2 09:09:03.080480 containerd[1436]: time="2024-07-02T09:09:03.080442010Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:09:03.084433 containerd[1436]: time="2024-07-02T09:09:03.084267320Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 2.55777642s"
Jul 2 09:09:03.084433 containerd[1436]: time="2024-07-02T09:09:03.084317526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\""
Jul 2 09:09:03.085073 containerd[1436]: time="2024-07-02T09:09:03.084766062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:09:03.086929 containerd[1436]: time="2024-07-02T09:09:03.086892443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\""
Jul 2 09:09:03.094969 containerd[1436]: time="2024-07-02T09:09:03.094922351Z" level=info msg="CreateContainer within sandbox \"c8ba41efd2f65c0e834bd6cc8dd37f7e455104ec961938c3e0fee3f052e2d000\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 2 09:09:03.107026 containerd[1436]: time="2024-07-02T09:09:03.106984115Z" level=info msg="CreateContainer within sandbox \"c8ba41efd2f65c0e834bd6cc8dd37f7e455104ec961938c3e0fee3f052e2d000\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4f503db80a007ee0c55ec6c31e93b85b9cd10534bf876fe40819b19c3a8b7522\""
Jul 2 09:09:03.108558 containerd[1436]: time="2024-07-02T09:09:03.107542664Z" level=info msg="StartContainer for \"4f503db80a007ee0c55ec6c31e93b85b9cd10534bf876fe40819b19c3a8b7522\""
Jul 2 09:09:03.137193 systemd[1]: Started cri-containerd-4f503db80a007ee0c55ec6c31e93b85b9cd10534bf876fe40819b19c3a8b7522.scope - libcontainer container 4f503db80a007ee0c55ec6c31e93b85b9cd10534bf876fe40819b19c3a8b7522.
Jul 2 09:09:03.174860 containerd[1436]: time="2024-07-02T09:09:03.173243988Z" level=info msg="StartContainer for \"4f503db80a007ee0c55ec6c31e93b85b9cd10534bf876fe40819b19c3a8b7522\" returns successfully"
Jul 2 09:09:03.408104 kubelet[2517]: E0702 09:09:03.407239 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dvp65" podUID="31c1037b-708e-482e-8198-19d0b4cbcaf3"
Jul 2 09:09:03.469099 kubelet[2517]: E0702 09:09:03.469060 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 09:09:03.478531 kubelet[2517]: I0702 09:09:03.478389 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-8464fb4567-q7dmk" podStartSLOduration=0.917925238 podStartE2EDuration="3.478329208s" podCreationTimestamp="2024-07-02 09:09:00 +0000 UTC" firstStartedPulling="2024-07-02 09:09:00.524772582 +0000 UTC m=+23.204144519" lastFinishedPulling="2024-07-02 09:09:03.085176512 +0000 UTC m=+25.764548489" observedRunningTime="2024-07-02 09:09:03.477157224 +0000 UTC m=+26.156529201" watchObservedRunningTime="2024-07-02 09:09:03.478329208 +0000 UTC m=+26.157701225"
Jul 2 09:09:03.563082 kubelet[2517]: E0702 09:09:03.563042 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.563082 kubelet[2517]: W0702 09:09:03.563076 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.563325 kubelet[2517]: E0702 09:09:03.563097 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.563382 kubelet[2517]: E0702 09:09:03.563325 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.563382 kubelet[2517]: W0702 09:09:03.563335 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.563382 kubelet[2517]: E0702 09:09:03.563374 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.563736 kubelet[2517]: E0702 09:09:03.563713 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.563736 kubelet[2517]: W0702 09:09:03.563727 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.563848 kubelet[2517]: E0702 09:09:03.563742 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.563948 kubelet[2517]: E0702 09:09:03.563933 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.563948 kubelet[2517]: W0702 09:09:03.563944 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.564140 kubelet[2517]: E0702 09:09:03.563958 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.564140 kubelet[2517]: E0702 09:09:03.564157 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.564140 kubelet[2517]: W0702 09:09:03.564169 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.564140 kubelet[2517]: E0702 09:09:03.564179 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.564413 kubelet[2517]: E0702 09:09:03.564400 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.564413 kubelet[2517]: W0702 09:09:03.564411 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.564473 kubelet[2517]: E0702 09:09:03.564423 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.564599 kubelet[2517]: E0702 09:09:03.564588 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.564599 kubelet[2517]: W0702 09:09:03.564601 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.564663 kubelet[2517]: E0702 09:09:03.564615 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.564789 kubelet[2517]: E0702 09:09:03.564780 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.564789 kubelet[2517]: W0702 09:09:03.564789 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.564847 kubelet[2517]: E0702 09:09:03.564798 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.564964 kubelet[2517]: E0702 09:09:03.564955 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.564964 kubelet[2517]: W0702 09:09:03.564964 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.565018 kubelet[2517]: E0702 09:09:03.564973 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.565113 kubelet[2517]: E0702 09:09:03.565103 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.565113 kubelet[2517]: W0702 09:09:03.565112 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.565188 kubelet[2517]: E0702 09:09:03.565126 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.565276 kubelet[2517]: E0702 09:09:03.565232 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.565276 kubelet[2517]: W0702 09:09:03.565246 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.565276 kubelet[2517]: E0702 09:09:03.565256 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.565668 kubelet[2517]: E0702 09:09:03.565386 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.565668 kubelet[2517]: W0702 09:09:03.565395 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.565668 kubelet[2517]: E0702 09:09:03.565406 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.565668 kubelet[2517]: E0702 09:09:03.565565 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.565668 kubelet[2517]: W0702 09:09:03.565572 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.565668 kubelet[2517]: E0702 09:09:03.565583 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.565809 kubelet[2517]: E0702 09:09:03.565748 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.565809 kubelet[2517]: W0702 09:09:03.565754 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.565809 kubelet[2517]: E0702 09:09:03.565763 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.565904 kubelet[2517]: E0702 09:09:03.565894 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.565904 kubelet[2517]: W0702 09:09:03.565904 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.565955 kubelet[2517]: E0702 09:09:03.565913 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.582425 kubelet[2517]: E0702 09:09:03.582364 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.582425 kubelet[2517]: W0702 09:09:03.582384 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.582425 kubelet[2517]: E0702 09:09:03.582399 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.582654 kubelet[2517]: E0702 09:09:03.582643 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.582654 kubelet[2517]: W0702 09:09:03.582654 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.582731 kubelet[2517]: E0702 09:09:03.582672 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.582867 kubelet[2517]: E0702 09:09:03.582857 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.582899 kubelet[2517]: W0702 09:09:03.582868 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.582899 kubelet[2517]: E0702 09:09:03.582883 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.583071 kubelet[2517]: E0702 09:09:03.583045 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.583100 kubelet[2517]: W0702 09:09:03.583071 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.583100 kubelet[2517]: E0702 09:09:03.583086 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.583288 kubelet[2517]: E0702 09:09:03.583278 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.583326 kubelet[2517]: W0702 09:09:03.583288 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.583326 kubelet[2517]: E0702 09:09:03.583311 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.583712 kubelet[2517]: E0702 09:09:03.583700 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.583751 kubelet[2517]: W0702 09:09:03.583712 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.583751 kubelet[2517]: E0702 09:09:03.583731 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.583924 kubelet[2517]: E0702 09:09:03.583913 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.583924 kubelet[2517]: W0702 09:09:03.583923 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.583987 kubelet[2517]: E0702 09:09:03.583935 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.584136 kubelet[2517]: E0702 09:09:03.584122 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.584136 kubelet[2517]: W0702 09:09:03.584134 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.584186 kubelet[2517]: E0702 09:09:03.584148 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.584304 kubelet[2517]: E0702 09:09:03.584294 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.584304 kubelet[2517]: W0702 09:09:03.584304 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.584439 kubelet[2517]: E0702 09:09:03.584314 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.584474 kubelet[2517]: E0702 09:09:03.584458 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.584474 kubelet[2517]: W0702 09:09:03.584465 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.584474 kubelet[2517]: E0702 09:09:03.584480 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.584650 kubelet[2517]: E0702 09:09:03.584637 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.584650 kubelet[2517]: W0702 09:09:03.584648 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.584707 kubelet[2517]: E0702 09:09:03.584663 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.584904 kubelet[2517]: E0702 09:09:03.584890 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.584904 kubelet[2517]: W0702 09:09:03.584903 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.584961 kubelet[2517]: E0702 09:09:03.584921 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.585116 kubelet[2517]: E0702 09:09:03.585107 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.585116 kubelet[2517]: W0702 09:09:03.585116 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.585183 kubelet[2517]: E0702 09:09:03.585130 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.585271 kubelet[2517]: E0702 09:09:03.585262 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.585271 kubelet[2517]: W0702 09:09:03.585270 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.585328 kubelet[2517]: E0702 09:09:03.585283 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.585419 kubelet[2517]: E0702 09:09:03.585410 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.585419 kubelet[2517]: W0702 09:09:03.585418 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.585476 kubelet[2517]: E0702 09:09:03.585431 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.585579 kubelet[2517]: E0702 09:09:03.585566 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.585579 kubelet[2517]: W0702 09:09:03.585574 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.585628 kubelet[2517]: E0702 09:09:03.585590 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.585816 kubelet[2517]: E0702 09:09:03.585802 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.585849 kubelet[2517]: W0702 09:09:03.585817 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.585849 kubelet[2517]: E0702 09:09:03.585834 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:03.585993 kubelet[2517]: E0702 09:09:03.585982 2517 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 2 09:09:03.585993 kubelet[2517]: W0702 09:09:03.585993 2517 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 2 09:09:03.586040 kubelet[2517]: E0702 09:09:03.586004 2517 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 2 09:09:04.089563 systemd[1]: run-containerd-runc-k8s.io-4f503db80a007ee0c55ec6c31e93b85b9cd10534bf876fe40819b19c3a8b7522-runc.bXwpqA.mount: Deactivated successfully.
Jul 2 09:09:04.376131 containerd[1436]: time="2024-07-02T09:09:04.375996816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:09:04.376503 containerd[1436]: time="2024-07-02T09:09:04.376456870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009"
Jul 2 09:09:04.377138 containerd[1436]: time="2024-07-02T09:09:04.377102107Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:09:04.379006 containerd[1436]: time="2024-07-02T09:09:04.378974808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:09:04.380445 containerd[1436]: time="2024-07-02T09:09:04.380398977Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.293456087s"
Jul 2 09:09:04.380498 containerd[1436]: time="2024-07-02T09:09:04.380442302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\""
Jul 2 09:09:04.382997 containerd[1436]: time="2024-07-02T09:09:04.382948279Z" level=info msg="CreateContainer within sandbox \"66863acd55dec5359662542e4fe81af4e10d217f7e7122c8907b4447e092559a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 2 09:09:04.394662 containerd[1436]: time="2024-07-02T09:09:04.394616221Z" level=info msg="CreateContainer within sandbox \"66863acd55dec5359662542e4fe81af4e10d217f7e7122c8907b4447e092559a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a89eaf200833cb674f7de7de00657ce31ebf26c6115d0ae594378dcbbd81c35d\""
Jul 2 09:09:04.395079 containerd[1436]: time="2024-07-02T09:09:04.395019749Z" level=info msg="StartContainer for \"a89eaf200833cb674f7de7de00657ce31ebf26c6115d0ae594378dcbbd81c35d\""
Jul 2 09:09:04.430211 systemd[1]: Started cri-containerd-a89eaf200833cb674f7de7de00657ce31ebf26c6115d0ae594378dcbbd81c35d.scope - libcontainer container a89eaf200833cb674f7de7de00657ce31ebf26c6115d0ae594378dcbbd81c35d.
Jul 2 09:09:04.459207 containerd[1436]: time="2024-07-02T09:09:04.459058494Z" level=info msg="StartContainer for \"a89eaf200833cb674f7de7de00657ce31ebf26c6115d0ae594378dcbbd81c35d\" returns successfully" Jul 2 09:09:04.474615 kubelet[2517]: I0702 09:09:04.474559 2517 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 09:09:04.474917 kubelet[2517]: E0702 09:09:04.474809 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:04.475170 kubelet[2517]: E0702 09:09:04.475140 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:04.504215 systemd[1]: cri-containerd-a89eaf200833cb674f7de7de00657ce31ebf26c6115d0ae594378dcbbd81c35d.scope: Deactivated successfully. Jul 2 09:09:04.712942 containerd[1436]: time="2024-07-02T09:09:04.712808028Z" level=info msg="shim disconnected" id=a89eaf200833cb674f7de7de00657ce31ebf26c6115d0ae594378dcbbd81c35d namespace=k8s.io Jul 2 09:09:04.712942 containerd[1436]: time="2024-07-02T09:09:04.712864434Z" level=warning msg="cleaning up after shim disconnected" id=a89eaf200833cb674f7de7de00657ce31ebf26c6115d0ae594378dcbbd81c35d namespace=k8s.io Jul 2 09:09:04.712942 containerd[1436]: time="2024-07-02T09:09:04.712873795Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:09:05.089614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a89eaf200833cb674f7de7de00657ce31ebf26c6115d0ae594378dcbbd81c35d-rootfs.mount: Deactivated successfully. Jul 2 09:09:05.406457 kubelet[2517]: E0702 09:09:05.406307 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dvp65" podUID="31c1037b-708e-482e-8198-19d0b4cbcaf3" Jul 2 09:09:05.478373 kubelet[2517]: E0702 09:09:05.477975 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:05.479383 containerd[1436]: time="2024-07-02T09:09:05.478758124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 09:09:07.405417 kubelet[2517]: E0702 09:09:07.405368 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dvp65" podUID="31c1037b-708e-482e-8198-19d0b4cbcaf3" Jul 2 09:09:08.798174 containerd[1436]: time="2024-07-02T09:09:08.798117513Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:09:08.799084 containerd[1436]: time="2024-07-02T09:09:08.799030087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jul 2 09:09:08.799601 containerd[1436]: time="2024-07-02T09:09:08.799554381Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:09:08.801363 containerd[1436]: 
time="2024-07-02T09:09:08.801334844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:09:08.802902 containerd[1436]: time="2024-07-02T09:09:08.802787713Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 3.323966982s" Jul 2 09:09:08.802902 containerd[1436]: time="2024-07-02T09:09:08.802822996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jul 2 09:09:08.806894 containerd[1436]: time="2024-07-02T09:09:08.806762161Z" level=info msg="CreateContainer within sandbox \"66863acd55dec5359662542e4fe81af4e10d217f7e7122c8907b4447e092559a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 09:09:08.818663 containerd[1436]: time="2024-07-02T09:09:08.818624900Z" level=info msg="CreateContainer within sandbox \"66863acd55dec5359662542e4fe81af4e10d217f7e7122c8907b4447e092559a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7bd859e6dd1c905d9975612d9251aed6790c4741eaa0cc293c0d74db52e398b9\"" Jul 2 09:09:08.819308 containerd[1436]: time="2024-07-02T09:09:08.819039822Z" level=info msg="StartContainer for \"7bd859e6dd1c905d9975612d9251aed6790c4741eaa0cc293c0d74db52e398b9\"" Jul 2 09:09:08.848270 systemd[1]: Started cri-containerd-7bd859e6dd1c905d9975612d9251aed6790c4741eaa0cc293c0d74db52e398b9.scope - libcontainer container 7bd859e6dd1c905d9975612d9251aed6790c4741eaa0cc293c0d74db52e398b9. Jul 2 09:09:08.974400 containerd[1436]: time="2024-07-02T09:09:08.974359496Z" level=info msg="StartContainer for \"7bd859e6dd1c905d9975612d9251aed6790c4741eaa0cc293c0d74db52e398b9\" returns successfully" Jul 2 09:09:09.391391 containerd[1436]: time="2024-07-02T09:09:09.391328302Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 09:09:09.393326 systemd[1]: cri-containerd-7bd859e6dd1c905d9975612d9251aed6790c4741eaa0cc293c0d74db52e398b9.scope: Deactivated successfully. Jul 2 09:09:09.405924 kubelet[2517]: E0702 09:09:09.405883 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dvp65" podUID="31c1037b-708e-482e-8198-19d0b4cbcaf3" Jul 2 09:09:09.418730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7bd859e6dd1c905d9975612d9251aed6790c4741eaa0cc293c0d74db52e398b9-rootfs.mount: Deactivated successfully. 
Jul 2 09:09:09.463136 containerd[1436]: time="2024-07-02T09:09:09.463027628Z" level=info msg="shim disconnected" id=7bd859e6dd1c905d9975612d9251aed6790c4741eaa0cc293c0d74db52e398b9 namespace=k8s.io Jul 2 09:09:09.463136 containerd[1436]: time="2024-07-02T09:09:09.463114596Z" level=warning msg="cleaning up after shim disconnected" id=7bd859e6dd1c905d9975612d9251aed6790c4741eaa0cc293c0d74db52e398b9 namespace=k8s.io Jul 2 09:09:09.463136 containerd[1436]: time="2024-07-02T09:09:09.463126438Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 09:09:09.487961 kubelet[2517]: I0702 09:09:09.487593 2517 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 09:09:09.491325 kubelet[2517]: E0702 09:09:09.491170 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:09.492152 containerd[1436]: time="2024-07-02T09:09:09.492100277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 09:09:09.511929 kubelet[2517]: I0702 09:09:09.511875 2517 topology_manager.go:215] "Topology Admit Handler" podUID="86a33c17-cda9-4b34-99d8-e954031b3f4d" podNamespace="kube-system" podName="coredns-76f75df574-zr8hg" Jul 2 09:09:09.516530 kubelet[2517]: I0702 09:09:09.516478 2517 topology_manager.go:215] "Topology Admit Handler" podUID="9d2a366f-a8bd-4418-b8b4-ae8fc8bcb2eb" podNamespace="kube-system" podName="coredns-76f75df574-l4ngm" Jul 2 09:09:09.517519 kubelet[2517]: I0702 09:09:09.517162 2517 topology_manager.go:215] "Topology Admit Handler" podUID="804476dd-f79b-4477-bf83-c65ba06e121f" podNamespace="calico-system" podName="calico-kube-controllers-84c6d665d6-qm2nc" Jul 2 09:09:09.522272 systemd[1]: Created slice kubepods-burstable-pod86a33c17_cda9_4b34_99d8_e954031b3f4d.slice - libcontainer container kubepods-burstable-pod86a33c17_cda9_4b34_99d8_e954031b3f4d.slice. Jul 2 09:09:09.529359 systemd[1]: Created slice kubepods-burstable-pod9d2a366f_a8bd_4418_b8b4_ae8fc8bcb2eb.slice - libcontainer container kubepods-burstable-pod9d2a366f_a8bd_4418_b8b4_ae8fc8bcb2eb.slice. Jul 2 09:09:09.536838 systemd[1]: Created slice kubepods-besteffort-pod804476dd_f79b_4477_bf83_c65ba06e121f.slice - libcontainer container kubepods-besteffort-pod804476dd_f79b_4477_bf83_c65ba06e121f.slice. 
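The recurring dns.go:153 errors are the kubelet coping with a host resolv.conf that lists more nameservers than the libc resolver honours: glibc uses at most three nameserver entries, so the kubelet keeps the first three (1.1.1.1 1.0.0.1 8.8.8.8 here) when composing pod resolv.conf and logs the rest as omitted. A host file like the following would produce exactly this message; the fourth entry is hypothetical, since only the three applied servers appear in the log:

    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4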
Jul 2 09:09:09.628612 kubelet[2517]: I0702 09:09:09.628568 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/804476dd-f79b-4477-bf83-c65ba06e121f-tigera-ca-bundle\") pod \"calico-kube-controllers-84c6d665d6-qm2nc\" (UID: \"804476dd-f79b-4477-bf83-c65ba06e121f\") " pod="calico-system/calico-kube-controllers-84c6d665d6-qm2nc" Jul 2 09:09:09.628612 kubelet[2517]: I0702 09:09:09.628620 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxhds\" (UniqueName: \"kubernetes.io/projected/9d2a366f-a8bd-4418-b8b4-ae8fc8bcb2eb-kube-api-access-jxhds\") pod \"coredns-76f75df574-l4ngm\" (UID: \"9d2a366f-a8bd-4418-b8b4-ae8fc8bcb2eb\") " pod="kube-system/coredns-76f75df574-l4ngm" Jul 2 09:09:09.628777 kubelet[2517]: I0702 09:09:09.628647 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86a33c17-cda9-4b34-99d8-e954031b3f4d-config-volume\") pod \"coredns-76f75df574-zr8hg\" (UID: \"86a33c17-cda9-4b34-99d8-e954031b3f4d\") " pod="kube-system/coredns-76f75df574-zr8hg" Jul 2 09:09:09.628777 kubelet[2517]: I0702 09:09:09.628669 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrdb6\" (UniqueName: \"kubernetes.io/projected/86a33c17-cda9-4b34-99d8-e954031b3f4d-kube-api-access-qrdb6\") pod \"coredns-76f75df574-zr8hg\" (UID: \"86a33c17-cda9-4b34-99d8-e954031b3f4d\") " pod="kube-system/coredns-76f75df574-zr8hg" Jul 2 09:09:09.628777 kubelet[2517]: I0702 09:09:09.628693 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb758\" (UniqueName: \"kubernetes.io/projected/804476dd-f79b-4477-bf83-c65ba06e121f-kube-api-access-lb758\") pod \"calico-kube-controllers-84c6d665d6-qm2nc\" (UID: \"804476dd-f79b-4477-bf83-c65ba06e121f\") " pod="calico-system/calico-kube-controllers-84c6d665d6-qm2nc" Jul 2 09:09:09.628777 kubelet[2517]: I0702 09:09:09.628760 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d2a366f-a8bd-4418-b8b4-ae8fc8bcb2eb-config-volume\") pod \"coredns-76f75df574-l4ngm\" (UID: \"9d2a366f-a8bd-4418-b8b4-ae8fc8bcb2eb\") " pod="kube-system/coredns-76f75df574-l4ngm" Jul 2 09:09:09.826604 kubelet[2517]: E0702 09:09:09.826567 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:09.827982 containerd[1436]: time="2024-07-02T09:09:09.827705470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zr8hg,Uid:86a33c17-cda9-4b34-99d8-e954031b3f4d,Namespace:kube-system,Attempt:0,}" Jul 2 09:09:09.833385 kubelet[2517]: E0702 09:09:09.832817 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:09.834881 containerd[1436]: time="2024-07-02T09:09:09.833606097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l4ngm,Uid:9d2a366f-a8bd-4418-b8b4-ae8fc8bcb2eb,Namespace:kube-system,Attempt:0,}" Jul 2 09:09:09.845758 containerd[1436]: time="2024-07-02T09:09:09.845716900Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-kube-controllers-84c6d665d6-qm2nc,Uid:804476dd-f79b-4477-bf83-c65ba06e121f,Namespace:calico-system,Attempt:0,}" Jul 2 09:09:10.084994 containerd[1436]: time="2024-07-02T09:09:10.084494410Z" level=error msg="Failed to destroy network for sandbox \"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:09:10.085429 containerd[1436]: time="2024-07-02T09:09:10.085296567Z" level=error msg="encountered an error cleaning up failed sandbox \"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:09:10.085500 containerd[1436]: time="2024-07-02T09:09:10.085448982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zr8hg,Uid:86a33c17-cda9-4b34-99d8-e954031b3f4d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:09:10.085788 kubelet[2517]: E0702 09:09:10.085767 2517 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:09:10.086562 kubelet[2517]: E0702 09:09:10.086154 2517 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zr8hg" Jul 2 09:09:10.086562 kubelet[2517]: E0702 09:09:10.086186 2517 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zr8hg" Jul 2 09:09:10.086562 kubelet[2517]: E0702 09:09:10.086246 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-zr8hg_kube-system(86a33c17-cda9-4b34-99d8-e954031b3f4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-zr8hg_kube-system(86a33c17-cda9-4b34-99d8-e954031b3f4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-zr8hg" podUID="86a33c17-cda9-4b34-99d8-e954031b3f4d" Jul 2 09:09:10.087513 containerd[1436]: time="2024-07-02T09:09:10.087465456Z" level=error msg="Failed to destroy network for sandbox \"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:09:10.088080 containerd[1436]: time="2024-07-02T09:09:10.087743523Z" level=error msg="encountered an error cleaning up failed sandbox \"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:09:10.088080 containerd[1436]: time="2024-07-02T09:09:10.087837252Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l4ngm,Uid:9d2a366f-a8bd-4418-b8b4-ae8fc8bcb2eb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:09:10.088201 kubelet[2517]: E0702 09:09:10.088000 2517 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:09:10.088201 kubelet[2517]: E0702 09:09:10.088039 2517 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-l4ngm" Jul 2 09:09:10.088201 kubelet[2517]: E0702 09:09:10.088087 2517 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-l4ngm" Jul 2 09:09:10.088286 kubelet[2517]: E0702 09:09:10.088141 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-l4ngm_kube-system(9d2a366f-a8bd-4418-b8b4-ae8fc8bcb2eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-l4ngm_kube-system(9d2a366f-a8bd-4418-b8b4-ae8fc8bcb2eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-l4ngm" podUID="9d2a366f-a8bd-4418-b8b4-ae8fc8bcb2eb" Jul 2 09:09:10.088912 containerd[1436]: time="2024-07-02T09:09:10.088845389Z" level=error msg="Failed to destroy network for sandbox \"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:09:10.090537 containerd[1436]: time="2024-07-02T09:09:10.090316490Z" level=error msg="encountered an error cleaning up failed sandbox \"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:09:10.091186 containerd[1436]: time="2024-07-02T09:09:10.091141690Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84c6d665d6-qm2nc,Uid:804476dd-f79b-4477-bf83-c65ba06e121f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:09:10.091573 kubelet[2517]: E0702 09:09:10.091522 2517 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:09:10.091631 kubelet[2517]: E0702 09:09:10.091582 2517 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84c6d665d6-qm2nc" Jul 2 09:09:10.091631 kubelet[2517]: E0702 09:09:10.091604 2517 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84c6d665d6-qm2nc" Jul 2 09:09:10.091848 kubelet[2517]: E0702 09:09:10.091710 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84c6d665d6-qm2nc_calico-system(804476dd-f79b-4477-bf83-c65ba06e121f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84c6d665d6-qm2nc_calico-system(804476dd-f79b-4477-bf83-c65ba06e121f)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84c6d665d6-qm2nc" podUID="804476dd-f79b-4477-bf83-c65ba06e121f" Jul 2 09:09:10.495714 kubelet[2517]: I0702 09:09:10.495677 2517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Jul 2 09:09:10.497236 containerd[1436]: time="2024-07-02T09:09:10.496765173Z" level=info msg="StopPodSandbox for \"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\"" Jul 2 09:09:10.497236 containerd[1436]: time="2024-07-02T09:09:10.496977514Z" level=info msg="Ensure that sandbox 32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503 in task-service has been cleanup successfully" Jul 2 09:09:10.498504 kubelet[2517]: I0702 09:09:10.497510 2517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Jul 2 09:09:10.498592 containerd[1436]: time="2024-07-02T09:09:10.498093821Z" level=info msg="StopPodSandbox for \"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\"" Jul 2 09:09:10.498714 containerd[1436]: time="2024-07-02T09:09:10.498678397Z" level=info msg="Ensure that sandbox 0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3 in task-service has been cleanup successfully" Jul 2 09:09:10.502419 kubelet[2517]: I0702 09:09:10.502392 2517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Jul 2 09:09:10.502933 containerd[1436]: time="2024-07-02T09:09:10.502904524Z" level=info msg="StopPodSandbox for \"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\"" Jul 2 09:09:10.503243 containerd[1436]: time="2024-07-02T09:09:10.503180031Z" level=info msg="Ensure that sandbox 4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4 in task-service has been cleanup successfully" Jul 2 09:09:10.531886 containerd[1436]: time="2024-07-02T09:09:10.531815627Z" level=error msg="StopPodSandbox for \"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\" failed" error="failed to destroy network for sandbox \"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:09:10.532198 kubelet[2517]: E0702 09:09:10.532169 2517 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Jul 2 09:09:10.532261 kubelet[2517]: E0702 09:09:10.532249 2517 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4"} Jul 2 
09:09:10.532301 kubelet[2517]: E0702 09:09:10.532289 2517 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"804476dd-f79b-4477-bf83-c65ba06e121f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 09:09:10.532362 kubelet[2517]: E0702 09:09:10.532320 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"804476dd-f79b-4477-bf83-c65ba06e121f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84c6d665d6-qm2nc" podUID="804476dd-f79b-4477-bf83-c65ba06e121f" Jul 2 09:09:10.536263 containerd[1436]: time="2024-07-02T09:09:10.536222491Z" level=error msg="StopPodSandbox for \"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\" failed" error="failed to destroy network for sandbox \"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:09:10.537077 kubelet[2517]: E0702 09:09:10.536987 2517 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Jul 2 09:09:10.537077 kubelet[2517]: E0702 09:09:10.537047 2517 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503"} Jul 2 09:09:10.537199 kubelet[2517]: E0702 09:09:10.537096 2517 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9d2a366f-a8bd-4418-b8b4-ae8fc8bcb2eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 09:09:10.537199 kubelet[2517]: E0702 09:09:10.537121 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9d2a366f-a8bd-4418-b8b4-ae8fc8bcb2eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-l4ngm" podUID="9d2a366f-a8bd-4418-b8b4-ae8fc8bcb2eb" Jul 2 09:09:10.547943 containerd[1436]: time="2024-07-02T09:09:10.547611147Z" level=error msg="StopPodSandbox for \"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\" failed" error="failed to destroy network for sandbox \"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:09:10.548019 kubelet[2517]: E0702 09:09:10.547806 2517 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Jul 2 09:09:10.548019 kubelet[2517]: E0702 09:09:10.547840 2517 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3"} Jul 2 09:09:10.548019 kubelet[2517]: E0702 09:09:10.547883 2517 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"86a33c17-cda9-4b34-99d8-e954031b3f4d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 09:09:10.548019 kubelet[2517]: E0702 09:09:10.547910 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"86a33c17-cda9-4b34-99d8-e954031b3f4d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-zr8hg" podUID="86a33c17-cda9-4b34-99d8-e954031b3f4d" Jul 2 09:09:10.816081 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503-shm.mount: Deactivated successfully. Jul 2 09:09:10.816189 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3-shm.mount: Deactivated successfully. Jul 2 09:09:11.343906 systemd[1]: Started sshd@7-10.0.0.65:22-10.0.0.1:38582.service - OpenSSH per-connection server daemon (10.0.0.1:38582). Jul 2 09:09:11.393390 sshd[3459]: Accepted publickey for core from 10.0.0.1 port 38582 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:09:11.394671 sshd[3459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:09:11.399742 systemd-logind[1421]: New session 8 of user core. Jul 2 09:09:11.404191 systemd[1]: Started session-8.scope - Session 8 of User core. 
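Every RunPodSandbox and StopPodSandbox in this stretch fails on the same stat: the calico CNI plugin reads the node's identity from /var/lib/calico/nodename, a file written only by a running calico/node container, exactly as the error text suggests. The coredns pods, the calico-kube-controllers pod, and (below) the csi-node-driver pod therefore stay off the network until the calico/node image finishes pulling and the container starts. A hypothetical check on a healthy node, with the value inferred from the host="localhost" IPAM records later in the log:

    $ cat /var/lib/calico/nodename
    localhost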
Jul 2 09:09:11.413194 systemd[1]: Created slice kubepods-besteffort-pod31c1037b_708e_482e_8198_19d0b4cbcaf3.slice - libcontainer container kubepods-besteffort-pod31c1037b_708e_482e_8198_19d0b4cbcaf3.slice. Jul 2 09:09:11.415490 containerd[1436]: time="2024-07-02T09:09:11.415441106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dvp65,Uid:31c1037b-708e-482e-8198-19d0b4cbcaf3,Namespace:calico-system,Attempt:0,}" Jul 2 09:09:11.490491 containerd[1436]: time="2024-07-02T09:09:11.490347336Z" level=error msg="Failed to destroy network for sandbox \"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:09:11.490817 containerd[1436]: time="2024-07-02T09:09:11.490787737Z" level=error msg="encountered an error cleaning up failed sandbox \"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:09:11.490934 containerd[1436]: time="2024-07-02T09:09:11.490911509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dvp65,Uid:31c1037b-708e-482e-8198-19d0b4cbcaf3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:09:11.491222 kubelet[2517]: E0702 09:09:11.491188 2517 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:09:11.491283 kubelet[2517]: E0702 09:09:11.491259 2517 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dvp65" Jul 2 09:09:11.491283 kubelet[2517]: E0702 09:09:11.491282 2517 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dvp65" Jul 2 09:09:11.491339 kubelet[2517]: E0702 09:09:11.491332 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dvp65_calico-system(31c1037b-708e-482e-8198-19d0b4cbcaf3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-dvp65_calico-system(31c1037b-708e-482e-8198-19d0b4cbcaf3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dvp65" podUID="31c1037b-708e-482e-8198-19d0b4cbcaf3" Jul 2 09:09:11.494724 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d-shm.mount: Deactivated successfully. Jul 2 09:09:11.506291 kubelet[2517]: I0702 09:09:11.506246 2517 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Jul 2 09:09:11.508809 containerd[1436]: time="2024-07-02T09:09:11.508307172Z" level=info msg="StopPodSandbox for \"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\"" Jul 2 09:09:11.508809 containerd[1436]: time="2024-07-02T09:09:11.508557356Z" level=info msg="Ensure that sandbox 2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d in task-service has been cleanup successfully" Jul 2 09:09:11.549254 containerd[1436]: time="2024-07-02T09:09:11.549206589Z" level=error msg="StopPodSandbox for \"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\" failed" error="failed to destroy network for sandbox \"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 09:09:11.549928 kubelet[2517]: E0702 09:09:11.549891 2517 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Jul 2 09:09:11.550142 kubelet[2517]: E0702 09:09:11.549979 2517 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d"} Jul 2 09:09:11.550142 kubelet[2517]: E0702 09:09:11.550021 2517 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31c1037b-708e-482e-8198-19d0b4cbcaf3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 09:09:11.550142 kubelet[2517]: E0702 09:09:11.550068 2517 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31c1037b-708e-482e-8198-19d0b4cbcaf3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dvp65" podUID="31c1037b-708e-482e-8198-19d0b4cbcaf3" Jul 2 09:09:11.554852 sshd[3459]: pam_unix(sshd:session): session closed for user core Jul 2 09:09:11.559631 systemd[1]: sshd@7-10.0.0.65:22-10.0.0.1:38582.service: Deactivated successfully. Jul 2 09:09:11.561501 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 09:09:11.563390 systemd-logind[1421]: Session 8 logged out. Waiting for processes to exit. Jul 2 09:09:11.564816 systemd-logind[1421]: Removed session 8. Jul 2 09:09:12.389206 kubelet[2517]: I0702 09:09:12.387100 2517 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 09:09:12.389206 kubelet[2517]: E0702 09:09:12.387703 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:12.508908 kubelet[2517]: E0702 09:09:12.508868 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:13.055614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3608107209.mount: Deactivated successfully. Jul 2 09:09:13.229734 containerd[1436]: time="2024-07-02T09:09:13.229671049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:09:13.230503 containerd[1436]: time="2024-07-02T09:09:13.230471080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jul 2 09:09:13.232508 containerd[1436]: time="2024-07-02T09:09:13.232120865Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:09:13.234740 containerd[1436]: time="2024-07-02T09:09:13.234667369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:09:13.235642 containerd[1436]: time="2024-07-02T09:09:13.235178494Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 3.743037653s" Jul 2 09:09:13.235642 containerd[1436]: time="2024-07-02T09:09:13.235211537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jul 2 09:09:13.243496 containerd[1436]: time="2024-07-02T09:09:13.243448302Z" level=info msg="CreateContainer within sandbox \"66863acd55dec5359662542e4fe81af4e10d217f7e7122c8907b4447e092559a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 09:09:13.268623 containerd[1436]: time="2024-07-02T09:09:13.268491186Z" level=info msg="CreateContainer within sandbox \"66863acd55dec5359662542e4fe81af4e10d217f7e7122c8907b4447e092559a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"a19067ab422ab50f9f041df19b163d53d690ba89b2aed09ccf9f378b8dc94552\"" Jul 2 09:09:13.270334 containerd[1436]: time="2024-07-02T09:09:13.270297024Z" level=info msg="StartContainer for \"a19067ab422ab50f9f041df19b163d53d690ba89b2aed09ccf9f378b8dc94552\"" Jul 2 09:09:13.321275 systemd[1]: Started cri-containerd-a19067ab422ab50f9f041df19b163d53d690ba89b2aed09ccf9f378b8dc94552.scope - libcontainer container a19067ab422ab50f9f041df19b163d53d690ba89b2aed09ccf9f378b8dc94552. Jul 2 09:09:13.439603 containerd[1436]: time="2024-07-02T09:09:13.438350373Z" level=info msg="StartContainer for \"a19067ab422ab50f9f041df19b163d53d690ba89b2aed09ccf9f378b8dc94552\" returns successfully" Jul 2 09:09:13.515525 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 09:09:13.520282 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 2 09:09:13.520335 kubelet[2517]: E0702 09:09:13.512461 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:14.514796 kubelet[2517]: E0702 09:09:14.514739 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:15.033669 systemd-networkd[1381]: vxlan.calico: Link UP Jul 2 09:09:15.033675 systemd-networkd[1381]: vxlan.calico: Gained carrier Jul 2 09:09:16.183463 systemd-networkd[1381]: vxlan.calico: Gained IPv6LL Jul 2 09:09:16.568785 systemd[1]: Started sshd@8-10.0.0.65:22-10.0.0.1:38586.service - OpenSSH per-connection server daemon (10.0.0.1:38586). Jul 2 09:09:16.615213 sshd[3843]: Accepted publickey for core from 10.0.0.1 port 38586 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:09:16.616754 sshd[3843]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:09:16.620841 systemd-logind[1421]: New session 9 of user core. Jul 2 09:09:16.636251 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 09:09:16.768814 sshd[3843]: pam_unix(sshd:session): session closed for user core Jul 2 09:09:16.771367 systemd[1]: sshd@8-10.0.0.65:22-10.0.0.1:38586.service: Deactivated successfully. Jul 2 09:09:16.773701 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 09:09:16.775140 systemd-logind[1421]: Session 9 logged out. Waiting for processes to exit. Jul 2 09:09:16.776534 systemd-logind[1421]: Removed session 9. 
Jul 2 09:09:21.406530 containerd[1436]: time="2024-07-02T09:09:21.406461612Z" level=info msg="StopPodSandbox for \"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\"" Jul 2 09:09:21.482041 kubelet[2517]: I0702 09:09:21.482000 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-nc9dw" podStartSLOduration=8.798521083 podStartE2EDuration="21.481942157s" podCreationTimestamp="2024-07-02 09:09:00 +0000 UTC" firstStartedPulling="2024-07-02 09:09:00.552025764 +0000 UTC m=+23.231397741" lastFinishedPulling="2024-07-02 09:09:13.235446838 +0000 UTC m=+35.914818815" observedRunningTime="2024-07-02 09:09:13.533135874 +0000 UTC m=+36.212507851" watchObservedRunningTime="2024-07-02 09:09:21.481942157 +0000 UTC m=+44.161314134" Jul 2 09:09:21.590146 containerd[1436]: 2024-07-02 09:09:21.481 [INFO][3888] k8s.go 608: Cleaning up netns ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Jul 2 09:09:21.590146 containerd[1436]: 2024-07-02 09:09:21.482 [INFO][3888] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" iface="eth0" netns="/var/run/netns/cni-e130b9ef-0cfa-f39e-de93-cfa11be5c832" Jul 2 09:09:21.590146 containerd[1436]: 2024-07-02 09:09:21.482 [INFO][3888] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" iface="eth0" netns="/var/run/netns/cni-e130b9ef-0cfa-f39e-de93-cfa11be5c832" Jul 2 09:09:21.590146 containerd[1436]: 2024-07-02 09:09:21.483 [INFO][3888] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" iface="eth0" netns="/var/run/netns/cni-e130b9ef-0cfa-f39e-de93-cfa11be5c832" Jul 2 09:09:21.590146 containerd[1436]: 2024-07-02 09:09:21.483 [INFO][3888] k8s.go 615: Releasing IP address(es) ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Jul 2 09:09:21.590146 containerd[1436]: 2024-07-02 09:09:21.483 [INFO][3888] utils.go 188: Calico CNI releasing IP address ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Jul 2 09:09:21.590146 containerd[1436]: 2024-07-02 09:09:21.576 [INFO][3897] ipam_plugin.go 411: Releasing address using handleID ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" HandleID="k8s-pod-network.0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Workload="localhost-k8s-coredns--76f75df574--zr8hg-eth0" Jul 2 09:09:21.590146 containerd[1436]: 2024-07-02 09:09:21.576 [INFO][3897] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:09:21.590146 containerd[1436]: 2024-07-02 09:09:21.576 [INFO][3897] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:09:21.590146 containerd[1436]: 2024-07-02 09:09:21.585 [WARNING][3897] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" HandleID="k8s-pod-network.0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Workload="localhost-k8s-coredns--76f75df574--zr8hg-eth0" Jul 2 09:09:21.590146 containerd[1436]: 2024-07-02 09:09:21.585 [INFO][3897] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" HandleID="k8s-pod-network.0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Workload="localhost-k8s-coredns--76f75df574--zr8hg-eth0" Jul 2 09:09:21.590146 containerd[1436]: 2024-07-02 09:09:21.586 [INFO][3897] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:09:21.590146 containerd[1436]: 2024-07-02 09:09:21.588 [INFO][3888] k8s.go 621: Teardown processing complete. ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Jul 2 09:09:21.590569 containerd[1436]: time="2024-07-02T09:09:21.590307644Z" level=info msg="TearDown network for sandbox \"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\" successfully" Jul 2 09:09:21.590569 containerd[1436]: time="2024-07-02T09:09:21.590335766Z" level=info msg="StopPodSandbox for \"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\" returns successfully" Jul 2 09:09:21.590710 kubelet[2517]: E0702 09:09:21.590679 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:21.592130 containerd[1436]: time="2024-07-02T09:09:21.592094173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zr8hg,Uid:86a33c17-cda9-4b34-99d8-e954031b3f4d,Namespace:kube-system,Attempt:1,}" Jul 2 09:09:21.592206 systemd[1]: run-netns-cni\x2de130b9ef\x2d0cfa\x2df39e\x2dde93\x2dcfa11be5c832.mount: Deactivated successfully. 
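The pod_startup_latency_tracker record above is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window, which the startup SLO excludes. Worked through with the logged timestamps:

    image pull window   = lastFinishedPulling - firstStartedPulling
                        = 09:09:13.235446838 - 09:09:00.552025764 = 12.683421074s
    podStartE2EDuration = observedRunningTime - podCreationTimestamp
                        = 09:09:21.481942157 - 09:09:00           = 21.481942157s
    podStartSLOduration = 21.481942157s - 12.683421074s           = 8.798521083s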
Jul 2 09:09:21.728695 systemd-networkd[1381]: cali08067ba963f: Link UP Jul 2 09:09:21.729138 systemd-networkd[1381]: cali08067ba963f: Gained carrier Jul 2 09:09:21.739733 containerd[1436]: 2024-07-02 09:09:21.647 [INFO][3911] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--zr8hg-eth0 coredns-76f75df574- kube-system 86a33c17-cda9-4b34-99d8-e954031b3f4d 811 0 2024-07-02 09:08:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-zr8hg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali08067ba963f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01" Namespace="kube-system" Pod="coredns-76f75df574-zr8hg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--zr8hg-" Jul 2 09:09:21.739733 containerd[1436]: 2024-07-02 09:09:21.647 [INFO][3911] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01" Namespace="kube-system" Pod="coredns-76f75df574-zr8hg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--zr8hg-eth0" Jul 2 09:09:21.739733 containerd[1436]: 2024-07-02 09:09:21.674 [INFO][3918] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01" HandleID="k8s-pod-network.79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01" Workload="localhost-k8s-coredns--76f75df574--zr8hg-eth0" Jul 2 09:09:21.739733 containerd[1436]: 2024-07-02 09:09:21.686 [INFO][3918] ipam_plugin.go 264: Auto assigning IP ContainerID="79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01" HandleID="k8s-pod-network.79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01" Workload="localhost-k8s-coredns--76f75df574--zr8hg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000621c30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-zr8hg", "timestamp":"2024-07-02 09:09:21.674757839 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 09:09:21.739733 containerd[1436]: 2024-07-02 09:09:21.686 [INFO][3918] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:09:21.739733 containerd[1436]: 2024-07-02 09:09:21.686 [INFO][3918] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 09:09:21.739733 containerd[1436]: 2024-07-02 09:09:21.686 [INFO][3918] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 09:09:21.739733 containerd[1436]: 2024-07-02 09:09:21.689 [INFO][3918] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01" host="localhost" Jul 2 09:09:21.739733 containerd[1436]: 2024-07-02 09:09:21.696 [INFO][3918] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 09:09:21.739733 containerd[1436]: 2024-07-02 09:09:21.706 [INFO][3918] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 09:09:21.739733 containerd[1436]: 2024-07-02 09:09:21.709 [INFO][3918] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 09:09:21.739733 containerd[1436]: 2024-07-02 09:09:21.711 [INFO][3918] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 09:09:21.739733 containerd[1436]: 2024-07-02 09:09:21.711 [INFO][3918] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01" host="localhost" Jul 2 09:09:21.739733 containerd[1436]: 2024-07-02 09:09:21.714 [INFO][3918] ipam.go 1685: Creating new handle: k8s-pod-network.79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01 Jul 2 09:09:21.739733 containerd[1436]: 2024-07-02 09:09:21.717 [INFO][3918] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01" host="localhost" Jul 2 09:09:21.739733 containerd[1436]: 2024-07-02 09:09:21.724 [INFO][3918] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01" host="localhost" Jul 2 09:09:21.739733 containerd[1436]: 2024-07-02 09:09:21.724 [INFO][3918] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01" host="localhost" Jul 2 09:09:21.739733 containerd[1436]: 2024-07-02 09:09:21.724 [INFO][3918] ipam_plugin.go 373: Released host-wide IPAM lock. 
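The IPAM exchange above is Calico's block-affinity allocation: the pool is carved into /26 blocks of 64 addresses, each block is made affine to one host so that most assignments stay node-local, and the host-wide lock serialises claims. For the block confirmed here:

    192.168.88.128/26 -> 64 addresses, 192.168.88.128 through 192.168.88.191, affine to host "localhost"
    claimed for this pod: 192.168.88.129 (published below as 192.168.88.129/32)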
Jul 2 09:09:21.739733 containerd[1436]: 2024-07-02 09:09:21.724 [INFO][3918] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01" HandleID="k8s-pod-network.79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01" Workload="localhost-k8s-coredns--76f75df574--zr8hg-eth0" Jul 2 09:09:21.741997 containerd[1436]: 2024-07-02 09:09:21.726 [INFO][3911] k8s.go 386: Populated endpoint ContainerID="79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01" Namespace="kube-system" Pod="coredns-76f75df574-zr8hg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--zr8hg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--zr8hg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"86a33c17-cda9-4b34-99d8-e954031b3f4d", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-zr8hg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08067ba963f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:09:21.741997 containerd[1436]: 2024-07-02 09:09:21.727 [INFO][3911] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01" Namespace="kube-system" Pod="coredns-76f75df574-zr8hg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--zr8hg-eth0" Jul 2 09:09:21.741997 containerd[1436]: 2024-07-02 09:09:21.727 [INFO][3911] dataplane_linux.go 68: Setting the host side veth name to cali08067ba963f ContainerID="79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01" Namespace="kube-system" Pod="coredns-76f75df574-zr8hg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--zr8hg-eth0" Jul 2 09:09:21.741997 containerd[1436]: 2024-07-02 09:09:21.729 [INFO][3911] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01" Namespace="kube-system" Pod="coredns-76f75df574-zr8hg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--zr8hg-eth0" Jul 2 09:09:21.741997 containerd[1436]: 2024-07-02 09:09:21.729 [INFO][3911] k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01" Namespace="kube-system" Pod="coredns-76f75df574-zr8hg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--zr8hg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--zr8hg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"86a33c17-cda9-4b34-99d8-e954031b3f4d", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01", Pod:"coredns-76f75df574-zr8hg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08067ba963f", MAC:"92:55:85:4a:5f:67", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:09:21.741997 containerd[1436]: 2024-07-02 09:09:21.736 [INFO][3911] k8s.go 500: Wrote updated endpoint to datastore ContainerID="79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01" Namespace="kube-system" Pod="coredns-76f75df574-zr8hg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--zr8hg-eth0" Jul 2 09:09:21.758629 containerd[1436]: time="2024-07-02T09:09:21.758366813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:09:21.758629 containerd[1436]: time="2024-07-02T09:09:21.758424657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:09:21.758629 containerd[1436]: time="2024-07-02T09:09:21.758442578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:09:21.758629 containerd[1436]: time="2024-07-02T09:09:21.758455259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:09:21.781504 systemd[1]: Started sshd@9-10.0.0.65:22-10.0.0.1:53148.service - OpenSSH per-connection server daemon (10.0.0.1:53148). Jul 2 09:09:21.799166 systemd[1]: Started cri-containerd-79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01.scope - libcontainer container 79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01. 
Jul 2 09:09:21.814188 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 09:09:21.832457 containerd[1436]: time="2024-07-02T09:09:21.832413814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zr8hg,Uid:86a33c17-cda9-4b34-99d8-e954031b3f4d,Namespace:kube-system,Attempt:1,} returns sandbox id \"79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01\"" Jul 2 09:09:21.833102 kubelet[2517]: E0702 09:09:21.833080 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:21.835118 containerd[1436]: time="2024-07-02T09:09:21.835085648Z" level=info msg="CreateContainer within sandbox \"79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 09:09:21.836260 sshd[3970]: Accepted publickey for core from 10.0.0.1 port 53148 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:09:21.837768 sshd[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:09:21.843236 systemd-logind[1421]: New session 10 of user core. Jul 2 09:09:21.852232 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 09:09:21.854257 containerd[1436]: time="2024-07-02T09:09:21.853928932Z" level=info msg="CreateContainer within sandbox \"79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"83744de008133794fd0b99351d82436aecc1cedf2cc14f129bf80bfaa4a04fcc\"" Jul 2 09:09:21.855751 containerd[1436]: time="2024-07-02T09:09:21.854884681Z" level=info msg="StartContainer for \"83744de008133794fd0b99351d82436aecc1cedf2cc14f129bf80bfaa4a04fcc\"" Jul 2 09:09:21.882251 systemd[1]: Started cri-containerd-83744de008133794fd0b99351d82436aecc1cedf2cc14f129bf80bfaa4a04fcc.scope - libcontainer container 83744de008133794fd0b99351d82436aecc1cedf2cc14f129bf80bfaa4a04fcc. Jul 2 09:09:21.908817 containerd[1436]: time="2024-07-02T09:09:21.908775904Z" level=info msg="StartContainer for \"83744de008133794fd0b99351d82436aecc1cedf2cc14f129bf80bfaa4a04fcc\" returns successfully" Jul 2 09:09:21.990982 sshd[3970]: pam_unix(sshd:session): session closed for user core Jul 2 09:09:21.999770 systemd[1]: sshd@9-10.0.0.65:22-10.0.0.1:53148.service: Deactivated successfully. Jul 2 09:09:22.001675 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 09:09:22.003772 systemd-logind[1421]: Session 10 logged out. Waiting for processes to exit. Jul 2 09:09:22.016356 systemd[1]: Started sshd@10-10.0.0.65:22-10.0.0.1:53162.service - OpenSSH per-connection server daemon (10.0.0.1:53162). Jul 2 09:09:22.018372 systemd-logind[1421]: Removed session 10. Jul 2 09:09:22.052477 sshd[4035]: Accepted publickey for core from 10.0.0.1 port 53162 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:09:22.055743 sshd[4035]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:09:22.065515 systemd-logind[1421]: New session 11 of user core. Jul 2 09:09:22.071274 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 09:09:22.227464 sshd[4035]: pam_unix(sshd:session): session closed for user core Jul 2 09:09:22.235316 systemd[1]: sshd@10-10.0.0.65:22-10.0.0.1:53162.service: Deactivated successfully. 
Jul 2 09:09:22.239429 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 09:09:22.240556 systemd-logind[1421]: Session 11 logged out. Waiting for processes to exit. Jul 2 09:09:22.253418 systemd[1]: Started sshd@11-10.0.0.65:22-10.0.0.1:53168.service - OpenSSH per-connection server daemon (10.0.0.1:53168). Jul 2 09:09:22.254811 systemd-logind[1421]: Removed session 11. Jul 2 09:09:22.289233 sshd[4051]: Accepted publickey for core from 10.0.0.1 port 53168 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:09:22.290816 sshd[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:09:22.295871 systemd-logind[1421]: New session 12 of user core. Jul 2 09:09:22.308239 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 09:09:22.407095 containerd[1436]: time="2024-07-02T09:09:22.406305745Z" level=info msg="StopPodSandbox for \"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\"" Jul 2 09:09:22.427184 sshd[4051]: pam_unix(sshd:session): session closed for user core Jul 2 09:09:22.433935 systemd[1]: sshd@11-10.0.0.65:22-10.0.0.1:53168.service: Deactivated successfully. Jul 2 09:09:22.437578 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 09:09:22.439134 systemd-logind[1421]: Session 12 logged out. Waiting for processes to exit. Jul 2 09:09:22.440257 systemd-logind[1421]: Removed session 12. Jul 2 09:09:22.489462 containerd[1436]: 2024-07-02 09:09:22.456 [INFO][4079] k8s.go 608: Cleaning up netns ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Jul 2 09:09:22.489462 containerd[1436]: 2024-07-02 09:09:22.456 [INFO][4079] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" iface="eth0" netns="/var/run/netns/cni-0b84546e-50cf-d1da-f1a4-99f4b10a810f" Jul 2 09:09:22.489462 containerd[1436]: 2024-07-02 09:09:22.456 [INFO][4079] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" iface="eth0" netns="/var/run/netns/cni-0b84546e-50cf-d1da-f1a4-99f4b10a810f" Jul 2 09:09:22.489462 containerd[1436]: 2024-07-02 09:09:22.456 [INFO][4079] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" iface="eth0" netns="/var/run/netns/cni-0b84546e-50cf-d1da-f1a4-99f4b10a810f" Jul 2 09:09:22.489462 containerd[1436]: 2024-07-02 09:09:22.456 [INFO][4079] k8s.go 615: Releasing IP address(es) ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Jul 2 09:09:22.489462 containerd[1436]: 2024-07-02 09:09:22.456 [INFO][4079] utils.go 188: Calico CNI releasing IP address ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Jul 2 09:09:22.489462 containerd[1436]: 2024-07-02 09:09:22.475 [INFO][4089] ipam_plugin.go 411: Releasing address using handleID ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" HandleID="k8s-pod-network.32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Workload="localhost-k8s-coredns--76f75df574--l4ngm-eth0" Jul 2 09:09:22.489462 containerd[1436]: 2024-07-02 09:09:22.475 [INFO][4089] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:09:22.489462 containerd[1436]: 2024-07-02 09:09:22.476 [INFO][4089] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 09:09:22.489462 containerd[1436]: 2024-07-02 09:09:22.484 [WARNING][4089] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" HandleID="k8s-pod-network.32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Workload="localhost-k8s-coredns--76f75df574--l4ngm-eth0" Jul 2 09:09:22.489462 containerd[1436]: 2024-07-02 09:09:22.484 [INFO][4089] ipam_plugin.go 439: Releasing address using workloadID ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" HandleID="k8s-pod-network.32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Workload="localhost-k8s-coredns--76f75df574--l4ngm-eth0" Jul 2 09:09:22.489462 containerd[1436]: 2024-07-02 09:09:22.486 [INFO][4089] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:09:22.489462 containerd[1436]: 2024-07-02 09:09:22.488 [INFO][4079] k8s.go 621: Teardown processing complete. ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Jul 2 09:09:22.490504 containerd[1436]: time="2024-07-02T09:09:22.489593935Z" level=info msg="TearDown network for sandbox \"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\" successfully" Jul 2 09:09:22.490504 containerd[1436]: time="2024-07-02T09:09:22.489622617Z" level=info msg="StopPodSandbox for \"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\" returns successfully" Jul 2 09:09:22.490563 kubelet[2517]: E0702 09:09:22.490411 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:22.491761 containerd[1436]: time="2024-07-02T09:09:22.491592117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l4ngm,Uid:9d2a366f-a8bd-4418-b8b4-ae8fc8bcb2eb,Namespace:kube-system,Attempt:1,}" Jul 2 09:09:22.534175 kubelet[2517]: E0702 09:09:22.534036 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:22.546271 kubelet[2517]: I0702 09:09:22.546184 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-zr8hg" podStartSLOduration=29.546142748 podStartE2EDuration="29.546142748s" podCreationTimestamp="2024-07-02 09:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:09:22.545505663 +0000 UTC m=+45.224877680" watchObservedRunningTime="2024-07-02 09:09:22.546142748 +0000 UTC m=+45.225514685" Jul 2 09:09:22.597583 systemd[1]: run-netns-cni\x2d0b84546e\x2d50cf\x2dd1da\x2df1a4\x2d99f4b10a810f.mount: Deactivated successfully. 
Jul 2 09:09:22.648399 systemd-networkd[1381]: cali7f81e70ba90: Link UP Jul 2 09:09:22.648798 systemd-networkd[1381]: cali7f81e70ba90: Gained carrier Jul 2 09:09:22.663686 containerd[1436]: 2024-07-02 09:09:22.541 [INFO][4097] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--l4ngm-eth0 coredns-76f75df574- kube-system 9d2a366f-a8bd-4418-b8b4-ae8fc8bcb2eb 835 0 2024-07-02 09:08:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-l4ngm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7f81e70ba90 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae" Namespace="kube-system" Pod="coredns-76f75df574-l4ngm" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l4ngm-" Jul 2 09:09:22.663686 containerd[1436]: 2024-07-02 09:09:22.541 [INFO][4097] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae" Namespace="kube-system" Pod="coredns-76f75df574-l4ngm" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l4ngm-eth0" Jul 2 09:09:22.663686 containerd[1436]: 2024-07-02 09:09:22.576 [INFO][4111] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae" HandleID="k8s-pod-network.e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae" Workload="localhost-k8s-coredns--76f75df574--l4ngm-eth0" Jul 2 09:09:22.663686 containerd[1436]: 2024-07-02 09:09:22.594 [INFO][4111] ipam_plugin.go 264: Auto assigning IP ContainerID="e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae" HandleID="k8s-pod-network.e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae" Workload="localhost-k8s-coredns--76f75df574--l4ngm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000503ee0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-l4ngm", "timestamp":"2024-07-02 09:09:22.576766762 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 09:09:22.663686 containerd[1436]: 2024-07-02 09:09:22.594 [INFO][4111] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:09:22.663686 containerd[1436]: 2024-07-02 09:09:22.594 [INFO][4111] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 09:09:22.663686 containerd[1436]: 2024-07-02 09:09:22.594 [INFO][4111] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 09:09:22.663686 containerd[1436]: 2024-07-02 09:09:22.604 [INFO][4111] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae" host="localhost" Jul 2 09:09:22.663686 containerd[1436]: 2024-07-02 09:09:22.615 [INFO][4111] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 09:09:22.663686 containerd[1436]: 2024-07-02 09:09:22.623 [INFO][4111] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 09:09:22.663686 containerd[1436]: 2024-07-02 09:09:22.627 [INFO][4111] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 09:09:22.663686 containerd[1436]: 2024-07-02 09:09:22.630 [INFO][4111] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 09:09:22.663686 containerd[1436]: 2024-07-02 09:09:22.631 [INFO][4111] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae" host="localhost" Jul 2 09:09:22.663686 containerd[1436]: 2024-07-02 09:09:22.632 [INFO][4111] ipam.go 1685: Creating new handle: k8s-pod-network.e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae Jul 2 09:09:22.663686 containerd[1436]: 2024-07-02 09:09:22.638 [INFO][4111] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae" host="localhost" Jul 2 09:09:22.663686 containerd[1436]: 2024-07-02 09:09:22.643 [INFO][4111] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae" host="localhost" Jul 2 09:09:22.663686 containerd[1436]: 2024-07-02 09:09:22.643 [INFO][4111] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae" host="localhost" Jul 2 09:09:22.663686 containerd[1436]: 2024-07-02 09:09:22.643 [INFO][4111] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 09:09:22.663686 containerd[1436]: 2024-07-02 09:09:22.643 [INFO][4111] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae" HandleID="k8s-pod-network.e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae" Workload="localhost-k8s-coredns--76f75df574--l4ngm-eth0" Jul 2 09:09:22.664471 containerd[1436]: 2024-07-02 09:09:22.645 [INFO][4097] k8s.go 386: Populated endpoint ContainerID="e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae" Namespace="kube-system" Pod="coredns-76f75df574-l4ngm" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l4ngm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--l4ngm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9d2a366f-a8bd-4418-b8b4-ae8fc8bcb2eb", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-l4ngm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f81e70ba90", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:09:22.664471 containerd[1436]: 2024-07-02 09:09:22.646 [INFO][4097] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae" Namespace="kube-system" Pod="coredns-76f75df574-l4ngm" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l4ngm-eth0" Jul 2 09:09:22.664471 containerd[1436]: 2024-07-02 09:09:22.646 [INFO][4097] dataplane_linux.go 68: Setting the host side veth name to cali7f81e70ba90 ContainerID="e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae" Namespace="kube-system" Pod="coredns-76f75df574-l4ngm" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l4ngm-eth0" Jul 2 09:09:22.664471 containerd[1436]: 2024-07-02 09:09:22.648 [INFO][4097] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae" Namespace="kube-system" Pod="coredns-76f75df574-l4ngm" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l4ngm-eth0" Jul 2 09:09:22.664471 containerd[1436]: 2024-07-02 09:09:22.649 [INFO][4097] k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae" Namespace="kube-system" Pod="coredns-76f75df574-l4ngm" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l4ngm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--l4ngm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9d2a366f-a8bd-4418-b8b4-ae8fc8bcb2eb", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae", Pod:"coredns-76f75df574-l4ngm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f81e70ba90", MAC:"16:ff:7d:28:27:f4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:09:22.664471 containerd[1436]: 2024-07-02 09:09:22.658 [INFO][4097] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae" Namespace="kube-system" Pod="coredns-76f75df574-l4ngm" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--l4ngm-eth0" Jul 2 09:09:22.684507 containerd[1436]: time="2024-07-02T09:09:22.684408601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:09:22.684507 containerd[1436]: time="2024-07-02T09:09:22.684480646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:09:22.684507 containerd[1436]: time="2024-07-02T09:09:22.684495807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:09:22.684507 containerd[1436]: time="2024-07-02T09:09:22.684506328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:09:22.718368 systemd[1]: Started cri-containerd-e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae.scope - libcontainer container e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae. 
Jul 2 09:09:22.729952 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 09:09:22.754364 containerd[1436]: time="2024-07-02T09:09:22.754325242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l4ngm,Uid:9d2a366f-a8bd-4418-b8b4-ae8fc8bcb2eb,Namespace:kube-system,Attempt:1,} returns sandbox id \"e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae\"" Jul 2 09:09:22.755579 kubelet[2517]: E0702 09:09:22.755115 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:22.758658 containerd[1436]: time="2024-07-02T09:09:22.758625308Z" level=info msg="CreateContainer within sandbox \"e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 09:09:22.772741 containerd[1436]: time="2024-07-02T09:09:22.772681705Z" level=info msg="CreateContainer within sandbox \"e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6f5873f97517d6b0be3d4eae002ca5944c3f04a93ebfc389403049f37ae3d285\"" Jul 2 09:09:22.773303 containerd[1436]: time="2024-07-02T09:09:22.773276427Z" level=info msg="StartContainer for \"6f5873f97517d6b0be3d4eae002ca5944c3f04a93ebfc389403049f37ae3d285\"" Jul 2 09:09:22.799246 systemd[1]: Started cri-containerd-6f5873f97517d6b0be3d4eae002ca5944c3f04a93ebfc389403049f37ae3d285.scope - libcontainer container 6f5873f97517d6b0be3d4eae002ca5944c3f04a93ebfc389403049f37ae3d285. Jul 2 09:09:22.822837 containerd[1436]: time="2024-07-02T09:09:22.822781261Z" level=info msg="StartContainer for \"6f5873f97517d6b0be3d4eae002ca5944c3f04a93ebfc389403049f37ae3d285\" returns successfully" Jul 2 09:09:23.479208 systemd-networkd[1381]: cali08067ba963f: Gained IPv6LL Jul 2 09:09:23.537679 kubelet[2517]: E0702 09:09:23.537504 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:23.538890 kubelet[2517]: E0702 09:09:23.538015 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:23.548103 kubelet[2517]: I0702 09:09:23.548001 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-l4ngm" podStartSLOduration=30.547961865 podStartE2EDuration="30.547961865s" podCreationTimestamp="2024-07-02 09:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 09:09:23.547368264 +0000 UTC m=+46.226740241" watchObservedRunningTime="2024-07-02 09:09:23.547961865 +0000 UTC m=+46.227333882" Jul 2 09:09:23.863187 systemd-networkd[1381]: cali7f81e70ba90: Gained IPv6LL Jul 2 09:09:24.406312 containerd[1436]: time="2024-07-02T09:09:24.406255381Z" level=info msg="StopPodSandbox for \"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\"" Jul 2 09:09:24.480463 containerd[1436]: 2024-07-02 09:09:24.449 [INFO][4239] k8s.go 608: Cleaning up netns ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Jul 2 09:09:24.480463 containerd[1436]: 2024-07-02 09:09:24.450 [INFO][4239] dataplane_linux.go 530: 
Deleting workload's device in netns. ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" iface="eth0" netns="/var/run/netns/cni-0d596a4f-a16b-6a31-57ad-6eea151d874f" Jul 2 09:09:24.480463 containerd[1436]: 2024-07-02 09:09:24.450 [INFO][4239] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" iface="eth0" netns="/var/run/netns/cni-0d596a4f-a16b-6a31-57ad-6eea151d874f" Jul 2 09:09:24.480463 containerd[1436]: 2024-07-02 09:09:24.451 [INFO][4239] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" iface="eth0" netns="/var/run/netns/cni-0d596a4f-a16b-6a31-57ad-6eea151d874f" Jul 2 09:09:24.480463 containerd[1436]: 2024-07-02 09:09:24.451 [INFO][4239] k8s.go 615: Releasing IP address(es) ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Jul 2 09:09:24.480463 containerd[1436]: 2024-07-02 09:09:24.451 [INFO][4239] utils.go 188: Calico CNI releasing IP address ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Jul 2 09:09:24.480463 containerd[1436]: 2024-07-02 09:09:24.468 [INFO][4246] ipam_plugin.go 411: Releasing address using handleID ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" HandleID="k8s-pod-network.2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Workload="localhost-k8s-csi--node--driver--dvp65-eth0" Jul 2 09:09:24.480463 containerd[1436]: 2024-07-02 09:09:24.468 [INFO][4246] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:09:24.480463 containerd[1436]: 2024-07-02 09:09:24.468 [INFO][4246] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:09:24.480463 containerd[1436]: 2024-07-02 09:09:24.476 [WARNING][4246] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" HandleID="k8s-pod-network.2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Workload="localhost-k8s-csi--node--driver--dvp65-eth0" Jul 2 09:09:24.480463 containerd[1436]: 2024-07-02 09:09:24.476 [INFO][4246] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" HandleID="k8s-pod-network.2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Workload="localhost-k8s-csi--node--driver--dvp65-eth0" Jul 2 09:09:24.480463 containerd[1436]: 2024-07-02 09:09:24.477 [INFO][4246] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:09:24.480463 containerd[1436]: 2024-07-02 09:09:24.479 [INFO][4239] k8s.go 621: Teardown processing complete. 
ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Jul 2 09:09:24.480944 containerd[1436]: time="2024-07-02T09:09:24.480894683Z" level=info msg="TearDown network for sandbox \"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\" successfully" Jul 2 09:09:24.480944 containerd[1436]: time="2024-07-02T09:09:24.480932005Z" level=info msg="StopPodSandbox for \"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\" returns successfully" Jul 2 09:09:24.481530 containerd[1436]: time="2024-07-02T09:09:24.481499884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dvp65,Uid:31c1037b-708e-482e-8198-19d0b4cbcaf3,Namespace:calico-system,Attempt:1,}" Jul 2 09:09:24.483116 systemd[1]: run-netns-cni\x2d0d596a4f\x2da16b\x2d6a31\x2d57ad\x2d6eea151d874f.mount: Deactivated successfully. Jul 2 09:09:24.541045 kubelet[2517]: E0702 09:09:24.539503 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:24.541045 kubelet[2517]: E0702 09:09:24.539548 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:24.593299 systemd-networkd[1381]: cali88f06a6d65f: Link UP Jul 2 09:09:24.593634 systemd-networkd[1381]: cali88f06a6d65f: Gained carrier Jul 2 09:09:24.605304 containerd[1436]: 2024-07-02 09:09:24.519 [INFO][4253] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dvp65-eth0 csi-node-driver- calico-system 31c1037b-708e-482e-8198-19d0b4cbcaf3 866 0 2024-07-02 09:09:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-dvp65 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali88f06a6d65f [] []}} ContainerID="0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c" Namespace="calico-system" Pod="csi-node-driver-dvp65" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvp65-" Jul 2 09:09:24.605304 containerd[1436]: 2024-07-02 09:09:24.519 [INFO][4253] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c" Namespace="calico-system" Pod="csi-node-driver-dvp65" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvp65-eth0" Jul 2 09:09:24.605304 containerd[1436]: 2024-07-02 09:09:24.542 [INFO][4268] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c" HandleID="k8s-pod-network.0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c" Workload="localhost-k8s-csi--node--driver--dvp65-eth0" Jul 2 09:09:24.605304 containerd[1436]: 2024-07-02 09:09:24.561 [INFO][4268] ipam_plugin.go 264: Auto assigning IP ContainerID="0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c" HandleID="k8s-pod-network.0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c" Workload="localhost-k8s-csi--node--driver--dvp65-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000307940), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dvp65", "timestamp":"2024-07-02 09:09:24.542655144 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 09:09:24.605304 containerd[1436]: 2024-07-02 09:09:24.561 [INFO][4268] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:09:24.605304 containerd[1436]: 2024-07-02 09:09:24.561 [INFO][4268] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:09:24.605304 containerd[1436]: 2024-07-02 09:09:24.561 [INFO][4268] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 09:09:24.605304 containerd[1436]: 2024-07-02 09:09:24.563 [INFO][4268] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c" host="localhost" Jul 2 09:09:24.605304 containerd[1436]: 2024-07-02 09:09:24.570 [INFO][4268] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 09:09:24.605304 containerd[1436]: 2024-07-02 09:09:24.576 [INFO][4268] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 09:09:24.605304 containerd[1436]: 2024-07-02 09:09:24.578 [INFO][4268] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 09:09:24.605304 containerd[1436]: 2024-07-02 09:09:24.580 [INFO][4268] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 09:09:24.605304 containerd[1436]: 2024-07-02 09:09:24.580 [INFO][4268] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c" host="localhost" Jul 2 09:09:24.605304 containerd[1436]: 2024-07-02 09:09:24.581 [INFO][4268] ipam.go 1685: Creating new handle: k8s-pod-network.0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c Jul 2 09:09:24.605304 containerd[1436]: 2024-07-02 09:09:24.584 [INFO][4268] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c" host="localhost" Jul 2 09:09:24.605304 containerd[1436]: 2024-07-02 09:09:24.589 [INFO][4268] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c" host="localhost" Jul 2 09:09:24.605304 containerd[1436]: 2024-07-02 09:09:24.589 [INFO][4268] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c" host="localhost" Jul 2 09:09:24.605304 containerd[1436]: 2024-07-02 09:09:24.589 [INFO][4268] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 09:09:24.605304 containerd[1436]: 2024-07-02 09:09:24.589 [INFO][4268] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c" HandleID="k8s-pod-network.0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c" Workload="localhost-k8s-csi--node--driver--dvp65-eth0" Jul 2 09:09:24.605872 containerd[1436]: 2024-07-02 09:09:24.591 [INFO][4253] k8s.go 386: Populated endpoint ContainerID="0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c" Namespace="calico-system" Pod="csi-node-driver-dvp65" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvp65-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dvp65-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"31c1037b-708e-482e-8198-19d0b4cbcaf3", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 9, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dvp65", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali88f06a6d65f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:09:24.605872 containerd[1436]: 2024-07-02 09:09:24.591 [INFO][4253] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c" Namespace="calico-system" Pod="csi-node-driver-dvp65" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvp65-eth0" Jul 2 09:09:24.605872 containerd[1436]: 2024-07-02 09:09:24.591 [INFO][4253] dataplane_linux.go 68: Setting the host side veth name to cali88f06a6d65f ContainerID="0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c" Namespace="calico-system" Pod="csi-node-driver-dvp65" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvp65-eth0" Jul 2 09:09:24.605872 containerd[1436]: 2024-07-02 09:09:24.593 [INFO][4253] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c" Namespace="calico-system" Pod="csi-node-driver-dvp65" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvp65-eth0" Jul 2 09:09:24.605872 containerd[1436]: 2024-07-02 09:09:24.594 [INFO][4253] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c" Namespace="calico-system" Pod="csi-node-driver-dvp65" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvp65-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dvp65-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"31c1037b-708e-482e-8198-19d0b4cbcaf3", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 9, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c", Pod:"csi-node-driver-dvp65", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali88f06a6d65f", MAC:"22:fa:15:2a:41:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:09:24.605872 containerd[1436]: 2024-07-02 09:09:24.603 [INFO][4253] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c" Namespace="calico-system" Pod="csi-node-driver-dvp65" WorkloadEndpoint="localhost-k8s-csi--node--driver--dvp65-eth0" Jul 2 09:09:24.622620 containerd[1436]: time="2024-07-02T09:09:24.622350191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:09:24.622620 containerd[1436]: time="2024-07-02T09:09:24.622415756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:09:24.622620 containerd[1436]: time="2024-07-02T09:09:24.622442637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:09:24.622620 containerd[1436]: time="2024-07-02T09:09:24.622458919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:09:24.651270 systemd[1]: Started cri-containerd-0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c.scope - libcontainer container 0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c. 
Jul 2 09:09:24.661553 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 09:09:24.671811 containerd[1436]: time="2024-07-02T09:09:24.671760448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dvp65,Uid:31c1037b-708e-482e-8198-19d0b4cbcaf3,Namespace:calico-system,Attempt:1,} returns sandbox id \"0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c\"" Jul 2 09:09:24.673327 containerd[1436]: time="2024-07-02T09:09:24.673299753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 09:09:25.406376 containerd[1436]: time="2024-07-02T09:09:25.406338974Z" level=info msg="StopPodSandbox for \"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\"" Jul 2 09:09:25.487094 containerd[1436]: 2024-07-02 09:09:25.453 [INFO][4356] k8s.go 608: Cleaning up netns ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Jul 2 09:09:25.487094 containerd[1436]: 2024-07-02 09:09:25.453 [INFO][4356] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" iface="eth0" netns="/var/run/netns/cni-917fdd3f-8a61-97f3-4d14-e045f6032c52" Jul 2 09:09:25.487094 containerd[1436]: 2024-07-02 09:09:25.453 [INFO][4356] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" iface="eth0" netns="/var/run/netns/cni-917fdd3f-8a61-97f3-4d14-e045f6032c52" Jul 2 09:09:25.487094 containerd[1436]: 2024-07-02 09:09:25.453 [INFO][4356] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" iface="eth0" netns="/var/run/netns/cni-917fdd3f-8a61-97f3-4d14-e045f6032c52" Jul 2 09:09:25.487094 containerd[1436]: 2024-07-02 09:09:25.453 [INFO][4356] k8s.go 615: Releasing IP address(es) ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Jul 2 09:09:25.487094 containerd[1436]: 2024-07-02 09:09:25.453 [INFO][4356] utils.go 188: Calico CNI releasing IP address ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Jul 2 09:09:25.487094 containerd[1436]: 2024-07-02 09:09:25.472 [INFO][4363] ipam_plugin.go 411: Releasing address using handleID ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" HandleID="k8s-pod-network.4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Workload="localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0" Jul 2 09:09:25.487094 containerd[1436]: 2024-07-02 09:09:25.472 [INFO][4363] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:09:25.487094 containerd[1436]: 2024-07-02 09:09:25.472 [INFO][4363] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:09:25.487094 containerd[1436]: 2024-07-02 09:09:25.481 [WARNING][4363] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" HandleID="k8s-pod-network.4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Workload="localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0" Jul 2 09:09:25.487094 containerd[1436]: 2024-07-02 09:09:25.481 [INFO][4363] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" HandleID="k8s-pod-network.4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Workload="localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0" Jul 2 09:09:25.487094 containerd[1436]: 2024-07-02 09:09:25.482 [INFO][4363] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:09:25.487094 containerd[1436]: 2024-07-02 09:09:25.485 [INFO][4356] k8s.go 621: Teardown processing complete. ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Jul 2 09:09:25.487647 containerd[1436]: time="2024-07-02T09:09:25.487231527Z" level=info msg="TearDown network for sandbox \"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\" successfully" Jul 2 09:09:25.487647 containerd[1436]: time="2024-07-02T09:09:25.487260128Z" level=info msg="StopPodSandbox for \"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\" returns successfully" Jul 2 09:09:25.487880 containerd[1436]: time="2024-07-02T09:09:25.487852248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84c6d665d6-qm2nc,Uid:804476dd-f79b-4477-bf83-c65ba06e121f,Namespace:calico-system,Attempt:1,}" Jul 2 09:09:25.489151 systemd[1]: run-netns-cni\x2d917fdd3f\x2d8a61\x2d97f3\x2d4d14\x2de045f6032c52.mount: Deactivated successfully. Jul 2 09:09:25.542260 kubelet[2517]: E0702 09:09:25.542229 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:25.596905 systemd-networkd[1381]: cali92edd4b0108: Link UP Jul 2 09:09:25.597411 systemd-networkd[1381]: cali92edd4b0108: Gained carrier Jul 2 09:09:25.610773 containerd[1436]: 2024-07-02 09:09:25.530 [INFO][4371] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0 calico-kube-controllers-84c6d665d6- calico-system 804476dd-f79b-4477-bf83-c65ba06e121f 888 0 2024-07-02 09:09:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84c6d665d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-84c6d665d6-qm2nc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali92edd4b0108 [] []}} ContainerID="eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae" Namespace="calico-system" Pod="calico-kube-controllers-84c6d665d6-qm2nc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-" Jul 2 09:09:25.610773 containerd[1436]: 2024-07-02 09:09:25.530 [INFO][4371] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae" Namespace="calico-system" Pod="calico-kube-controllers-84c6d665d6-qm2nc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0" Jul 2 09:09:25.610773 
containerd[1436]: 2024-07-02 09:09:25.553 [INFO][4389] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae" HandleID="k8s-pod-network.eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae" Workload="localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0" Jul 2 09:09:25.610773 containerd[1436]: 2024-07-02 09:09:25.567 [INFO][4389] ipam_plugin.go 264: Auto assigning IP ContainerID="eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae" HandleID="k8s-pod-network.eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae" Workload="localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400058fde0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-84c6d665d6-qm2nc", "timestamp":"2024-07-02 09:09:25.55354466 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 09:09:25.610773 containerd[1436]: 2024-07-02 09:09:25.567 [INFO][4389] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:09:25.610773 containerd[1436]: 2024-07-02 09:09:25.567 [INFO][4389] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:09:25.610773 containerd[1436]: 2024-07-02 09:09:25.567 [INFO][4389] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 09:09:25.610773 containerd[1436]: 2024-07-02 09:09:25.569 [INFO][4389] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae" host="localhost" Jul 2 09:09:25.610773 containerd[1436]: 2024-07-02 09:09:25.572 [INFO][4389] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 09:09:25.610773 containerd[1436]: 2024-07-02 09:09:25.576 [INFO][4389] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 09:09:25.610773 containerd[1436]: 2024-07-02 09:09:25.577 [INFO][4389] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 09:09:25.610773 containerd[1436]: 2024-07-02 09:09:25.579 [INFO][4389] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 09:09:25.610773 containerd[1436]: 2024-07-02 09:09:25.579 [INFO][4389] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae" host="localhost" Jul 2 09:09:25.610773 containerd[1436]: 2024-07-02 09:09:25.580 [INFO][4389] ipam.go 1685: Creating new handle: k8s-pod-network.eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae Jul 2 09:09:25.610773 containerd[1436]: 2024-07-02 09:09:25.586 [INFO][4389] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae" host="localhost" Jul 2 09:09:25.610773 containerd[1436]: 2024-07-02 09:09:25.591 [INFO][4389] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae" host="localhost" Jul 2 09:09:25.610773 containerd[1436]: 2024-07-02 09:09:25.591 [INFO][4389] ipam.go 847: 
Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae" host="localhost" Jul 2 09:09:25.610773 containerd[1436]: 2024-07-02 09:09:25.591 [INFO][4389] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:09:25.610773 containerd[1436]: 2024-07-02 09:09:25.591 [INFO][4389] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae" HandleID="k8s-pod-network.eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae" Workload="localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0" Jul 2 09:09:25.611969 containerd[1436]: 2024-07-02 09:09:25.594 [INFO][4371] k8s.go 386: Populated endpoint ContainerID="eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae" Namespace="calico-system" Pod="calico-kube-controllers-84c6d665d6-qm2nc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0", GenerateName:"calico-kube-controllers-84c6d665d6-", Namespace:"calico-system", SelfLink:"", UID:"804476dd-f79b-4477-bf83-c65ba06e121f", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 9, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84c6d665d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-84c6d665d6-qm2nc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali92edd4b0108", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:09:25.611969 containerd[1436]: 2024-07-02 09:09:25.594 [INFO][4371] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae" Namespace="calico-system" Pod="calico-kube-controllers-84c6d665d6-qm2nc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0" Jul 2 09:09:25.611969 containerd[1436]: 2024-07-02 09:09:25.594 [INFO][4371] dataplane_linux.go 68: Setting the host side veth name to cali92edd4b0108 ContainerID="eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae" Namespace="calico-system" Pod="calico-kube-controllers-84c6d665d6-qm2nc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0" Jul 2 09:09:25.611969 containerd[1436]: 2024-07-02 09:09:25.597 [INFO][4371] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae" Namespace="calico-system" Pod="calico-kube-controllers-84c6d665d6-qm2nc" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0" Jul 2 09:09:25.611969 containerd[1436]: 2024-07-02 09:09:25.599 [INFO][4371] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae" Namespace="calico-system" Pod="calico-kube-controllers-84c6d665d6-qm2nc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0", GenerateName:"calico-kube-controllers-84c6d665d6-", Namespace:"calico-system", SelfLink:"", UID:"804476dd-f79b-4477-bf83-c65ba06e121f", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 9, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84c6d665d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae", Pod:"calico-kube-controllers-84c6d665d6-qm2nc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali92edd4b0108", MAC:"ba:91:42:98:e5:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:09:25.611969 containerd[1436]: 2024-07-02 09:09:25.607 [INFO][4371] k8s.go 500: Wrote updated endpoint to datastore ContainerID="eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae" Namespace="calico-system" Pod="calico-kube-controllers-84c6d665d6-qm2nc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0" Jul 2 09:09:25.635595 containerd[1436]: time="2024-07-02T09:09:25.633812211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 09:09:25.635595 containerd[1436]: time="2024-07-02T09:09:25.633910898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:09:25.635595 containerd[1436]: time="2024-07-02T09:09:25.633934859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 09:09:25.635595 containerd[1436]: time="2024-07-02T09:09:25.633951900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 09:09:25.656230 systemd-networkd[1381]: cali88f06a6d65f: Gained IPv6LL Jul 2 09:09:25.661344 systemd[1]: Started cri-containerd-eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae.scope - libcontainer container eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae. 
Jul 2 09:09:25.678384 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 09:09:25.704105 containerd[1436]: time="2024-07-02T09:09:25.704034487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84c6d665d6-qm2nc,Uid:804476dd-f79b-4477-bf83-c65ba06e121f,Namespace:calico-system,Attempt:1,} returns sandbox id \"eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae\"" Jul 2 09:09:25.712528 containerd[1436]: time="2024-07-02T09:09:25.712479734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jul 2 09:09:25.718387 containerd[1436]: time="2024-07-02T09:09:25.718350769Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.045015773s" Jul 2 09:09:25.718570 containerd[1436]: time="2024-07-02T09:09:25.718489098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jul 2 09:09:25.720526 containerd[1436]: time="2024-07-02T09:09:25.720459590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 09:09:25.722189 containerd[1436]: time="2024-07-02T09:09:25.722155664Z" level=info msg="CreateContainer within sandbox \"0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 09:09:25.728481 containerd[1436]: time="2024-07-02T09:09:25.728382642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:09:25.730360 containerd[1436]: time="2024-07-02T09:09:25.730320972Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:09:25.731821 containerd[1436]: time="2024-07-02T09:09:25.731770110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:09:25.748664 containerd[1436]: time="2024-07-02T09:09:25.748277338Z" level=info msg="CreateContainer within sandbox \"0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a9b040d3e1f1b7292d0b4c35e6fe9626e4a5c68dbcd460fdd8f7e34f68b1b08c\"" Jul 2 09:09:25.748809 containerd[1436]: time="2024-07-02T09:09:25.748766931Z" level=info msg="StartContainer for \"a9b040d3e1f1b7292d0b4c35e6fe9626e4a5c68dbcd460fdd8f7e34f68b1b08c\"" Jul 2 09:09:25.773187 systemd[1]: Started cri-containerd-a9b040d3e1f1b7292d0b4c35e6fe9626e4a5c68dbcd460fdd8f7e34f68b1b08c.scope - libcontainer container a9b040d3e1f1b7292d0b4c35e6fe9626e4a5c68dbcd460fdd8f7e34f68b1b08c. 
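[Annotation] The calico-csi container above follows the usual lifecycle: PullImage resolves the tag to a digest and image reference, CreateContainer materialises it inside the existing sandbox, and StartContainer hands it to the runtime, at which point systemd tracks it as a cri-containerd-<id>.scope unit. A hedged sketch of the same pull/create/start sequence against containerd's Go client follows; only the image name is taken from the log, the container/snapshot IDs are made up for illustration, and the CRI layer's sandbox wiring is omitted.

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Kubernetes-managed containers live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Pull: the image the log reports as "Pulled ... in 1.045015773s".
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/csi:v3.28.0", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create + start: roughly what CreateContainer/StartContainer do above.
	container, err := client.NewContainer(ctx, "calico-csi-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-csi-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}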
Jul 2 09:09:25.800932 containerd[1436]: time="2024-07-02T09:09:25.800880111Z" level=info msg="StartContainer for \"a9b040d3e1f1b7292d0b4c35e6fe9626e4a5c68dbcd460fdd8f7e34f68b1b08c\" returns successfully" Jul 2 09:09:26.546990 kubelet[2517]: E0702 09:09:26.546660 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:26.679254 systemd-networkd[1381]: cali92edd4b0108: Gained IPv6LL Jul 2 09:09:27.109242 containerd[1436]: time="2024-07-02T09:09:27.109166749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:09:27.109945 containerd[1436]: time="2024-07-02T09:09:27.109902077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jul 2 09:09:27.110672 containerd[1436]: time="2024-07-02T09:09:27.110598922Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:09:27.112628 containerd[1436]: time="2024-07-02T09:09:27.112572410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:09:27.113745 containerd[1436]: time="2024-07-02T09:09:27.113344980Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 1.392846668s" Jul 2 09:09:27.113745 containerd[1436]: time="2024-07-02T09:09:27.113382543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jul 2 09:09:27.115377 containerd[1436]: time="2024-07-02T09:09:27.114168034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 09:09:27.120141 containerd[1436]: time="2024-07-02T09:09:27.119968131Z" level=info msg="CreateContainer within sandbox \"eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 09:09:27.132537 containerd[1436]: time="2024-07-02T09:09:27.132499305Z" level=info msg="CreateContainer within sandbox \"eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"85ea1e451d7c78e84c19bbc310a48b104fa76812a14adf39b6e8ae15a2aec098\"" Jul 2 09:09:27.132960 containerd[1436]: time="2024-07-02T09:09:27.132928853Z" level=info msg="StartContainer for \"85ea1e451d7c78e84c19bbc310a48b104fa76812a14adf39b6e8ae15a2aec098\"" Jul 2 09:09:27.157211 systemd[1]: Started cri-containerd-85ea1e451d7c78e84c19bbc310a48b104fa76812a14adf39b6e8ae15a2aec098.scope - libcontainer container 85ea1e451d7c78e84c19bbc310a48b104fa76812a14adf39b6e8ae15a2aec098. 
Jul 2 09:09:27.198329 containerd[1436]: time="2024-07-02T09:09:27.198191216Z" level=info msg="StartContainer for \"85ea1e451d7c78e84c19bbc310a48b104fa76812a14adf39b6e8ae15a2aec098\" returns successfully" Jul 2 09:09:27.447422 systemd[1]: Started sshd@12-10.0.0.65:22-10.0.0.1:53178.service - OpenSSH per-connection server daemon (10.0.0.1:53178). Jul 2 09:09:27.508286 sshd[4533]: Accepted publickey for core from 10.0.0.1 port 53178 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:09:27.510329 sshd[4533]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:09:27.520354 systemd-logind[1421]: New session 13 of user core. Jul 2 09:09:27.526250 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 09:09:27.631211 kubelet[2517]: I0702 09:09:27.631123 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84c6d665d6-qm2nc" podStartSLOduration=26.222615479 podStartE2EDuration="27.631041512s" podCreationTimestamp="2024-07-02 09:09:00 +0000 UTC" firstStartedPulling="2024-07-02 09:09:25.705218167 +0000 UTC m=+48.384590144" lastFinishedPulling="2024-07-02 09:09:27.1136442 +0000 UTC m=+49.793016177" observedRunningTime="2024-07-02 09:09:27.572739682 +0000 UTC m=+50.252111659" watchObservedRunningTime="2024-07-02 09:09:27.631041512 +0000 UTC m=+50.310413489" Jul 2 09:09:27.697893 sshd[4533]: pam_unix(sshd:session): session closed for user core Jul 2 09:09:27.708952 systemd[1]: sshd@12-10.0.0.65:22-10.0.0.1:53178.service: Deactivated successfully. Jul 2 09:09:27.712572 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 09:09:27.713985 systemd-logind[1421]: Session 13 logged out. Waiting for processes to exit. Jul 2 09:09:27.719323 systemd[1]: Started sshd@13-10.0.0.65:22-10.0.0.1:53188.service - OpenSSH per-connection server daemon (10.0.0.1:53188). Jul 2 09:09:27.720852 systemd-logind[1421]: Removed session 13. Jul 2 09:09:27.753293 sshd[4572]: Accepted publickey for core from 10.0.0.1 port 53188 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:09:27.754750 sshd[4572]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:09:27.758759 systemd-logind[1421]: New session 14 of user core. Jul 2 09:09:27.765282 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 09:09:28.041698 sshd[4572]: pam_unix(sshd:session): session closed for user core Jul 2 09:09:28.055565 systemd[1]: sshd@13-10.0.0.65:22-10.0.0.1:53188.service: Deactivated successfully. Jul 2 09:09:28.057365 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 09:09:28.058993 systemd-logind[1421]: Session 14 logged out. Waiting for processes to exit. Jul 2 09:09:28.072369 systemd[1]: Started sshd@14-10.0.0.65:22-10.0.0.1:53198.service - OpenSSH per-connection server daemon (10.0.0.1:53198). Jul 2 09:09:28.073806 systemd-logind[1421]: Removed session 14. Jul 2 09:09:28.110595 sshd[4584]: Accepted publickey for core from 10.0.0.1 port 53198 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:09:28.111894 sshd[4584]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:09:28.117475 systemd-logind[1421]: New session 15 of user core. Jul 2 09:09:28.131232 systemd[1]: Started session-15.scope - Session 15 of User core. 
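[Annotation] The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (09:09:27.631041512 − 09:09:00 = 27.631041512s), and podStartSLOduration subtracts the image-pull window, lastFinishedPulling − firstStartedPulling = 09:09:27.1136442 − 09:09:25.705218167 = 1.408426033s, giving 27.631041512 − 1.408426033 = 26.222615479s, exactly the reported value. In other words, the SLO figure measures startup time excluding image pulling.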
Jul 2 09:09:28.341106 containerd[1436]: time="2024-07-02T09:09:28.340964766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:09:28.343329 containerd[1436]: time="2024-07-02T09:09:28.343288155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jul 2 09:09:28.344507 containerd[1436]: time="2024-07-02T09:09:28.344471110Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:09:28.347161 containerd[1436]: time="2024-07-02T09:09:28.347090038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 09:09:28.348073 containerd[1436]: time="2024-07-02T09:09:28.348011857Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.2338067s" Jul 2 09:09:28.348073 containerd[1436]: time="2024-07-02T09:09:28.348069661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jul 2 09:09:28.352228 containerd[1436]: time="2024-07-02T09:09:28.352187524Z" level=info msg="CreateContainer within sandbox \"0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 09:09:28.365867 containerd[1436]: time="2024-07-02T09:09:28.365808796Z" level=info msg="CreateContainer within sandbox \"0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"767684942aea0a4f5ffbaa103581c0c1e19bb3d08f77c2151809daa0319a5337\"" Jul 2 09:09:28.367155 containerd[1436]: time="2024-07-02T09:09:28.366329270Z" level=info msg="StartContainer for \"767684942aea0a4f5ffbaa103581c0c1e19bb3d08f77c2151809daa0319a5337\"" Jul 2 09:09:28.402215 systemd[1]: Started cri-containerd-767684942aea0a4f5ffbaa103581c0c1e19bb3d08f77c2151809daa0319a5337.scope - libcontainer container 767684942aea0a4f5ffbaa103581c0c1e19bb3d08f77c2151809daa0319a5337. 
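[Annotation] The csi-node-driver-registrar started here is the upstream kubernetes-csi sidecar: it proxies the driver's identity over a socket that kubelet's plugin watcher can find, which is why kubelet logs the csi.tigera.io registration a moment later (below). Illustratively, the sidecar is typically invoked with flags like the following; the kubelet-side registration path matches the log, while the in-container --csi-address value is an assumption about how Calico packages it.

node-driver-registrar \
  --csi-address=/csi/csi.sock \
  --kubelet-registration-path=/var/lib/kubelet/plugins/csi.tigera.io/csi.sock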
Jul 2 09:09:28.436491 containerd[1436]: time="2024-07-02T09:09:28.435235961Z" level=info msg="StartContainer for \"767684942aea0a4f5ffbaa103581c0c1e19bb3d08f77c2151809daa0319a5337\" returns successfully" Jul 2 09:09:28.484700 kubelet[2517]: I0702 09:09:28.484465 2517 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 09:09:28.484700 kubelet[2517]: I0702 09:09:28.484519 2517 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 09:09:29.551891 sshd[4584]: pam_unix(sshd:session): session closed for user core Jul 2 09:09:29.559835 systemd[1]: sshd@14-10.0.0.65:22-10.0.0.1:53198.service: Deactivated successfully. Jul 2 09:09:29.563376 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 09:09:29.567310 systemd-logind[1421]: Session 15 logged out. Waiting for processes to exit. Jul 2 09:09:29.574655 systemd[1]: Started sshd@15-10.0.0.65:22-10.0.0.1:53212.service - OpenSSH per-connection server daemon (10.0.0.1:53212). Jul 2 09:09:29.576518 systemd-logind[1421]: Removed session 15. Jul 2 09:09:29.623966 sshd[4645]: Accepted publickey for core from 10.0.0.1 port 53212 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:09:29.625357 sshd[4645]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:09:29.629283 systemd-logind[1421]: New session 16 of user core. Jul 2 09:09:29.639286 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 09:09:29.877973 sshd[4645]: pam_unix(sshd:session): session closed for user core Jul 2 09:09:29.886378 systemd[1]: sshd@15-10.0.0.65:22-10.0.0.1:53212.service: Deactivated successfully. Jul 2 09:09:29.889461 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 09:09:29.891117 systemd-logind[1421]: Session 16 logged out. Waiting for processes to exit. Jul 2 09:09:29.898511 systemd[1]: Started sshd@16-10.0.0.65:22-10.0.0.1:53216.service - OpenSSH per-connection server daemon (10.0.0.1:53216). Jul 2 09:09:29.899202 systemd-logind[1421]: Removed session 16. Jul 2 09:09:29.933870 sshd[4657]: Accepted publickey for core from 10.0.0.1 port 53216 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:09:29.935286 sshd[4657]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:09:29.940862 systemd-logind[1421]: New session 17 of user core. Jul 2 09:09:29.950265 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 09:09:30.070550 sshd[4657]: pam_unix(sshd:session): session closed for user core Jul 2 09:09:30.073037 systemd-logind[1421]: Session 17 logged out. Waiting for processes to exit. Jul 2 09:09:30.074739 systemd[1]: sshd@16-10.0.0.65:22-10.0.0.1:53216.service: Deactivated successfully. Jul 2 09:09:30.076804 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 09:09:30.078126 systemd-logind[1421]: Removed session 17. 
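[Annotation] The "Nameserver limits exceeded" kubelet warning that recurs through this log (and again just below) fires because kubelet caps the nameserver list it propagates to pods at three entries, the classic resolver limit; the applied line "1.1.1.1 1.0.0.1 8.8.8.8" therefore implies at least one further server was dropped. A hypothetical host /etc/resolv.conf that would produce exactly this message (the fourth entry is an illustration, not taken from this machine):

nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9   # hypothetical extra entry; omitted by kubelet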
Jul 2 09:09:33.132582 kubelet[2517]: E0702 09:09:33.132206 2517 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 09:09:33.148144 kubelet[2517]: I0702 09:09:33.146656 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-dvp65" podStartSLOduration=29.471291166 podStartE2EDuration="33.14661575s" podCreationTimestamp="2024-07-02 09:09:00 +0000 UTC" firstStartedPulling="2024-07-02 09:09:24.673032695 +0000 UTC m=+47.352404632" lastFinishedPulling="2024-07-02 09:09:28.348357239 +0000 UTC m=+51.027729216" observedRunningTime="2024-07-02 09:09:28.575514543 +0000 UTC m=+51.254886520" watchObservedRunningTime="2024-07-02 09:09:33.14661575 +0000 UTC m=+55.825987687" Jul 2 09:09:35.081848 systemd[1]: Started sshd@17-10.0.0.65:22-10.0.0.1:47228.service - OpenSSH per-connection server daemon (10.0.0.1:47228). Jul 2 09:09:35.142932 sshd[4699]: Accepted publickey for core from 10.0.0.1 port 47228 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:09:35.144369 sshd[4699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:09:35.150276 systemd-logind[1421]: New session 18 of user core. Jul 2 09:09:35.162652 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 09:09:35.305967 sshd[4699]: pam_unix(sshd:session): session closed for user core Jul 2 09:09:35.309602 systemd[1]: sshd@17-10.0.0.65:22-10.0.0.1:47228.service: Deactivated successfully. Jul 2 09:09:35.311297 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 09:09:35.312016 systemd-logind[1421]: Session 18 logged out. Waiting for processes to exit. Jul 2 09:09:35.312867 systemd-logind[1421]: Removed session 18. Jul 2 09:09:37.391879 containerd[1436]: time="2024-07-02T09:09:37.391834823Z" level=info msg="StopPodSandbox for \"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\"" Jul 2 09:09:37.497172 containerd[1436]: 2024-07-02 09:09:37.440 [WARNING][4744] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0", GenerateName:"calico-kube-controllers-84c6d665d6-", Namespace:"calico-system", SelfLink:"", UID:"804476dd-f79b-4477-bf83-c65ba06e121f", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 9, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84c6d665d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae", Pod:"calico-kube-controllers-84c6d665d6-qm2nc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali92edd4b0108", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:09:37.497172 containerd[1436]: 2024-07-02 09:09:37.440 [INFO][4744] k8s.go 608: Cleaning up netns ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Jul 2 09:09:37.497172 containerd[1436]: 2024-07-02 09:09:37.440 [INFO][4744] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" iface="eth0" netns="" Jul 2 09:09:37.497172 containerd[1436]: 2024-07-02 09:09:37.440 [INFO][4744] k8s.go 615: Releasing IP address(es) ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Jul 2 09:09:37.497172 containerd[1436]: 2024-07-02 09:09:37.440 [INFO][4744] utils.go 188: Calico CNI releasing IP address ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Jul 2 09:09:37.497172 containerd[1436]: 2024-07-02 09:09:37.467 [INFO][4753] ipam_plugin.go 411: Releasing address using handleID ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" HandleID="k8s-pod-network.4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Workload="localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0" Jul 2 09:09:37.497172 containerd[1436]: 2024-07-02 09:09:37.467 [INFO][4753] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:09:37.497172 containerd[1436]: 2024-07-02 09:09:37.467 [INFO][4753] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:09:37.497172 containerd[1436]: 2024-07-02 09:09:37.485 [WARNING][4753] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" HandleID="k8s-pod-network.4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Workload="localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0" Jul 2 09:09:37.497172 containerd[1436]: 2024-07-02 09:09:37.485 [INFO][4753] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" HandleID="k8s-pod-network.4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Workload="localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0" Jul 2 09:09:37.497172 containerd[1436]: 2024-07-02 09:09:37.490 [INFO][4753] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:09:37.497172 containerd[1436]: 2024-07-02 09:09:37.491 [INFO][4744] k8s.go 621: Teardown processing complete. ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Jul 2 09:09:37.497911 containerd[1436]: time="2024-07-02T09:09:37.497186087Z" level=info msg="TearDown network for sandbox \"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\" successfully" Jul 2 09:09:37.497911 containerd[1436]: time="2024-07-02T09:09:37.497210928Z" level=info msg="StopPodSandbox for \"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\" returns successfully" Jul 2 09:09:37.500017 containerd[1436]: time="2024-07-02T09:09:37.499431816Z" level=info msg="RemovePodSandbox for \"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\"" Jul 2 09:09:37.500017 containerd[1436]: time="2024-07-02T09:09:37.499467818Z" level=info msg="Forcibly stopping sandbox \"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\"" Jul 2 09:09:37.582917 containerd[1436]: 2024-07-02 09:09:37.545 [WARNING][4776] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0", GenerateName:"calico-kube-controllers-84c6d665d6-", Namespace:"calico-system", SelfLink:"", UID:"804476dd-f79b-4477-bf83-c65ba06e121f", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 9, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84c6d665d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb69a6ed9d36a59efad9d847bac54614722300a8a1aa0695fe35cb3a5531c1ae", Pod:"calico-kube-controllers-84c6d665d6-qm2nc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali92edd4b0108", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:09:37.582917 containerd[1436]: 2024-07-02 09:09:37.545 [INFO][4776] k8s.go 608: Cleaning up netns ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Jul 2 09:09:37.582917 containerd[1436]: 2024-07-02 09:09:37.545 [INFO][4776] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" iface="eth0" netns="" Jul 2 09:09:37.582917 containerd[1436]: 2024-07-02 09:09:37.545 [INFO][4776] k8s.go 615: Releasing IP address(es) ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Jul 2 09:09:37.582917 containerd[1436]: 2024-07-02 09:09:37.545 [INFO][4776] utils.go 188: Calico CNI releasing IP address ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Jul 2 09:09:37.582917 containerd[1436]: 2024-07-02 09:09:37.564 [INFO][4783] ipam_plugin.go 411: Releasing address using handleID ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" HandleID="k8s-pod-network.4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Workload="localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0" Jul 2 09:09:37.582917 containerd[1436]: 2024-07-02 09:09:37.564 [INFO][4783] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:09:37.582917 containerd[1436]: 2024-07-02 09:09:37.564 [INFO][4783] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:09:37.582917 containerd[1436]: 2024-07-02 09:09:37.573 [WARNING][4783] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" HandleID="k8s-pod-network.4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Workload="localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0" Jul 2 09:09:37.582917 containerd[1436]: 2024-07-02 09:09:37.573 [INFO][4783] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" HandleID="k8s-pod-network.4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Workload="localhost-k8s-calico--kube--controllers--84c6d665d6--qm2nc-eth0" Jul 2 09:09:37.582917 containerd[1436]: 2024-07-02 09:09:37.574 [INFO][4783] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:09:37.582917 containerd[1436]: 2024-07-02 09:09:37.578 [INFO][4776] k8s.go 621: Teardown processing complete. ContainerID="4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4" Jul 2 09:09:37.582917 containerd[1436]: time="2024-07-02T09:09:37.581906403Z" level=info msg="TearDown network for sandbox \"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\" successfully" Jul 2 09:09:37.597341 containerd[1436]: time="2024-07-02T09:09:37.597284808Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 09:09:37.597457 containerd[1436]: time="2024-07-02T09:09:37.597415575Z" level=info msg="RemovePodSandbox \"4f4f662003908d7e3dba5f0b46eb9126d34aaaaa37c13a4e3bc5f4a8230495b4\" returns successfully" Jul 2 09:09:37.597988 containerd[1436]: time="2024-07-02T09:09:37.597953046Z" level=info msg="StopPodSandbox for \"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\"" Jul 2 09:09:37.678909 containerd[1436]: 2024-07-02 09:09:37.638 [WARNING][4806] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--l4ngm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9d2a366f-a8bd-4418-b8b4-ae8fc8bcb2eb", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae", Pod:"coredns-76f75df574-l4ngm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f81e70ba90", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:09:37.678909 containerd[1436]: 2024-07-02 09:09:37.638 [INFO][4806] k8s.go 608: Cleaning up netns ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Jul 2 09:09:37.678909 containerd[1436]: 2024-07-02 09:09:37.638 [INFO][4806] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" iface="eth0" netns="" Jul 2 09:09:37.678909 containerd[1436]: 2024-07-02 09:09:37.638 [INFO][4806] k8s.go 615: Releasing IP address(es) ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Jul 2 09:09:37.678909 containerd[1436]: 2024-07-02 09:09:37.638 [INFO][4806] utils.go 188: Calico CNI releasing IP address ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Jul 2 09:09:37.678909 containerd[1436]: 2024-07-02 09:09:37.661 [INFO][4813] ipam_plugin.go 411: Releasing address using handleID ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" HandleID="k8s-pod-network.32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Workload="localhost-k8s-coredns--76f75df574--l4ngm-eth0" Jul 2 09:09:37.678909 containerd[1436]: 2024-07-02 09:09:37.661 [INFO][4813] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:09:37.678909 containerd[1436]: 2024-07-02 09:09:37.661 [INFO][4813] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:09:37.678909 containerd[1436]: 2024-07-02 09:09:37.672 [WARNING][4813] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" HandleID="k8s-pod-network.32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Workload="localhost-k8s-coredns--76f75df574--l4ngm-eth0" Jul 2 09:09:37.678909 containerd[1436]: 2024-07-02 09:09:37.672 [INFO][4813] ipam_plugin.go 439: Releasing address using workloadID ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" HandleID="k8s-pod-network.32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Workload="localhost-k8s-coredns--76f75df574--l4ngm-eth0" Jul 2 09:09:37.678909 containerd[1436]: 2024-07-02 09:09:37.674 [INFO][4813] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:09:37.678909 containerd[1436]: 2024-07-02 09:09:37.676 [INFO][4806] k8s.go 621: Teardown processing complete. ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Jul 2 09:09:37.678909 containerd[1436]: time="2024-07-02T09:09:37.678882264Z" level=info msg="TearDown network for sandbox \"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\" successfully" Jul 2 09:09:37.678909 containerd[1436]: time="2024-07-02T09:09:37.678907426Z" level=info msg="StopPodSandbox for \"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\" returns successfully" Jul 2 09:09:37.679471 containerd[1436]: time="2024-07-02T09:09:37.679343211Z" level=info msg="RemovePodSandbox for \"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\"" Jul 2 09:09:37.679471 containerd[1436]: time="2024-07-02T09:09:37.679373573Z" level=info msg="Forcibly stopping sandbox \"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\"" Jul 2 09:09:37.763384 containerd[1436]: 2024-07-02 09:09:37.719 [WARNING][4836] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--l4ngm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9d2a366f-a8bd-4418-b8b4-ae8fc8bcb2eb", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e63a29b8449ed789dd449544c21289e257887fae7eeca339fd6701cdc29c3dae", Pod:"coredns-76f75df574-l4ngm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f81e70ba90", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:09:37.763384 containerd[1436]: 2024-07-02 09:09:37.719 [INFO][4836] k8s.go 608: Cleaning up netns ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Jul 2 09:09:37.763384 containerd[1436]: 2024-07-02 09:09:37.719 [INFO][4836] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" iface="eth0" netns="" Jul 2 09:09:37.763384 containerd[1436]: 2024-07-02 09:09:37.719 [INFO][4836] k8s.go 615: Releasing IP address(es) ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Jul 2 09:09:37.763384 containerd[1436]: 2024-07-02 09:09:37.719 [INFO][4836] utils.go 188: Calico CNI releasing IP address ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Jul 2 09:09:37.763384 containerd[1436]: 2024-07-02 09:09:37.743 [INFO][4844] ipam_plugin.go 411: Releasing address using handleID ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" HandleID="k8s-pod-network.32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Workload="localhost-k8s-coredns--76f75df574--l4ngm-eth0" Jul 2 09:09:37.763384 containerd[1436]: 2024-07-02 09:09:37.743 [INFO][4844] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:09:37.763384 containerd[1436]: 2024-07-02 09:09:37.743 [INFO][4844] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:09:37.763384 containerd[1436]: 2024-07-02 09:09:37.754 [WARNING][4844] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" HandleID="k8s-pod-network.32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Workload="localhost-k8s-coredns--76f75df574--l4ngm-eth0" Jul 2 09:09:37.763384 containerd[1436]: 2024-07-02 09:09:37.754 [INFO][4844] ipam_plugin.go 439: Releasing address using workloadID ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" HandleID="k8s-pod-network.32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Workload="localhost-k8s-coredns--76f75df574--l4ngm-eth0" Jul 2 09:09:37.763384 containerd[1436]: 2024-07-02 09:09:37.759 [INFO][4844] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:09:37.763384 containerd[1436]: 2024-07-02 09:09:37.761 [INFO][4836] k8s.go 621: Teardown processing complete. ContainerID="32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503" Jul 2 09:09:37.763960 containerd[1436]: time="2024-07-02T09:09:37.763422370Z" level=info msg="TearDown network for sandbox \"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\" successfully" Jul 2 09:09:37.766813 containerd[1436]: time="2024-07-02T09:09:37.766774963Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 09:09:37.766870 containerd[1436]: time="2024-07-02T09:09:37.766839007Z" level=info msg="RemovePodSandbox \"32ca2e97cff4a476caa66340ea8ad1ea08e6ffb231e4a69434ddccfa46517503\" returns successfully" Jul 2 09:09:37.767371 containerd[1436]: time="2024-07-02T09:09:37.767342796Z" level=info msg="StopPodSandbox for \"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\"" Jul 2 09:09:37.856089 containerd[1436]: 2024-07-02 09:09:37.805 [WARNING][4867] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dvp65-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"31c1037b-708e-482e-8198-19d0b4cbcaf3", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 9, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c", Pod:"csi-node-driver-dvp65", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali88f06a6d65f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:09:37.856089 containerd[1436]: 2024-07-02 09:09:37.805 [INFO][4867] k8s.go 608: Cleaning up netns ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Jul 2 09:09:37.856089 containerd[1436]: 2024-07-02 09:09:37.805 [INFO][4867] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" iface="eth0" netns="" Jul 2 09:09:37.856089 containerd[1436]: 2024-07-02 09:09:37.805 [INFO][4867] k8s.go 615: Releasing IP address(es) ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Jul 2 09:09:37.856089 containerd[1436]: 2024-07-02 09:09:37.805 [INFO][4867] utils.go 188: Calico CNI releasing IP address ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Jul 2 09:09:37.856089 containerd[1436]: 2024-07-02 09:09:37.835 [INFO][4874] ipam_plugin.go 411: Releasing address using handleID ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" HandleID="k8s-pod-network.2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Workload="localhost-k8s-csi--node--driver--dvp65-eth0" Jul 2 09:09:37.856089 containerd[1436]: 2024-07-02 09:09:37.835 [INFO][4874] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:09:37.856089 containerd[1436]: 2024-07-02 09:09:37.835 [INFO][4874] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:09:37.856089 containerd[1436]: 2024-07-02 09:09:37.844 [WARNING][4874] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" HandleID="k8s-pod-network.2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Workload="localhost-k8s-csi--node--driver--dvp65-eth0" Jul 2 09:09:37.856089 containerd[1436]: 2024-07-02 09:09:37.844 [INFO][4874] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" HandleID="k8s-pod-network.2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Workload="localhost-k8s-csi--node--driver--dvp65-eth0" Jul 2 09:09:37.856089 containerd[1436]: 2024-07-02 09:09:37.845 [INFO][4874] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:09:37.856089 containerd[1436]: 2024-07-02 09:09:37.852 [INFO][4867] k8s.go 621: Teardown processing complete. ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Jul 2 09:09:37.856491 containerd[1436]: time="2024-07-02T09:09:37.856162068Z" level=info msg="TearDown network for sandbox \"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\" successfully" Jul 2 09:09:37.856491 containerd[1436]: time="2024-07-02T09:09:37.856189189Z" level=info msg="StopPodSandbox for \"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\" returns successfully" Jul 2 09:09:37.856758 containerd[1436]: time="2024-07-02T09:09:37.856708619Z" level=info msg="RemovePodSandbox for \"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\"" Jul 2 09:09:37.856793 containerd[1436]: time="2024-07-02T09:09:37.856755782Z" level=info msg="Forcibly stopping sandbox \"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\"" Jul 2 09:09:37.934917 containerd[1436]: 2024-07-02 09:09:37.896 [WARNING][4898] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dvp65-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"31c1037b-708e-482e-8198-19d0b4cbcaf3", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 9, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0f33e36be05fe553863f5bf8cd47a105ee16cb7d0af4e879d099fa0d1436410c", Pod:"csi-node-driver-dvp65", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali88f06a6d65f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:09:37.934917 containerd[1436]: 2024-07-02 09:09:37.896 [INFO][4898] k8s.go 608: Cleaning up netns ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Jul 2 09:09:37.934917 containerd[1436]: 2024-07-02 09:09:37.896 [INFO][4898] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" iface="eth0" netns="" Jul 2 09:09:37.934917 containerd[1436]: 2024-07-02 09:09:37.896 [INFO][4898] k8s.go 615: Releasing IP address(es) ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Jul 2 09:09:37.934917 containerd[1436]: 2024-07-02 09:09:37.896 [INFO][4898] utils.go 188: Calico CNI releasing IP address ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Jul 2 09:09:37.934917 containerd[1436]: 2024-07-02 09:09:37.917 [INFO][4906] ipam_plugin.go 411: Releasing address using handleID ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" HandleID="k8s-pod-network.2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Workload="localhost-k8s-csi--node--driver--dvp65-eth0" Jul 2 09:09:37.934917 containerd[1436]: 2024-07-02 09:09:37.917 [INFO][4906] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:09:37.934917 containerd[1436]: 2024-07-02 09:09:37.917 [INFO][4906] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:09:37.934917 containerd[1436]: 2024-07-02 09:09:37.925 [WARNING][4906] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" HandleID="k8s-pod-network.2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Workload="localhost-k8s-csi--node--driver--dvp65-eth0" Jul 2 09:09:37.934917 containerd[1436]: 2024-07-02 09:09:37.925 [INFO][4906] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" HandleID="k8s-pod-network.2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Workload="localhost-k8s-csi--node--driver--dvp65-eth0" Jul 2 09:09:37.934917 containerd[1436]: 2024-07-02 09:09:37.926 [INFO][4906] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:09:37.934917 containerd[1436]: 2024-07-02 09:09:37.931 [INFO][4898] k8s.go 621: Teardown processing complete. ContainerID="2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d" Jul 2 09:09:37.934917 containerd[1436]: time="2024-07-02T09:09:37.934874678Z" level=info msg="TearDown network for sandbox \"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\" successfully" Jul 2 09:09:37.938967 containerd[1436]: time="2024-07-02T09:09:37.938917671Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 09:09:37.939090 containerd[1436]: time="2024-07-02T09:09:37.938988475Z" level=info msg="RemovePodSandbox \"2eaafb3e0f59e0f1254103ac2f6f81f3c9658537f59892542addf75bfa6e9b1d\" returns successfully" Jul 2 09:09:37.939513 containerd[1436]: time="2024-07-02T09:09:37.939483223Z" level=info msg="StopPodSandbox for \"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\"" Jul 2 09:09:38.026315 containerd[1436]: 2024-07-02 09:09:37.983 [WARNING][4927] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--zr8hg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"86a33c17-cda9-4b34-99d8-e954031b3f4d", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01", Pod:"coredns-76f75df574-zr8hg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08067ba963f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:09:38.026315 containerd[1436]: 2024-07-02 09:09:37.984 [INFO][4927] k8s.go 608: Cleaning up netns ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Jul 2 09:09:38.026315 containerd[1436]: 2024-07-02 09:09:37.984 [INFO][4927] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" iface="eth0" netns="" Jul 2 09:09:38.026315 containerd[1436]: 2024-07-02 09:09:37.984 [INFO][4927] k8s.go 615: Releasing IP address(es) ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Jul 2 09:09:38.026315 containerd[1436]: 2024-07-02 09:09:37.984 [INFO][4927] utils.go 188: Calico CNI releasing IP address ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Jul 2 09:09:38.026315 containerd[1436]: 2024-07-02 09:09:38.012 [INFO][4934] ipam_plugin.go 411: Releasing address using handleID ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" HandleID="k8s-pod-network.0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Workload="localhost-k8s-coredns--76f75df574--zr8hg-eth0" Jul 2 09:09:38.026315 containerd[1436]: 2024-07-02 09:09:38.013 [INFO][4934] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:09:38.026315 containerd[1436]: 2024-07-02 09:09:38.013 [INFO][4934] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:09:38.026315 containerd[1436]: 2024-07-02 09:09:38.021 [WARNING][4934] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" HandleID="k8s-pod-network.0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Workload="localhost-k8s-coredns--76f75df574--zr8hg-eth0" Jul 2 09:09:38.026315 containerd[1436]: 2024-07-02 09:09:38.021 [INFO][4934] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" HandleID="k8s-pod-network.0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Workload="localhost-k8s-coredns--76f75df574--zr8hg-eth0" Jul 2 09:09:38.026315 containerd[1436]: 2024-07-02 09:09:38.023 [INFO][4934] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:09:38.026315 containerd[1436]: 2024-07-02 09:09:38.024 [INFO][4927] k8s.go 621: Teardown processing complete. ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Jul 2 09:09:38.026710 containerd[1436]: time="2024-07-02T09:09:38.026355130Z" level=info msg="TearDown network for sandbox \"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\" successfully" Jul 2 09:09:38.026710 containerd[1436]: time="2024-07-02T09:09:38.026380371Z" level=info msg="StopPodSandbox for \"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\" returns successfully" Jul 2 09:09:38.027411 containerd[1436]: time="2024-07-02T09:09:38.027374108Z" level=info msg="RemovePodSandbox for \"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\"" Jul 2 09:09:38.027485 containerd[1436]: time="2024-07-02T09:09:38.027416831Z" level=info msg="Forcibly stopping sandbox \"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\"" Jul 2 09:09:38.102999 containerd[1436]: 2024-07-02 09:09:38.063 [WARNING][4956] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--zr8hg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"86a33c17-cda9-4b34-99d8-e954031b3f4d", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79c7c1ad1268cb5c66910acdc2dab0524cf6824cebb7c15235ca7eb58c9c9e01", Pod:"coredns-76f75df574-zr8hg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08067ba963f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 09:09:38.102999 containerd[1436]: 2024-07-02 09:09:38.063 [INFO][4956] k8s.go 608: Cleaning up netns ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Jul 2 09:09:38.102999 containerd[1436]: 2024-07-02 09:09:38.063 [INFO][4956] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" iface="eth0" netns="" Jul 2 09:09:38.102999 containerd[1436]: 2024-07-02 09:09:38.063 [INFO][4956] k8s.go 615: Releasing IP address(es) ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Jul 2 09:09:38.102999 containerd[1436]: 2024-07-02 09:09:38.063 [INFO][4956] utils.go 188: Calico CNI releasing IP address ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Jul 2 09:09:38.102999 containerd[1436]: 2024-07-02 09:09:38.086 [INFO][4963] ipam_plugin.go 411: Releasing address using handleID ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" HandleID="k8s-pod-network.0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Workload="localhost-k8s-coredns--76f75df574--zr8hg-eth0" Jul 2 09:09:38.102999 containerd[1436]: 2024-07-02 09:09:38.086 [INFO][4963] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 09:09:38.102999 containerd[1436]: 2024-07-02 09:09:38.086 [INFO][4963] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 09:09:38.102999 containerd[1436]: 2024-07-02 09:09:38.097 [WARNING][4963] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" HandleID="k8s-pod-network.0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Workload="localhost-k8s-coredns--76f75df574--zr8hg-eth0" Jul 2 09:09:38.102999 containerd[1436]: 2024-07-02 09:09:38.097 [INFO][4963] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" HandleID="k8s-pod-network.0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Workload="localhost-k8s-coredns--76f75df574--zr8hg-eth0" Jul 2 09:09:38.102999 containerd[1436]: 2024-07-02 09:09:38.099 [INFO][4963] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 09:09:38.102999 containerd[1436]: 2024-07-02 09:09:38.100 [INFO][4956] k8s.go 621: Teardown processing complete. ContainerID="0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3" Jul 2 09:09:38.103408 containerd[1436]: time="2024-07-02T09:09:38.103037624Z" level=info msg="TearDown network for sandbox \"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\" successfully" Jul 2 09:09:38.105884 containerd[1436]: time="2024-07-02T09:09:38.105852825Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 09:09:38.105981 containerd[1436]: time="2024-07-02T09:09:38.105909468Z" level=info msg="RemovePodSandbox \"0c8446624b711b5624d511dc16b408c306b23d422d0df979d3b1ce75bb3a05c3\" returns successfully" Jul 2 09:09:38.946375 kubelet[2517]: I0702 09:09:38.946325 2517 topology_manager.go:215] "Topology Admit Handler" podUID="d20cd0a6-b082-4092-8d0f-ecf8b9df59fc" podNamespace="calico-apiserver" podName="calico-apiserver-c7479c9d9-trbvg" Jul 2 09:09:38.954183 systemd[1]: Created slice kubepods-besteffort-podd20cd0a6_b082_4092_8d0f_ecf8b9df59fc.slice - libcontainer container kubepods-besteffort-podd20cd0a6_b082_4092_8d0f_ecf8b9df59fc.slice. Jul 2 09:09:39.121491 kubelet[2517]: I0702 09:09:39.121450 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d20cd0a6-b082-4092-8d0f-ecf8b9df59fc-calico-apiserver-certs\") pod \"calico-apiserver-c7479c9d9-trbvg\" (UID: \"d20cd0a6-b082-4092-8d0f-ecf8b9df59fc\") " pod="calico-apiserver/calico-apiserver-c7479c9d9-trbvg" Jul 2 09:09:39.121491 kubelet[2517]: I0702 09:09:39.121502 2517 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lggn8\" (UniqueName: \"kubernetes.io/projected/d20cd0a6-b082-4092-8d0f-ecf8b9df59fc-kube-api-access-lggn8\") pod \"calico-apiserver-c7479c9d9-trbvg\" (UID: \"d20cd0a6-b082-4092-8d0f-ecf8b9df59fc\") " pod="calico-apiserver/calico-apiserver-c7479c9d9-trbvg" Jul 2 09:09:39.223901 kubelet[2517]: E0702 09:09:39.222748 2517 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 09:09:39.223901 kubelet[2517]: E0702 09:09:39.222834 2517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d20cd0a6-b082-4092-8d0f-ecf8b9df59fc-calico-apiserver-certs podName:d20cd0a6-b082-4092-8d0f-ecf8b9df59fc nodeName:}" failed. No retries permitted until 2024-07-02 09:09:39.722813311 +0000 UTC m=+62.402185248 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/d20cd0a6-b082-4092-8d0f-ecf8b9df59fc-calico-apiserver-certs") pod "calico-apiserver-c7479c9d9-trbvg" (UID: "d20cd0a6-b082-4092-8d0f-ecf8b9df59fc") : secret "calico-apiserver-certs" not found Jul 2 09:09:39.726186 kubelet[2517]: E0702 09:09:39.726136 2517 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 09:09:39.726328 kubelet[2517]: E0702 09:09:39.726213 2517 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d20cd0a6-b082-4092-8d0f-ecf8b9df59fc-calico-apiserver-certs podName:d20cd0a6-b082-4092-8d0f-ecf8b9df59fc nodeName:}" failed. No retries permitted until 2024-07-02 09:09:40.726197422 +0000 UTC m=+63.405569399 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/d20cd0a6-b082-4092-8d0f-ecf8b9df59fc-calico-apiserver-certs") pod "calico-apiserver-c7479c9d9-trbvg" (UID: "d20cd0a6-b082-4092-8d0f-ecf8b9df59fc") : secret "calico-apiserver-certs" not found Jul 2 09:09:40.317040 systemd[1]: Started sshd@18-10.0.0.65:22-10.0.0.1:42514.service - OpenSSH per-connection server daemon (10.0.0.1:42514). Jul 2 09:09:40.365223 sshd[4995]: Accepted publickey for core from 10.0.0.1 port 42514 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k Jul 2 09:09:40.366653 sshd[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 09:09:40.372437 systemd-logind[1421]: New session 19 of user core. Jul 2 09:09:40.381216 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 09:09:40.498414 sshd[4995]: pam_unix(sshd:session): session closed for user core Jul 2 09:09:40.502907 systemd[1]: sshd@18-10.0.0.65:22-10.0.0.1:42514.service: Deactivated successfully. Jul 2 09:09:40.506829 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 09:09:40.507497 systemd-logind[1421]: Session 19 logged out. Waiting for processes to exit. Jul 2 09:09:40.508436 systemd-logind[1421]: Removed session 19. 
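An aside on the retry cadence visible in the two nestedpendingoperations errors above: kubelet's volume manager backs off per operation, doubling the wait after each failure (500ms, then 1s) until the referenced secret appears. A minimal sketch of that doubling-with-cap pattern, using illustrative names and constants rather than kubelet's actual internals:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff retries op, doubling the delay after each failure and
// capping it -- the same shape as the 500ms -> 1s progression logged above.
// Function and parameter names here are illustrative, not kubelet's code.
func retryWithBackoff(op func() error, initial, cap time.Duration, attempts int) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := op(); err == nil {
			return nil
		}
		fmt.Printf("failed; no retries permitted for %v\n", delay)
		time.Sleep(delay)
		if delay *= 2; delay > cap {
			delay = cap
		}
	}
	return errors.New("still failing after all attempts")
}

func main() {
	// Stand-in for the failing mount: the secret does not exist yet.
	mount := func() error { return errors.New(`secret "calico-apiserver-certs" not found`) }
	_ = retryWithBackoff(mount, 500*time.Millisecond, 2*time.Minute, 3)
}
```

Once the secret exists in the calico-apiserver namespace, the next retry succeeds, which presumably is what allows the sandbox to be created at 09:09:40 below.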
Jul 2 09:09:40.759389 containerd[1436]: time="2024-07-02T09:09:40.759348255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c7479c9d9-trbvg,Uid:d20cd0a6-b082-4092-8d0f-ecf8b9df59fc,Namespace:calico-apiserver,Attempt:0,}"
Jul 2 09:09:40.888861 systemd-networkd[1381]: calid880339b5d5: Link UP
Jul 2 09:09:40.890186 systemd-networkd[1381]: calid880339b5d5: Gained carrier
Jul 2 09:09:40.902983 containerd[1436]: 2024-07-02 09:09:40.802 [INFO][5010] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c7479c9d9--trbvg-eth0 calico-apiserver-c7479c9d9- calico-apiserver d20cd0a6-b082-4092-8d0f-ecf8b9df59fc 1057 0 2024-07-02 09:09:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c7479c9d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c7479c9d9-trbvg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid880339b5d5 [] []}} ContainerID="d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f" Namespace="calico-apiserver" Pod="calico-apiserver-c7479c9d9-trbvg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7479c9d9--trbvg-"
Jul 2 09:09:40.902983 containerd[1436]: 2024-07-02 09:09:40.803 [INFO][5010] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f" Namespace="calico-apiserver" Pod="calico-apiserver-c7479c9d9-trbvg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7479c9d9--trbvg-eth0"
Jul 2 09:09:40.902983 containerd[1436]: 2024-07-02 09:09:40.835 [INFO][5023] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f" HandleID="k8s-pod-network.d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f" Workload="localhost-k8s-calico--apiserver--c7479c9d9--trbvg-eth0"
Jul 2 09:09:40.902983 containerd[1436]: 2024-07-02 09:09:40.846 [INFO][5023] ipam_plugin.go 264: Auto assigning IP ContainerID="d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f" HandleID="k8s-pod-network.d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f" Workload="localhost-k8s-calico--apiserver--c7479c9d9--trbvg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031e2a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c7479c9d9-trbvg", "timestamp":"2024-07-02 09:09:40.835472867 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 2 09:09:40.902983 containerd[1436]: 2024-07-02 09:09:40.846 [INFO][5023] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Jul 2 09:09:40.902983 containerd[1436]: 2024-07-02 09:09:40.846 [INFO][5023] ipam_plugin.go 367: Acquired host-wide IPAM lock.
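The assignArgs dump above is the whole IPAM request: one IPv4 address, keyed by a handle derived from the sandbox ID, with pod, namespace, and node attributes for bookkeeping. A sketch mirroring just the fields visible in the dump (a local illustration, not the libcalico-go types):

```go
package main

import "fmt"

// AutoAssignArgs mirrors the fields visible in the ipam_plugin.go 264 dump
// above; this is a local stand-in for illustration, not the libcalico-go
// definition.
type AutoAssignArgs struct {
	Num4, Num6  int
	HandleID    string
	Attrs       map[string]string
	Hostname    string
	IntendedUse string
}

func main() {
	// The CNI plugin asks IPAM for exactly one IPv4 address. The handle is
	// derived from the sandbox ID so the address can later be released by
	// handle (ipam_plugin.go 411) or by workload ID (ipam_plugin.go 439),
	// as the teardown logs earlier in this section show.
	args := AutoAssignArgs{
		Num4:     1,
		HandleID: "k8s-pod-network.d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f",
		Attrs: map[string]string{
			"namespace": "calico-apiserver",
			"node":      "localhost",
			"pod":       "calico-apiserver-c7479c9d9-trbvg",
		},
		Hostname:    "localhost",
		IntendedUse: "Workload",
	}
	fmt.Printf("requesting %d IPv4 address(es) for %s\n", args.Num4, args.Attrs["pod"])
}
```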
Jul 2 09:09:40.902983 containerd[1436]: 2024-07-02 09:09:40.846 [INFO][5023] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 2 09:09:40.902983 containerd[1436]: 2024-07-02 09:09:40.848 [INFO][5023] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f" host="localhost"
Jul 2 09:09:40.902983 containerd[1436]: 2024-07-02 09:09:40.855 [INFO][5023] ipam.go 372: Looking up existing affinities for host host="localhost"
Jul 2 09:09:40.902983 containerd[1436]: 2024-07-02 09:09:40.861 [INFO][5023] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 2 09:09:40.902983 containerd[1436]: 2024-07-02 09:09:40.863 [INFO][5023] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 2 09:09:40.902983 containerd[1436]: 2024-07-02 09:09:40.866 [INFO][5023] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 2 09:09:40.902983 containerd[1436]: 2024-07-02 09:09:40.866 [INFO][5023] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f" host="localhost"
Jul 2 09:09:40.902983 containerd[1436]: 2024-07-02 09:09:40.868 [INFO][5023] ipam.go 1685: Creating new handle: k8s-pod-network.d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f
Jul 2 09:09:40.902983 containerd[1436]: 2024-07-02 09:09:40.876 [INFO][5023] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f" host="localhost"
Jul 2 09:09:40.902983 containerd[1436]: 2024-07-02 09:09:40.881 [INFO][5023] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f" host="localhost"
Jul 2 09:09:40.902983 containerd[1436]: 2024-07-02 09:09:40.881 [INFO][5023] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f" host="localhost"
Jul 2 09:09:40.902983 containerd[1436]: 2024-07-02 09:09:40.881 [INFO][5023] ipam_plugin.go 373: Released host-wide IPAM lock.
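The allocation walk above is: find this host's affine block (192.168.88.128/26), load it, claim the next free ordinal, and write the block back while holding the host-wide lock; the claimed address is 192.168.88.133. The address arithmetic reduces to base-plus-offset within the block, as in this sketch (the free-ordinal bookkeeping that Calico keeps per block is elided):

```go
package main

import (
	"fmt"
	"net"
)

// nthIPInBlock returns base+offset within a CIDR block -- the arithmetic
// behind claiming 192.168.88.133 from the affine block 192.168.88.128/26.
// A sketch only; the real allocator also tracks which ordinals are free.
func nthIPInBlock(block string, offset int) (net.IP, error) {
	_, cidr, err := net.ParseCIDR(block)
	if err != nil {
		return nil, err
	}
	ip := cidr.IP.To4()
	v := uint32(ip[0])<<24 | uint32(ip[1])<<16 | uint32(ip[2])<<8 | uint32(ip[3])
	v += uint32(offset)
	out := net.IPv4(byte(v>>24), byte(v>>16), byte(v>>8), byte(v))
	if !cidr.Contains(out) {
		return nil, fmt.Errorf("offset %d falls outside %s", offset, block)
	}
	return out, nil
}

func main() {
	ip, _ := nthIPInBlock("192.168.88.128/26", 5) // ordinal 5 within the block
	fmt.Println(ip)                               // 192.168.88.133
}
```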
Jul 2 09:09:40.902983 containerd[1436]: 2024-07-02 09:09:40.881 [INFO][5023] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f" HandleID="k8s-pod-network.d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f" Workload="localhost-k8s-calico--apiserver--c7479c9d9--trbvg-eth0"
Jul 2 09:09:40.904498 containerd[1436]: 2024-07-02 09:09:40.884 [INFO][5010] k8s.go 386: Populated endpoint ContainerID="d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f" Namespace="calico-apiserver" Pod="calico-apiserver-c7479c9d9-trbvg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7479c9d9--trbvg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c7479c9d9--trbvg-eth0", GenerateName:"calico-apiserver-c7479c9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d20cd0a6-b082-4092-8d0f-ecf8b9df59fc", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 9, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c7479c9d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c7479c9d9-trbvg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid880339b5d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 09:09:40.904498 containerd[1436]: 2024-07-02 09:09:40.884 [INFO][5010] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f" Namespace="calico-apiserver" Pod="calico-apiserver-c7479c9d9-trbvg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7479c9d9--trbvg-eth0"
Jul 2 09:09:40.904498 containerd[1436]: 2024-07-02 09:09:40.884 [INFO][5010] dataplane_linux.go 68: Setting the host side veth name to calid880339b5d5 ContainerID="d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f" Namespace="calico-apiserver" Pod="calico-apiserver-c7479c9d9-trbvg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7479c9d9--trbvg-eth0"
Jul 2 09:09:40.904498 containerd[1436]: 2024-07-02 09:09:40.889 [INFO][5010] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f" Namespace="calico-apiserver" Pod="calico-apiserver-c7479c9d9-trbvg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7479c9d9--trbvg-eth0"
Jul 2 09:09:40.904498 containerd[1436]: 2024-07-02 09:09:40.889 [INFO][5010] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f" Namespace="calico-apiserver" Pod="calico-apiserver-c7479c9d9-trbvg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7479c9d9--trbvg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c7479c9d9--trbvg-eth0", GenerateName:"calico-apiserver-c7479c9d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d20cd0a6-b082-4092-8d0f-ecf8b9df59fc", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 9, 9, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c7479c9d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f", Pod:"calico-apiserver-c7479c9d9-trbvg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid880339b5d5", MAC:"76:ab:a8:43:0a:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jul 2 09:09:40.904498 containerd[1436]: 2024-07-02 09:09:40.899 [INFO][5010] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f" Namespace="calico-apiserver" Pod="calico-apiserver-c7479c9d9-trbvg" WorkloadEndpoint="localhost-k8s-calico--apiserver--c7479c9d9--trbvg-eth0"
Jul 2 09:09:40.925149 containerd[1436]: time="2024-07-02T09:09:40.924679631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 09:09:40.925149 containerd[1436]: time="2024-07-02T09:09:40.925072047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:09:40.925149 containerd[1436]: time="2024-07-02T09:09:40.925089328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 09:09:40.925149 containerd[1436]: time="2024-07-02T09:09:40.925099088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 09:09:40.953234 systemd[1]: Started cri-containerd-d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f.scope - libcontainer container d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f.
Jul 2 09:09:40.963884 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 2 09:09:40.979709 containerd[1436]: time="2024-07-02T09:09:40.979664151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c7479c9d9-trbvg,Uid:d20cd0a6-b082-4092-8d0f-ecf8b9df59fc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f\""
Jul 2 09:09:40.982555 containerd[1436]: time="2024-07-02T09:09:40.981036168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Jul 2 09:09:41.738033 systemd[1]: run-containerd-runc-k8s.io-d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f-runc.fUX70q.mount: Deactivated successfully.
Jul 2 09:09:42.447908 containerd[1436]: time="2024-07-02T09:09:42.447358964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:09:42.448804 containerd[1436]: time="2024-07-02T09:09:42.448767702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527"
Jul 2 09:09:42.457478 containerd[1436]: time="2024-07-02T09:09:42.457417755Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:09:42.470094 containerd[1436]: time="2024-07-02T09:09:42.470007483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 09:09:42.470790 containerd[1436]: time="2024-07-02T09:09:42.470611359Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 1.489504629s"
Jul 2 09:09:42.470790 containerd[1436]: time="2024-07-02T09:09:42.470644917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\""
Jul 2 09:09:42.473421 containerd[1436]: time="2024-07-02T09:09:42.473387238Z" level=info msg="CreateContainer within sandbox \"d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 2 09:09:42.482675 containerd[1436]: time="2024-07-02T09:09:42.482588131Z" level=info msg="CreateContainer within sandbox \"d4be599f3fe9ffa6556b4eece2a0d3400271c8c412308fedbb30b8cfcde6b95f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b9a0ee20cb844c4f41a8ec572d6957507ef16ba97c19a4d938270589e955a519\""
Jul 2 09:09:42.484081 containerd[1436]: time="2024-07-02T09:09:42.483535863Z" level=info msg="StartContainer for \"b9a0ee20cb844c4f41a8ec572d6957507ef16ba97c19a4d938270589e955a519\""
Jul 2 09:09:42.515376 systemd[1]: Started cri-containerd-b9a0ee20cb844c4f41a8ec572d6957507ef16ba97c19a4d938270589e955a519.scope - libcontainer container b9a0ee20cb844c4f41a8ec572d6957507ef16ba97c19a4d938270589e955a519.
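containerd reports the pull itself took 1.489504629s. That figure is measured internally, but it can be roughly cross-checked from the log timestamps that bracket the pull; the bracketing messages record emission times rather than the transfer itself, so a small drift from the reported figure is expected:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the PullImage and Pulled log entries above.
	started, _ := time.Parse(time.RFC3339Nano, "2024-07-02T09:09:40.981036168Z")
	pulled, _ := time.Parse(time.RFC3339Nano, "2024-07-02T09:09:42.470611359Z")
	// ~1.489575191s, within a fraction of a millisecond of the reported
	// 1.489504629s pull duration.
	fmt.Println(pulled.Sub(started))
}
```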
Jul 2 09:09:42.553148 systemd-networkd[1381]: calid880339b5d5: Gained IPv6LL
Jul 2 09:09:42.608515 containerd[1436]: time="2024-07-02T09:09:42.608037961Z" level=info msg="StartContainer for \"b9a0ee20cb844c4f41a8ec572d6957507ef16ba97c19a4d938270589e955a519\" returns successfully"
Jul 2 09:09:42.627796 kubelet[2517]: I0702 09:09:42.627752 2517 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c7479c9d9-trbvg" podStartSLOduration=3.137695111 podStartE2EDuration="4.627648299s" podCreationTimestamp="2024-07-02 09:09:38 +0000 UTC" firstStartedPulling="2024-07-02 09:09:40.980822839 +0000 UTC m=+63.660194816" lastFinishedPulling="2024-07-02 09:09:42.470776067 +0000 UTC m=+65.150148004" observedRunningTime="2024-07-02 09:09:42.624615879 +0000 UTC m=+65.303987856" watchObservedRunningTime="2024-07-02 09:09:42.627648299 +0000 UTC m=+65.307020276"
Jul 2 09:09:45.512462 systemd[1]: Started sshd@19-10.0.0.65:22-10.0.0.1:42524.service - OpenSSH per-connection server daemon (10.0.0.1:42524).
Jul 2 09:09:45.566623 sshd[5142]: Accepted publickey for core from 10.0.0.1 port 42524 ssh2: RSA SHA256:ITYl1npsZrK81BYOVTGvToao81d3ICeCJ/pBQYbYY/k
Jul 2 09:09:45.568192 sshd[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 09:09:45.575888 systemd-logind[1421]: New session 20 of user core.
Jul 2 09:09:45.592257 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 09:09:45.738729 sshd[5142]: pam_unix(sshd:session): session closed for user core
Jul 2 09:09:45.743103 systemd[1]: sshd@19-10.0.0.65:22-10.0.0.1:42524.service: Deactivated successfully.
Jul 2 09:09:45.744950 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 09:09:45.746237 systemd-logind[1421]: Session 20 logged out. Waiting for processes to exit.
Jul 2 09:09:45.747363 systemd-logind[1421]: Removed session 20.
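The pod_startup_latency_tracker entry above decomposes cleanly: podStartE2EDuration (4.627648299s) is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (3.137695111) is that figure minus the image-pull window, lastFinishedPulling - firstStartedPulling taken on the monotonic m=+ clock. The arithmetic checks out exactly:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the logged fields above.
	// E2E: watchObservedRunningTime (09:09:42.627648299) - podCreationTimestamp (09:09:38).
	e2e := 4627648299 * time.Nanosecond
	// Pull window on the monotonic clock: m=+65.150148004 - m=+63.660194816.
	pull := (65150148004 - 63660194816) * time.Nanosecond
	// Prints 3.137695111s, matching podStartSLOduration.
	fmt.Println(e2e - pull)
}
```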