Sep 4 17:06:05.924439 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 4 17:06:05.924465 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Wed Sep 4 15:52:28 -00 2024
Sep 4 17:06:05.924475 kernel: KASLR enabled
Sep 4 17:06:05.924481 kernel: efi: EFI v2.7 by EDK II
Sep 4 17:06:05.924487 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb900018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 4 17:06:05.924492 kernel: random: crng init done
Sep 4 17:06:05.924499 kernel: ACPI: Early table checksum verification disabled
Sep 4 17:06:05.924505 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 4 17:06:05.924512 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 4 17:06:05.924519 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:06:05.924525 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:06:05.924531 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:06:05.924537 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:06:05.924543 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:06:05.924550 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:06:05.924559 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:06:05.924566 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:06:05.924572 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:06:05.924579 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 4 17:06:05.924585 kernel: NUMA: Failed to initialise from firmware
Sep 4 17:06:05.924592 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 4 17:06:05.924598 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Sep 4 17:06:05.924604 kernel: Zone ranges:
Sep 4 17:06:05.924611 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 4 17:06:05.924617 kernel: DMA32 empty
Sep 4 17:06:05.924625 kernel: Normal empty
Sep 4 17:06:05.924631 kernel: Movable zone start for each node
Sep 4 17:06:05.924637 kernel: Early memory node ranges
Sep 4 17:06:05.924644 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 4 17:06:05.924650 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 4 17:06:05.924656 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 4 17:06:05.924662 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 4 17:06:05.924669 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 4 17:06:05.924675 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 4 17:06:05.924681 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 4 17:06:05.924688 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 4 17:06:05.924701 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 4 17:06:05.924710 kernel: psci: probing for conduit method from ACPI.
Sep 4 17:06:05.924716 kernel: psci: PSCIv1.1 detected in firmware.
Sep 4 17:06:05.924722 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 4 17:06:05.924732 kernel: psci: Trusted OS migration not required
Sep 4 17:06:05.924738 kernel: psci: SMC Calling Convention v1.1
Sep 4 17:06:05.924745 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 4 17:06:05.924753 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 4 17:06:05.924760 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 4 17:06:05.924767 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 4 17:06:05.924773 kernel: Detected PIPT I-cache on CPU0
Sep 4 17:06:05.924780 kernel: CPU features: detected: GIC system register CPU interface
Sep 4 17:06:05.924787 kernel: CPU features: detected: Hardware dirty bit management
Sep 4 17:06:05.924793 kernel: CPU features: detected: Spectre-v4
Sep 4 17:06:05.924800 kernel: CPU features: detected: Spectre-BHB
Sep 4 17:06:05.924806 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 4 17:06:05.924813 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 4 17:06:05.924821 kernel: CPU features: detected: ARM erratum 1418040
Sep 4 17:06:05.924828 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 4 17:06:05.924834 kernel: alternatives: applying boot alternatives
Sep 4 17:06:05.924842 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc
Sep 4 17:06:05.924849 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 17:06:05.924856 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 17:06:05.924863 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 17:06:05.924870 kernel: Fallback order for Node 0: 0
Sep 4 17:06:05.924876 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 4 17:06:05.924883 kernel: Policy zone: DMA
Sep 4 17:06:05.924890 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 17:06:05.924897 kernel: software IO TLB: area num 4.
Sep 4 17:06:05.924904 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 4 17:06:05.924912 kernel: Memory: 2386852K/2572288K available (10240K kernel code, 2182K rwdata, 8076K rodata, 39040K init, 897K bss, 185436K reserved, 0K cma-reserved)
Sep 4 17:06:05.924919 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 4 17:06:05.924926 kernel: trace event string verifier disabled
Sep 4 17:06:05.924932 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 17:06:05.924940 kernel: rcu: RCU event tracing is enabled.
Sep 4 17:06:05.924946 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 4 17:06:05.924953 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 17:06:05.924960 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 17:06:05.924967 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:06:05.924974 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 4 17:06:05.924983 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 4 17:06:05.924990 kernel: GICv3: 256 SPIs implemented
Sep 4 17:06:05.924996 kernel: GICv3: 0 Extended SPIs implemented
Sep 4 17:06:05.925003 kernel: Root IRQ handler: gic_handle_irq
Sep 4 17:06:05.925010 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 4 17:06:05.925016 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 4 17:06:05.925023 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 4 17:06:05.925030 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Sep 4 17:06:05.925037 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Sep 4 17:06:05.925044 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 4 17:06:05.925051 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 4 17:06:05.925059 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 17:06:05.925066 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 17:06:05.925073 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 4 17:06:05.925080 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 4 17:06:05.925087 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 4 17:06:05.925093 kernel: arm-pv: using stolen time PV
Sep 4 17:06:05.925101 kernel: Console: colour dummy device 80x25
Sep 4 17:06:05.925108 kernel: ACPI: Core revision 20230628
Sep 4 17:06:05.925115 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 4 17:06:05.925144 kernel: pid_max: default: 32768 minimum: 301
Sep 4 17:06:05.925153 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Sep 4 17:06:05.925160 kernel: SELinux: Initializing.
Sep 4 17:06:05.925167 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:06:05.925174 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:06:05.925182 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:06:05.925197 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:06:05.925204 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 17:06:05.925212 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 17:06:05.925219 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 4 17:06:05.925225 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 4 17:06:05.925234 kernel: Remapping and enabling EFI services.
Sep 4 17:06:05.925242 kernel: smp: Bringing up secondary CPUs ...
Sep 4 17:06:05.925249 kernel: Detected PIPT I-cache on CPU1
Sep 4 17:06:05.925256 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 4 17:06:05.925263 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 4 17:06:05.925271 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 17:06:05.925278 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 4 17:06:05.925285 kernel: Detected PIPT I-cache on CPU2
Sep 4 17:06:05.925292 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 4 17:06:05.925300 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 4 17:06:05.925308 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 17:06:05.925320 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 4 17:06:05.925328 kernel: Detected PIPT I-cache on CPU3
Sep 4 17:06:05.925336 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 4 17:06:05.925343 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 4 17:06:05.925351 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 17:06:05.925358 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 4 17:06:05.925366 kernel: smp: Brought up 1 node, 4 CPUs
Sep 4 17:06:05.925375 kernel: SMP: Total of 4 processors activated.
Sep 4 17:06:05.925382 kernel: CPU features: detected: 32-bit EL0 Support
Sep 4 17:06:05.925390 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 4 17:06:05.925397 kernel: CPU features: detected: Common not Private translations
Sep 4 17:06:05.925405 kernel: CPU features: detected: CRC32 instructions
Sep 4 17:06:05.925412 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 4 17:06:05.925420 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 4 17:06:05.925428 kernel: CPU features: detected: LSE atomic instructions
Sep 4 17:06:05.925437 kernel: CPU features: detected: Privileged Access Never
Sep 4 17:06:05.925445 kernel: CPU features: detected: RAS Extension Support
Sep 4 17:06:05.925452 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 4 17:06:05.925460 kernel: CPU: All CPU(s) started at EL1
Sep 4 17:06:05.925467 kernel: alternatives: applying system-wide alternatives
Sep 4 17:06:05.925475 kernel: devtmpfs: initialized
Sep 4 17:06:05.925482 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 17:06:05.925490 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 4 17:06:05.925498 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 17:06:05.925506 kernel: SMBIOS 3.0.0 present.
Sep 4 17:06:05.925514 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 4 17:06:05.925521 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 17:06:05.925529 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 4 17:06:05.925537 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 4 17:06:05.925544 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 4 17:06:05.925552 kernel: audit: initializing netlink subsys (disabled)
Sep 4 17:06:05.925559 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
Sep 4 17:06:05.925567 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 17:06:05.925576 kernel: cpuidle: using governor menu
Sep 4 17:06:05.925583 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 4 17:06:05.925591 kernel: ASID allocator initialised with 32768 entries
Sep 4 17:06:05.925598 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 17:06:05.925606 kernel: Serial: AMBA PL011 UART driver
Sep 4 17:06:05.925613 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 4 17:06:05.925620 kernel: Modules: 0 pages in range for non-PLT usage
Sep 4 17:06:05.925628 kernel: Modules: 509120 pages in range for PLT usage
Sep 4 17:06:05.925635 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 17:06:05.925644 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 17:06:05.925652 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 4 17:06:05.925659 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 4 17:06:05.925666 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 17:06:05.925674 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 17:06:05.925681 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 4 17:06:05.925689 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 4 17:06:05.925702 kernel: ACPI: Added _OSI(Module Device)
Sep 4 17:06:05.925710 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 17:06:05.925719 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 17:06:05.925727 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 17:06:05.925734 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 17:06:05.925741 kernel: ACPI: Interpreter enabled
Sep 4 17:06:05.925749 kernel: ACPI: Using GIC for interrupt routing
Sep 4 17:06:05.925756 kernel: ACPI: MCFG table detected, 1 entries
Sep 4 17:06:05.925763 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 4 17:06:05.925771 kernel: printk: console [ttyAMA0] enabled
Sep 4 17:06:05.925778 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 17:06:05.925923 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 17:06:05.926002 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 4 17:06:05.926070 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 4 17:06:05.926167 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 4 17:06:05.926234 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 4 17:06:05.926244 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 4 17:06:05.926252 kernel: PCI host bridge to bus 0000:00
Sep 4 17:06:05.926325 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 4 17:06:05.926384 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 4 17:06:05.926444 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 4 17:06:05.926502 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 17:06:05.926606 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 4 17:06:05.926690 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 4 17:06:05.926773 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 4 17:06:05.926846 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 4 17:06:05.926914 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 4 17:06:05.926978 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 4 17:06:05.927042 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 4 17:06:05.927106 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 4 17:06:05.927196 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 4 17:06:05.927256 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 4 17:06:05.927321 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 4 17:06:05.927331 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 4 17:06:05.927339 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 4 17:06:05.927346 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 4 17:06:05.927354 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 4 17:06:05.927361 kernel: iommu: Default domain type: Translated
Sep 4 17:06:05.927369 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 4 17:06:05.927376 kernel: efivars: Registered efivars operations
Sep 4 17:06:05.927386 kernel: vgaarb: loaded
Sep 4 17:06:05.927393 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 4 17:06:05.927400 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 17:06:05.927408 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 17:06:05.927416 kernel: pnp: PnP ACPI init
Sep 4 17:06:05.927493 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 4 17:06:05.927504 kernel: pnp: PnP ACPI: found 1 devices
Sep 4 17:06:05.927512 kernel: NET: Registered PF_INET protocol family
Sep 4 17:06:05.927521 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 17:06:05.927529 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 17:06:05.927536 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 17:06:05.927543 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 17:06:05.927551 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 17:06:05.927558 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 17:06:05.927565 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:06:05.927573 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:06:05.927580 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 17:06:05.927589 kernel: PCI: CLS 0 bytes, default 64
Sep 4 17:06:05.927596 kernel: kvm [1]: HYP mode not available
Sep 4 17:06:05.927604 kernel: Initialise system trusted keyrings
Sep 4 17:06:05.927611 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 17:06:05.927618 kernel: Key type asymmetric registered
Sep 4 17:06:05.927626 kernel: Asymmetric key parser 'x509' registered
Sep 4 17:06:05.927633 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 17:06:05.927640 kernel: io scheduler mq-deadline registered
Sep 4 17:06:05.927647 kernel: io scheduler kyber registered
Sep 4 17:06:05.927656 kernel: io scheduler bfq registered
Sep 4 17:06:05.927664 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 4 17:06:05.927671 kernel: ACPI: button: Power Button [PWRB]
Sep 4 17:06:05.927679 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 4 17:06:05.927760 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 4 17:06:05.927772 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 17:06:05.927779 kernel: thunder_xcv, ver 1.0
Sep 4 17:06:05.927786 kernel: thunder_bgx, ver 1.0
Sep 4 17:06:05.927794 kernel: nicpf, ver 1.0
Sep 4 17:06:05.927801 kernel: nicvf, ver 1.0
Sep 4 17:06:05.927882 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 4 17:06:05.927947 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-09-04T17:06:05 UTC (1725469565)
Sep 4 17:06:05.927957 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 4 17:06:05.927965 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 4 17:06:05.927972 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 4 17:06:05.927980 kernel: watchdog: Hard watchdog permanently disabled
Sep 4 17:06:05.927987 kernel: NET: Registered PF_INET6 protocol family
Sep 4 17:06:05.927997 kernel: Segment Routing with IPv6
Sep 4 17:06:05.928004 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 17:06:05.928011 kernel: NET: Registered PF_PACKET protocol family
Sep 4 17:06:05.928018 kernel: Key type dns_resolver registered
Sep 4 17:06:05.928026 kernel: registered taskstats version 1
Sep 4 17:06:05.928033 kernel: Loading compiled-in X.509 certificates
Sep 4 17:06:05.928040 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 1f5b9f288f9cae6ec9698678cdc0f614482066f7'
Sep 4 17:06:05.928048 kernel: Key type .fscrypt registered
Sep 4 17:06:05.928055 kernel: Key type fscrypt-provisioning registered
Sep 4 17:06:05.928062 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 17:06:05.928071 kernel: ima: Allocated hash algorithm: sha1
Sep 4 17:06:05.928078 kernel: ima: No architecture policies found
Sep 4 17:06:05.928086 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 4 17:06:05.928093 kernel: clk: Disabling unused clocks
Sep 4 17:06:05.928100 kernel: Freeing unused kernel memory: 39040K
Sep 4 17:06:05.928108 kernel: Run /init as init process
Sep 4 17:06:05.928115 kernel: with arguments:
Sep 4 17:06:05.928155 kernel: /init
Sep 4 17:06:05.928166 kernel: with environment:
Sep 4 17:06:05.928174 kernel: HOME=/
Sep 4 17:06:05.928181 kernel: TERM=linux
Sep 4 17:06:05.928188 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 17:06:05.928197 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:06:05.928207 systemd[1]: Detected virtualization kvm.
Sep 4 17:06:05.928215 systemd[1]: Detected architecture arm64.
Sep 4 17:06:05.928222 systemd[1]: Running in initrd.
Sep 4 17:06:05.928232 systemd[1]: No hostname configured, using default hostname.
Sep 4 17:06:05.928240 systemd[1]: Hostname set to .
Sep 4 17:06:05.928249 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:06:05.928257 systemd[1]: Queued start job for default target initrd.target.
Sep 4 17:06:05.928265 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:06:05.928273 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:06:05.928281 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:06:05.928289 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:06:05.928311 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:06:05.928319 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:06:05.928329 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:06:05.928338 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:06:05.928346 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:06:05.928353 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:06:05.928362 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:06:05.928373 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:06:05.928382 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:06:05.928390 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:06:05.928398 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:06:05.928428 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:06:05.928450 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:06:05.928458 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:06:05.928467 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:06:05.928476 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:06:05.928484 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:06:05.928492 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:06:05.928500 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:06:05.928508 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:06:05.928516 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:06:05.928524 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:06:05.928532 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:06:05.928540 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:06:05.928551 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:06:05.928558 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:06:05.928567 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:06:05.928575 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:06:05.928584 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:06:05.928618 systemd-journald[237]: Collecting audit messages is disabled.
Sep 4 17:06:05.928637 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:06:05.928646 systemd-journald[237]: Journal started
Sep 4 17:06:05.928667 systemd-journald[237]: Runtime Journal (/run/log/journal/2abc9e8c569b4d338e1e5ad1a9472ffe) is 5.9M, max 47.3M, 41.4M free.
Sep 4 17:06:05.920531 systemd-modules-load[238]: Inserted module 'overlay'
Sep 4 17:06:05.935942 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:06:05.937713 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:06:05.939184 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:06:05.941105 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:06:05.943710 systemd-modules-load[238]: Inserted module 'br_netfilter'
Sep 4 17:06:05.944554 kernel: Bridge firewalling registered
Sep 4 17:06:05.945490 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:06:05.949298 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:06:05.952153 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:06:05.955355 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:06:05.957853 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:06:05.959396 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:06:05.961841 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:06:05.965728 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:06:05.968466 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:06:05.970637 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:06:05.980105 dracut-cmdline[275]: dracut-dracut-053
Sep 4 17:06:05.982797 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc
Sep 4 17:06:05.998321 systemd-resolved[279]: Positive Trust Anchors:
Sep 4 17:06:05.998337 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:06:05.998367 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:06:06.003229 systemd-resolved[279]: Defaulting to hostname 'linux'.
Sep 4 17:06:06.004227 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:06:06.007861 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:06:06.053153 kernel: SCSI subsystem initialized
Sep 4 17:06:06.058141 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:06:06.066159 kernel: iscsi: registered transport (tcp)
Sep 4 17:06:06.079373 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:06:06.079410 kernel: QLogic iSCSI HBA Driver
Sep 4 17:06:06.130582 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:06:06.144306 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:06:06.164194 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:06:06.164258 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:06:06.165133 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:06:06.212149 kernel: raid6: neonx8 gen() 15763 MB/s
Sep 4 17:06:06.229137 kernel: raid6: neonx4 gen() 15665 MB/s
Sep 4 17:06:06.246134 kernel: raid6: neonx2 gen() 13230 MB/s
Sep 4 17:06:06.263142 kernel: raid6: neonx1 gen() 10488 MB/s
Sep 4 17:06:06.280134 kernel: raid6: int64x8 gen() 6949 MB/s
Sep 4 17:06:06.297136 kernel: raid6: int64x4 gen() 7343 MB/s
Sep 4 17:06:06.314147 kernel: raid6: int64x2 gen() 6125 MB/s
Sep 4 17:06:06.331162 kernel: raid6: int64x1 gen() 5047 MB/s
Sep 4 17:06:06.331221 kernel: raid6: using algorithm neonx8 gen() 15763 MB/s
Sep 4 17:06:06.348169 kernel: raid6: .... xor() 11928 MB/s, rmw enabled
Sep 4 17:06:06.348221 kernel: raid6: using neon recovery algorithm
Sep 4 17:06:06.355367 kernel: xor: measuring software checksum speed
Sep 4 17:06:06.355421 kernel: 8regs : 19840 MB/sec
Sep 4 17:06:06.355440 kernel: 32regs : 19640 MB/sec
Sep 4 17:06:06.356285 kernel: arm64_neon : 27224 MB/sec
Sep 4 17:06:06.356305 kernel: xor: using function: arm64_neon (27224 MB/sec)
Sep 4 17:06:06.407398 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:06:06.417930 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:06:06.436336 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:06:06.448681 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Sep 4 17:06:06.451907 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:06:06.454053 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:06:06.470289 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Sep 4 17:06:06.496915 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:06:06.512321 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:06:06.557931 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:06:06.568307 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:06:06.583206 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:06:06.584585 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:06:06.586343 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:06:06.588546 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:06:06.599353 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:06:06.611155 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 4 17:06:06.611343 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 4 17:06:06.613989 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:06:06.622152 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:06:06.625381 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:06:06.625403 kernel: GPT:9289727 != 19775487
Sep 4 17:06:06.625420 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:06:06.625430 kernel: GPT:9289727 != 19775487
Sep 4 17:06:06.625438 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:06:06.625447 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:06:06.622263 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:06:06.627495 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:06:06.628569 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:06:06.628747 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:06:06.632065 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:06:06.641148 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (527)
Sep 4 17:06:06.645445 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:06:06.648654 kernel: BTRFS: device fsid 2be47701-3393-455e-86fc-33755ceb9c20 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (525)
Sep 4 17:06:06.658091 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 17:06:06.662674 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 17:06:06.665214 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:06:06.675551 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 17:06:06.679496 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 17:06:06.680770 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 17:06:06.690333 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:06:06.692398 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:06:06.699308 disk-uuid[555]: Primary Header is updated.
Sep 4 17:06:06.699308 disk-uuid[555]: Secondary Entries is updated.
Sep 4 17:06:06.699308 disk-uuid[555]: Secondary Header is updated.
Sep 4 17:06:06.703144 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:06:06.720142 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:06:07.713149 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:06:07.717527 disk-uuid[556]: The operation has completed successfully.
Sep 4 17:06:07.744737 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:06:07.744842 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:06:07.759312 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:06:07.765174 sh[578]: Success
Sep 4 17:06:07.793328 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 4 17:06:07.838925 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:06:07.853728 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:06:07.855547 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:06:07.866451 kernel: BTRFS info (device dm-0): first mount of filesystem 2be47701-3393-455e-86fc-33755ceb9c20
Sep 4 17:06:07.866492 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:06:07.866503 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:06:07.867271 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:06:07.868418 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:06:07.871587 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:06:07.873007 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:06:07.881329 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:06:07.882789 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:06:07.890696 kernel: BTRFS info (device vda6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:06:07.890735 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:06:07.890754 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:06:07.894175 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:06:07.901845 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:06:07.903612 kernel: BTRFS info (device vda6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:06:07.909599 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:06:07.917302 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:06:07.995806 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:06:08.006330 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:06:08.040739 systemd-networkd[767]: lo: Link UP
Sep 4 17:06:08.040752 systemd-networkd[767]: lo: Gained carrier
Sep 4 17:06:08.041496 systemd-networkd[767]: Enumeration completed
Sep 4 17:06:08.041752 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:06:08.042297 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:06:08.042300 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:06:08.043454 systemd-networkd[767]: eth0: Link UP
Sep 4 17:06:08.043457 systemd-networkd[767]: eth0: Gained carrier
Sep 4 17:06:08.043464 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:06:08.044664 systemd[1]: Reached target network.target - Network.
Sep 4 17:06:08.054874 ignition[669]: Ignition 2.18.0
Sep 4 17:06:08.054886 ignition[669]: Stage: fetch-offline
Sep 4 17:06:08.054924 ignition[669]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:06:08.054933 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:06:08.055024 ignition[669]: parsed url from cmdline: ""
Sep 4 17:06:08.055027 ignition[669]: no config URL provided
Sep 4 17:06:08.055032 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:06:08.055040 ignition[669]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:06:08.060171 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 17:06:08.055067 ignition[669]: op(1): [started] loading QEMU firmware config module
Sep 4 17:06:08.055072 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 4 17:06:08.074035 ignition[669]: op(1): [finished] loading QEMU firmware config module
Sep 4 17:06:08.115819 ignition[669]: parsing config with SHA512: e8602e4800fc64fbe8c77c50a7c031b9e5c69a46b1fa515e895a8c6f99a148594fc7bd7f9aa337cc68a281000c53ac4eeee22c351ae4e9c77803d3f2700e3bc8
Sep 4 17:06:08.120275 unknown[669]: fetched base config from "system"
Sep 4 17:06:08.120285 unknown[669]: fetched user config from "qemu"
Sep 4 17:06:08.120731 ignition[669]: fetch-offline: fetch-offline passed
Sep 4 17:06:08.122514 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:06:08.120790 ignition[669]: Ignition finished successfully
Sep 4 17:06:08.124106 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 4 17:06:08.128343 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:06:08.140969 ignition[774]: Ignition 2.18.0
Sep 4 17:06:08.140980 ignition[774]: Stage: kargs
Sep 4 17:06:08.141154 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:06:08.143911 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:06:08.141164 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:06:08.142087 ignition[774]: kargs: kargs passed
Sep 4 17:06:08.142143 ignition[774]: Ignition finished successfully
Sep 4 17:06:08.150284 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:06:08.161968 ignition[783]: Ignition 2.18.0
Sep 4 17:06:08.161979 ignition[783]: Stage: disks
Sep 4 17:06:08.162171 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:06:08.164808 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:06:08.162181 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:06:08.166136 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:06:08.163147 ignition[783]: disks: disks passed
Sep 4 17:06:08.163197 ignition[783]: Ignition finished successfully
Sep 4 17:06:08.169311 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:06:08.170823 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:06:08.172097 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:06:08.173961 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:06:08.185272 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:06:08.194404 systemd-resolved[279]: Detected conflict on linux IN A 10.0.0.15
Sep 4 17:06:08.194415 systemd-resolved[279]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
Sep 4 17:06:08.197446 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 17:06:08.199694 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:06:08.202048 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:06:08.250145 kernel: EXT4-fs (vda9): mounted filesystem f2f4f3ba-c5a3-49c0-ace4-444935e9934b r/w with ordered data mode. Quota mode: none.
Sep 4 17:06:08.250395 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:06:08.251690 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:06:08.263212 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:06:08.268055 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:06:08.269077 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:06:08.269142 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:06:08.269168 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:06:08.275785 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:06:08.279687 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:06:08.284082 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (803)
Sep 4 17:06:08.284106 kernel: BTRFS info (device vda6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:06:08.284117 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:06:08.284137 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:06:08.289144 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:06:08.294707 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:06:08.337071 initrd-setup-root[829]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:06:08.340222 initrd-setup-root[836]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:06:08.343444 initrd-setup-root[843]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:06:08.347277 initrd-setup-root[850]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:06:08.422004 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:06:08.433309 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:06:08.434717 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:06:08.441142 kernel: BTRFS info (device vda6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:06:08.457436 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:06:08.459625 ignition[920]: INFO : Ignition 2.18.0
Sep 4 17:06:08.459625 ignition[920]: INFO : Stage: mount
Sep 4 17:06:08.461083 ignition[920]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:06:08.461083 ignition[920]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:06:08.461083 ignition[920]: INFO : mount: mount passed
Sep 4 17:06:08.461083 ignition[920]: INFO : Ignition finished successfully
Sep 4 17:06:08.462483 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:06:08.471220 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:06:08.865382 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:06:08.875339 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:06:08.883932 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (933)
Sep 4 17:06:08.883969 kernel: BTRFS info (device vda6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:06:08.883980 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:06:08.885556 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:06:08.888140 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:06:08.888982 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:06:08.905452 ignition[950]: INFO : Ignition 2.18.0
Sep 4 17:06:08.905452 ignition[950]: INFO : Stage: files
Sep 4 17:06:08.907020 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:06:08.907020 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:06:08.907020 ignition[950]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:06:08.910499 ignition[950]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:06:08.910499 ignition[950]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:06:08.910499 ignition[950]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:06:08.910499 ignition[950]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:06:08.910499 ignition[950]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:06:08.909666 unknown[950]: wrote ssh authorized keys file for user: core
Sep 4 17:06:08.917593 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 4 17:06:08.917593 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 4 17:06:08.917593 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:06:08.917593 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 4 17:06:08.934019 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 4 17:06:08.971362 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:06:08.973502 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:06:08.973502 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:06:08.973502 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:06:08.973502 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:06:08.973502 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:06:08.973502 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:06:08.973502 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:06:08.973502 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:06:08.973502 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:06:08.973502 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:06:08.973502 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Sep 4 17:06:08.973502 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Sep 4 17:06:08.973502 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Sep 4 17:06:08.973502 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1
Sep 4 17:06:09.295714 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 4 17:06:09.573096 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Sep 4 17:06:09.573096 ignition[950]: INFO : files: op(c): [started] processing unit "containerd.service"
Sep 4 17:06:09.581263 ignition[950]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 4 17:06:09.581263 ignition[950]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 4 17:06:09.581263 ignition[950]: INFO : files: op(c): [finished] processing unit "containerd.service"
Sep 4 17:06:09.581263 ignition[950]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Sep 4 17:06:09.581263 ignition[950]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:06:09.581263 ignition[950]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:06:09.581263 ignition[950]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Sep 4 17:06:09.581263 ignition[950]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Sep 4 17:06:09.581263 ignition[950]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 17:06:09.581263 ignition[950]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 17:06:09.581263 ignition[950]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Sep 4 17:06:09.581263 ignition[950]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Sep 4 17:06:09.612370 ignition[950]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 17:06:09.614838 ignition[950]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 17:06:09.614838 ignition[950]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 4 17:06:09.614838 ignition[950]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:06:09.614838 ignition[950]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:06:09.614838 ignition[950]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:06:09.614838 ignition[950]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:06:09.614838 ignition[950]: INFO : files: files passed
Sep 4 17:06:09.614838 ignition[950]: INFO : Ignition finished successfully
Sep 4 17:06:09.617168 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:06:09.628601 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:06:09.631728 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:06:09.634866 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:06:09.635310 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:06:09.642411 initrd-setup-root-after-ignition[978]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 4 17:06:09.647276 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:06:09.647276 initrd-setup-root-after-ignition[980]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:06:09.650351 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:06:09.652440 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:06:09.653897 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:06:09.666300 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:06:09.695750 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:06:09.695896 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:06:09.698090 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:06:09.699801 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:06:09.701324 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:06:09.702191 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:06:09.727659 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:06:09.737339 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:06:09.747468 systemd[1]: Stopped target network.target - Network.
Sep 4 17:06:09.748296 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:06:09.751158 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:06:09.753685 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:06:09.754784 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:06:09.754914 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:06:09.757379 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:06:09.759435 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:06:09.761021 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 17:06:09.762954 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:06:09.764977 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 17:06:09.766784 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 17:06:09.768632 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:06:09.770468 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 17:06:09.772556 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 17:06:09.774013 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 17:06:09.775752 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 17:06:09.775887 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:06:09.778013 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:06:09.779264 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:06:09.781073 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 17:06:09.782347 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:06:09.783412 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 17:06:09.783541 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:06:09.785912 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 17:06:09.786036 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:06:09.787418 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 17:06:09.789082 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 17:06:09.794162 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:06:09.795230 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 17:06:09.796898 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 17:06:09.798314 systemd-networkd[767]: eth0: Gained IPv6LL
Sep 4 17:06:09.799514 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 17:06:09.799609 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:06:09.800974 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 17:06:09.801081 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:06:09.802968 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 17:06:09.803083 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:06:09.804270 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 17:06:09.804374 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 17:06:09.812358 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 17:06:09.813529 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 17:06:09.813679 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:06:09.817202 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 17:06:09.819240 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 17:06:09.821466 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 17:06:09.825386 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 17:06:09.825613 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:06:09.828230 systemd-networkd[767]: eth0: DHCPv6 lease lost
Sep 4 17:06:09.834505 ignition[1004]: INFO : Ignition 2.18.0
Sep 4 17:06:09.834505 ignition[1004]: INFO : Stage: umount
Sep 4 17:06:09.834505 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:06:09.834505 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:06:09.834505 ignition[1004]: INFO : umount: umount passed
Sep 4 17:06:09.834505 ignition[1004]: INFO : Ignition finished successfully
Sep 4 17:06:09.829353 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 17:06:09.829471 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:06:09.835516 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:06:09.836417 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 17:06:09.836517 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 17:06:09.840652 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 17:06:09.840790 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 17:06:09.842835 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 17:06:09.842921 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 17:06:09.848066 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 17:06:09.848194 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 17:06:09.849559 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 17:06:09.849593 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:06:09.851361 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 17:06:09.851412 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 17:06:09.853234 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 17:06:09.853279 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 17:06:09.854970 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 17:06:09.855005 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 17:06:09.856749 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 17:06:09.856793 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 17:06:09.867324 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 17:06:09.868169 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 17:06:09.868233 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:06:09.870189 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:06:09.870240 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:06:09.871991 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 17:06:09.872037 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:06:09.873966 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:06:09.874027 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:06:09.876021 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:06:09.886482 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:06:09.886598 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:06:09.888763 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:06:09.888907 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:06:09.891147 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:06:09.891208 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:06:09.892513 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:06:09.892546 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:06:09.894489 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:06:09.894543 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:06:09.896991 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:06:09.897044 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:06:09.900030 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:06:09.900084 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:06:09.912325 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:06:09.913281 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:06:09.913349 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 4 17:06:09.915607 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:06:09.915659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:06:09.918052 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:06:09.918156 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:06:09.920366 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:06:09.920447 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:06:09.922405 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:06:09.922501 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:06:09.924869 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:06:09.928068 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:06:09.940206 systemd[1]: Switching root. Sep 4 17:06:09.972085 systemd-journald[237]: Journal stopped Sep 4 17:06:10.758285 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Sep 4 17:06:10.758344 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 17:06:10.758357 kernel: SELinux: policy capability open_perms=1 Sep 4 17:06:10.758367 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 17:06:10.758377 kernel: SELinux: policy capability always_check_network=0 Sep 4 17:06:10.758388 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 17:06:10.758398 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 17:06:10.758407 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 17:06:10.758418 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 17:06:10.758432 kernel: audit: type=1403 audit(1725469570.167:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 17:06:10.758447 systemd[1]: Successfully loaded SELinux policy in 34.282ms. 
Sep 4 17:06:10.758464 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.123ms. Sep 4 17:06:10.758477 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:06:10.758487 systemd[1]: Detected virtualization kvm. Sep 4 17:06:10.758498 systemd[1]: Detected architecture arm64. Sep 4 17:06:10.758508 systemd[1]: Detected first boot. Sep 4 17:06:10.758518 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:06:10.758530 zram_generator::config[1069]: No configuration found. Sep 4 17:06:10.758541 systemd[1]: Populated /etc with preset unit settings. Sep 4 17:06:10.758552 systemd[1]: Queued start job for default target multi-user.target. Sep 4 17:06:10.758562 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 4 17:06:10.758573 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 17:06:10.758583 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 17:06:10.758594 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 17:06:10.758604 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 17:06:10.758614 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 17:06:10.758627 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 17:06:10.758637 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 17:06:10.758650 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 17:06:10.758661 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 4 17:06:10.758682 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:06:10.758695 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 17:06:10.758706 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 17:06:10.758718 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 17:06:10.758731 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:06:10.758742 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 4 17:06:10.758752 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:06:10.758778 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 17:06:10.758788 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:06:10.758799 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:06:10.758810 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:06:10.758821 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:06:10.758833 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 17:06:10.758844 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 17:06:10.758856 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:06:10.758890 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:06:10.758901 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:06:10.758911 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:06:10.758922 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 4 17:06:10.758933 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 17:06:10.758944 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 17:06:10.758954 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 17:06:10.758966 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 17:06:10.758977 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 17:06:10.758988 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 17:06:10.758999 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 17:06:10.759009 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 17:06:10.759020 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:06:10.759031 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:06:10.759042 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 17:06:10.759058 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:06:10.759068 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:06:10.759079 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:06:10.759089 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 17:06:10.759099 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:06:10.759110 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 17:06:10.759130 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Sep 4 17:06:10.759146 kernel: fuse: init (API version 7.39) Sep 4 17:06:10.759156 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Sep 4 17:06:10.759169 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:06:10.759179 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:06:10.759189 kernel: ACPI: bus type drm_connector registered Sep 4 17:06:10.759199 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 17:06:10.759209 kernel: loop: module loaded Sep 4 17:06:10.759220 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 17:06:10.759233 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:06:10.759244 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 17:06:10.759254 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 17:06:10.759285 systemd-journald[1145]: Collecting audit messages is disabled. Sep 4 17:06:10.759307 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 17:06:10.759318 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 17:06:10.759342 systemd-journald[1145]: Journal started Sep 4 17:06:10.759365 systemd-journald[1145]: Runtime Journal (/run/log/journal/2abc9e8c569b4d338e1e5ad1a9472ffe) is 5.9M, max 47.3M, 41.4M free. Sep 4 17:06:10.763844 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:06:10.764011 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 17:06:10.765337 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 17:06:10.766486 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 17:06:10.768090 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Sep 4 17:06:10.769377 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 17:06:10.769550 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 17:06:10.770917 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:06:10.771090 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:06:10.772572 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:06:10.772772 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:06:10.774098 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:06:10.774286 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:06:10.775773 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 17:06:10.775933 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 17:06:10.777281 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:06:10.777494 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:06:10.779052 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:06:10.780714 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 17:06:10.782551 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 17:06:10.794421 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 17:06:10.804230 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 17:06:10.806548 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 17:06:10.807459 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 17:06:10.812294 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Sep 4 17:06:10.814986 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 17:06:10.816575 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:06:10.818376 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 17:06:10.819394 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:06:10.821397 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:06:10.824594 systemd-journald[1145]: Time spent on flushing to /var/log/journal/2abc9e8c569b4d338e1e5ad1a9472ffe is 15.896ms for 844 entries. Sep 4 17:06:10.824594 systemd-journald[1145]: System Journal (/var/log/journal/2abc9e8c569b4d338e1e5ad1a9472ffe) is 8.0M, max 195.6M, 187.6M free. Sep 4 17:06:10.853512 systemd-journald[1145]: Received client request to flush runtime journal. Sep 4 17:06:10.825843 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:06:10.828736 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:06:10.830114 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 17:06:10.831507 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 17:06:10.846419 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 17:06:10.847660 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 17:06:10.849546 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 17:06:10.858242 udevadm[1208]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Sep 4 17:06:10.858580 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 17:06:10.860403 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:06:10.862592 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Sep 4 17:06:10.862610 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Sep 4 17:06:10.867234 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:06:10.875367 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 17:06:10.903206 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 17:06:10.913444 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:06:10.926893 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Sep 4 17:06:10.926915 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Sep 4 17:06:10.931471 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:06:11.292894 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 17:06:11.306333 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:06:11.326278 systemd-udevd[1227]: Using default interface naming scheme 'v255'. Sep 4 17:06:11.345025 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:06:11.363948 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:06:11.379314 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 17:06:11.381145 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. 
Sep 4 17:06:11.384152 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1235) Sep 4 17:06:11.400207 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1246) Sep 4 17:06:11.417286 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:06:11.444192 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 17:06:11.505206 systemd-networkd[1238]: lo: Link UP Sep 4 17:06:11.505219 systemd-networkd[1238]: lo: Gained carrier Sep 4 17:06:11.505898 systemd-networkd[1238]: Enumeration completed Sep 4 17:06:11.508115 systemd-networkd[1238]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:06:11.508118 systemd-networkd[1238]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:06:11.508749 systemd-networkd[1238]: eth0: Link UP Sep 4 17:06:11.508753 systemd-networkd[1238]: eth0: Gained carrier Sep 4 17:06:11.508766 systemd-networkd[1238]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:06:11.509339 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:06:11.510292 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:06:11.512704 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:06:11.517433 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:06:11.522286 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:06:11.536197 systemd-networkd[1238]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 17:06:11.543072 lvm[1269]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Sep 4 17:06:11.562557 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:06:11.585735 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:06:11.587347 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:06:11.595386 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:06:11.599241 lvm[1277]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:06:11.634814 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:06:11.636285 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:06:11.637592 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 17:06:11.637626 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:06:11.638462 systemd[1]: Reached target machines.target - Containers. Sep 4 17:06:11.640261 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 4 17:06:11.653287 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 17:06:11.655712 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 17:06:11.656835 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:06:11.657843 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:06:11.660740 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 4 17:06:11.664432 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Sep 4 17:06:11.668313 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 17:06:11.678417 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:06:11.683047 kernel: loop0: detected capacity change from 0 to 59688 Sep 4 17:06:11.683137 kernel: block loop0: the capability attribute has been deprecated. Sep 4 17:06:11.692802 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 17:06:11.693613 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 4 17:06:11.706161 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 17:06:11.746162 kernel: loop1: detected capacity change from 0 to 193208 Sep 4 17:06:11.796151 kernel: loop2: detected capacity change from 0 to 113672 Sep 4 17:06:11.834157 kernel: loop3: detected capacity change from 0 to 59688 Sep 4 17:06:11.845287 kernel: loop4: detected capacity change from 0 to 193208 Sep 4 17:06:11.855154 kernel: loop5: detected capacity change from 0 to 113672 Sep 4 17:06:11.863446 (sd-merge)[1303]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 4 17:06:11.863962 (sd-merge)[1303]: Merged extensions into '/usr'. Sep 4 17:06:11.867926 systemd[1]: Reloading requested from client PID 1288 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 17:06:11.867944 systemd[1]: Reloading... Sep 4 17:06:11.897829 zram_generator::config[1329]: No configuration found. Sep 4 17:06:11.923987 ldconfig[1284]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 17:06:12.003339 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:06:12.047677 systemd[1]: Reloading finished in 179 ms. 
Sep 4 17:06:12.061883 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 17:06:12.063397 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 17:06:12.078310 systemd[1]: Starting ensure-sysext.service... Sep 4 17:06:12.080341 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Sep 4 17:06:12.084438 systemd[1]: Reloading requested from client PID 1370 ('systemctl') (unit ensure-sysext.service)... Sep 4 17:06:12.084452 systemd[1]: Reloading... Sep 4 17:06:12.098850 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 17:06:12.099114 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 17:06:12.099777 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 17:06:12.099992 systemd-tmpfiles[1377]: ACLs are not supported, ignoring. Sep 4 17:06:12.100043 systemd-tmpfiles[1377]: ACLs are not supported, ignoring. Sep 4 17:06:12.102043 systemd-tmpfiles[1377]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:06:12.102055 systemd-tmpfiles[1377]: Skipping /boot Sep 4 17:06:12.110413 systemd-tmpfiles[1377]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:06:12.110428 systemd-tmpfiles[1377]: Skipping /boot Sep 4 17:06:12.118205 zram_generator::config[1400]: No configuration found. Sep 4 17:06:12.211976 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:06:12.256413 systemd[1]: Reloading finished in 171 ms. Sep 4 17:06:12.272976 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. 
Sep 4 17:06:12.286085 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:06:12.288599 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 17:06:12.290827 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 17:06:12.296321 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:06:12.301411 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 17:06:12.306966 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:06:12.309112 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:06:12.313349 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:06:12.317241 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:06:12.318879 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:06:12.319588 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:06:12.319762 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:06:12.321692 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:06:12.321839 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:06:12.326635 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:06:12.329266 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:06:12.330943 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:06:12.332684 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 4 17:06:12.335317 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:06:12.343152 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:06:12.349405 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:06:12.352312 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:06:12.356392 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:06:12.361467 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:06:12.362754 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:06:12.364553 augenrules[1486]: No rules Sep 4 17:06:12.365151 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:06:12.369634 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:06:12.371845 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:06:12.373792 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:06:12.373952 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:06:12.375533 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:06:12.375685 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:06:12.377110 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:06:12.377267 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:06:12.378884 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:06:12.379033 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:06:12.380613 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Sep 4 17:06:12.385625 systemd[1]: Finished ensure-sysext.service. Sep 4 17:06:12.390179 systemd-resolved[1449]: Positive Trust Anchors: Sep 4 17:06:12.390998 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:06:12.391090 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:06:12.392163 systemd-resolved[1449]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:06:12.392198 systemd-resolved[1449]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 17:06:12.400328 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 17:06:12.401566 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 17:06:12.402056 systemd-resolved[1449]: Defaulting to hostname 'linux'. Sep 4 17:06:12.407480 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:06:12.408925 systemd[1]: Reached target network.target - Network. Sep 4 17:06:12.409884 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:06:12.446044 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Sep 4 17:06:12.446987 systemd-timesyncd[1507]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 17:06:12.447037 systemd-timesyncd[1507]: Initial clock synchronization to Wed 2024-09-04 17:06:12.508787 UTC. Sep 4 17:06:12.447676 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:06:12.448686 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 17:06:12.449778 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:06:12.450934 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:06:12.452192 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:06:12.452235 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:06:12.453038 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:06:12.454224 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:06:12.455102 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:06:12.455998 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:06:12.457840 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:06:12.460487 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:06:12.462528 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:06:12.472186 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:06:12.473253 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:06:12.473974 systemd[1]: Reached target basic.target - Basic System. 
Sep 4 17:06:12.475005 systemd[1]: System is tainted: cgroupsv1
Sep 4 17:06:12.475057 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:06:12.475079 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:06:12.476265 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 17:06:12.478319 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 17:06:12.480254 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 17:06:12.485295 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 17:06:12.486353 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 17:06:12.487519 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 17:06:12.491562 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 17:06:12.497049 jq[1513]: false
Sep 4 17:06:12.497859 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 17:06:12.501332 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 17:06:12.508330 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 17:06:12.516446 extend-filesystems[1515]: Found loop3
Sep 4 17:06:12.516446 extend-filesystems[1515]: Found loop4
Sep 4 17:06:12.519716 extend-filesystems[1515]: Found loop5
Sep 4 17:06:12.519716 extend-filesystems[1515]: Found vda
Sep 4 17:06:12.519716 extend-filesystems[1515]: Found vda1
Sep 4 17:06:12.519716 extend-filesystems[1515]: Found vda2
Sep 4 17:06:12.519716 extend-filesystems[1515]: Found vda3
Sep 4 17:06:12.519716 extend-filesystems[1515]: Found usr
Sep 4 17:06:12.519716 extend-filesystems[1515]: Found vda4
Sep 4 17:06:12.519716 extend-filesystems[1515]: Found vda6
Sep 4 17:06:12.519716 extend-filesystems[1515]: Found vda7
Sep 4 17:06:12.519716 extend-filesystems[1515]: Found vda9
Sep 4 17:06:12.519716 extend-filesystems[1515]: Checking size of /dev/vda9
Sep 4 17:06:12.517298 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 17:06:12.521498 dbus-daemon[1512]: [system] SELinux support is enabled
Sep 4 17:06:12.525497 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 17:06:12.530565 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 17:06:12.532240 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 17:06:12.538075 extend-filesystems[1515]: Resized partition /dev/vda9
Sep 4 17:06:12.538528 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 17:06:12.538800 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 17:06:12.539069 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 17:06:12.539344 jq[1537]: true
Sep 4 17:06:12.539351 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 17:06:12.544582 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 17:06:12.544844 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 17:06:12.557575 extend-filesystems[1546]: resize2fs 1.47.0 (5-Feb-2023)
Sep 4 17:06:12.560455 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 17:06:12.560486 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 17:06:12.561169 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 4 17:06:12.561192 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1239)
Sep 4 17:06:12.563916 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 17:06:12.563951 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 17:06:12.570867 jq[1544]: true
Sep 4 17:06:12.572511 (ntainerd)[1545]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 17:06:12.582641 tar[1542]: linux-arm64/helm
Sep 4 17:06:12.597181 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 4 17:06:12.618795 extend-filesystems[1546]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 4 17:06:12.618795 extend-filesystems[1546]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 4 17:06:12.618795 extend-filesystems[1546]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 4 17:06:12.627095 extend-filesystems[1515]: Resized filesystem in /dev/vda9
Sep 4 17:06:12.619755 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 17:06:12.619993 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 17:06:12.632181 update_engine[1534]: I0904 17:06:12.630867  1534 main.cc:92] Flatcar Update Engine starting
Sep 4 17:06:12.634507 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 17:06:12.635360 update_engine[1534]: I0904 17:06:12.635115  1534 update_check_scheduler.cc:74] Next update check in 4m34s
Sep 4 17:06:12.636286 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 17:06:12.639205 systemd-logind[1525]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 4 17:06:12.639628 systemd-logind[1525]: New seat seat0.
Sep 4 17:06:12.643616 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 17:06:12.646243 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 17:06:12.656575 bash[1574]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 17:06:12.659724 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 17:06:12.661804 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 4 17:06:12.707835 locksmithd[1578]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 17:06:12.742231 systemd-networkd[1238]: eth0: Gained IPv6LL
Sep 4 17:06:12.750420 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 17:06:12.752578 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 17:06:12.762194 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 4 17:06:12.777356 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:06:12.785952 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 17:06:12.808787 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 17:06:12.813697 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 4 17:06:12.813922 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 4 17:06:12.815929 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 17:06:12.822594 containerd[1545]: time="2024-09-04T17:06:12.822507040Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Sep 4 17:06:12.860838 containerd[1545]: time="2024-09-04T17:06:12.860587880Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 4 17:06:12.860838 containerd[1545]: time="2024-09-04T17:06:12.860644720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:06:12.862331 containerd[1545]: time="2024-09-04T17:06:12.862274080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:06:12.862430 containerd[1545]: time="2024-09-04T17:06:12.862411920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:06:12.862810 containerd[1545]: time="2024-09-04T17:06:12.862780880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:06:12.863050 containerd[1545]: time="2024-09-04T17:06:12.862897360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 4 17:06:12.864145 containerd[1545]: time="2024-09-04T17:06:12.863468160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 4 17:06:12.864145 containerd[1545]: time="2024-09-04T17:06:12.863533760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:06:12.864145 containerd[1545]: time="2024-09-04T17:06:12.863546600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 4 17:06:12.864145 containerd[1545]: time="2024-09-04T17:06:12.863600040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:06:12.864145 containerd[1545]: time="2024-09-04T17:06:12.863807200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 4 17:06:12.864145 containerd[1545]: time="2024-09-04T17:06:12.863824960Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 4 17:06:12.864145 containerd[1545]: time="2024-09-04T17:06:12.863835800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:06:12.864145 containerd[1545]: time="2024-09-04T17:06:12.864020080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:06:12.864145 containerd[1545]: time="2024-09-04T17:06:12.864036640Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 4 17:06:12.864145 containerd[1545]: time="2024-09-04T17:06:12.864098400Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 4 17:06:12.864145 containerd[1545]: time="2024-09-04T17:06:12.864109280Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 17:06:12.867638 containerd[1545]: time="2024-09-04T17:06:12.867602240Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 4 17:06:12.867638 containerd[1545]: time="2024-09-04T17:06:12.867636240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 4 17:06:12.867741 containerd[1545]: time="2024-09-04T17:06:12.867659040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 4 17:06:12.867741 containerd[1545]: time="2024-09-04T17:06:12.867701800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 4 17:06:12.867741 containerd[1545]: time="2024-09-04T17:06:12.867720080Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 4 17:06:12.867741 containerd[1545]: time="2024-09-04T17:06:12.867730400Z" level=info msg="NRI interface is disabled by configuration."
Sep 4 17:06:12.867741 containerd[1545]: time="2024-09-04T17:06:12.867741760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 4 17:06:12.867902 containerd[1545]: time="2024-09-04T17:06:12.867868720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 4 17:06:12.867902 containerd[1545]: time="2024-09-04T17:06:12.867892160Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 4 17:06:12.867952 containerd[1545]: time="2024-09-04T17:06:12.867906240Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 4 17:06:12.867952 containerd[1545]: time="2024-09-04T17:06:12.867920040Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 4 17:06:12.867952 containerd[1545]: time="2024-09-04T17:06:12.867934360Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 4 17:06:12.867952 containerd[1545]: time="2024-09-04T17:06:12.867951040Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 4 17:06:12.868025 containerd[1545]: time="2024-09-04T17:06:12.867966400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 4 17:06:12.868025 containerd[1545]: time="2024-09-04T17:06:12.867979880Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 4 17:06:12.868025 containerd[1545]: time="2024-09-04T17:06:12.867998640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 4 17:06:12.868073 containerd[1545]: time="2024-09-04T17:06:12.868044360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 4 17:06:12.868073 containerd[1545]: time="2024-09-04T17:06:12.868062560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 4 17:06:12.868107 containerd[1545]: time="2024-09-04T17:06:12.868074720Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 4 17:06:12.868239 containerd[1545]: time="2024-09-04T17:06:12.868210000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 4 17:06:12.870333 containerd[1545]: time="2024-09-04T17:06:12.868630840Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 4 17:06:12.870333 containerd[1545]: time="2024-09-04T17:06:12.868683200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 4 17:06:12.870333 containerd[1545]: time="2024-09-04T17:06:12.868701440Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 4 17:06:12.870333 containerd[1545]: time="2024-09-04T17:06:12.868727640Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 4 17:06:12.870333 containerd[1545]: time="2024-09-04T17:06:12.868854800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 4 17:06:12.870333 containerd[1545]: time="2024-09-04T17:06:12.868868960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 4 17:06:12.870333 containerd[1545]: time="2024-09-04T17:06:12.868880960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 4 17:06:12.870333 containerd[1545]: time="2024-09-04T17:06:12.868892720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 4 17:06:12.870333 containerd[1545]: time="2024-09-04T17:06:12.868905920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 4 17:06:12.870333 containerd[1545]: time="2024-09-04T17:06:12.868919760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 4 17:06:12.870333 containerd[1545]: time="2024-09-04T17:06:12.868932600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 4 17:06:12.870333 containerd[1545]: time="2024-09-04T17:06:12.868944480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 4 17:06:12.870333 containerd[1545]: time="2024-09-04T17:06:12.868958160Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 4 17:06:12.870333 containerd[1545]: time="2024-09-04T17:06:12.869108760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 4 17:06:12.870609 containerd[1545]: time="2024-09-04T17:06:12.869150520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 4 17:06:12.870609 containerd[1545]: time="2024-09-04T17:06:12.869165240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 4 17:06:12.870609 containerd[1545]: time="2024-09-04T17:06:12.869179800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 4 17:06:12.870609 containerd[1545]: time="2024-09-04T17:06:12.869196880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 4 17:06:12.870609 containerd[1545]: time="2024-09-04T17:06:12.869210720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 4 17:06:12.870609 containerd[1545]: time="2024-09-04T17:06:12.869223240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 4 17:06:12.870609 containerd[1545]: time="2024-09-04T17:06:12.869235080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 4 17:06:12.870745 containerd[1545]: time="2024-09-04T17:06:12.869546000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 4 17:06:12.870745 containerd[1545]: time="2024-09-04T17:06:12.869604240Z" level=info msg="Connect containerd service"
Sep 4 17:06:12.870745 containerd[1545]: time="2024-09-04T17:06:12.869635440Z" level=info msg="using legacy CRI server"
Sep 4 17:06:12.870745 containerd[1545]: time="2024-09-04T17:06:12.869643720Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 17:06:12.870745 containerd[1545]: time="2024-09-04T17:06:12.869804840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 4 17:06:12.871202 containerd[1545]: time="2024-09-04T17:06:12.871173800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 17:06:12.871305 containerd[1545]: time="2024-09-04T17:06:12.871289520Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 4 17:06:12.871387 containerd[1545]: time="2024-09-04T17:06:12.871369920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 4 17:06:12.871438 containerd[1545]: time="2024-09-04T17:06:12.871426240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 4 17:06:12.871500 containerd[1545]: time="2024-09-04T17:06:12.871485440Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 4 17:06:12.871830 containerd[1545]: time="2024-09-04T17:06:12.871492400Z" level=info msg="Start subscribing containerd event"
Sep 4 17:06:12.871830 containerd[1545]: time="2024-09-04T17:06:12.871815640Z" level=info msg="Start recovering state"
Sep 4 17:06:12.871966 containerd[1545]: time="2024-09-04T17:06:12.871904960Z" level=info msg="Start event monitor"
Sep 4 17:06:12.871966 containerd[1545]: time="2024-09-04T17:06:12.871937600Z" level=info msg="Start snapshots syncer"
Sep 4 17:06:12.871966 containerd[1545]: time="2024-09-04T17:06:12.871949120Z" level=info msg="Start cni network conf syncer for default"
Sep 4 17:06:12.871966 containerd[1545]: time="2024-09-04T17:06:12.871957000Z" level=info msg="Start streaming server"
Sep 4 17:06:12.872277 containerd[1545]: time="2024-09-04T17:06:12.872255400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 17:06:12.872387 containerd[1545]: time="2024-09-04T17:06:12.872371440Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 17:06:12.872585 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 17:06:12.874375 containerd[1545]: time="2024-09-04T17:06:12.874351080Z" level=info msg="containerd successfully booted in 0.054697s"
Sep 4 17:06:12.987200 tar[1542]: linux-arm64/LICENSE
Sep 4 17:06:12.987303 tar[1542]: linux-arm64/README.md
Sep 4 17:06:13.003815 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 17:06:13.184357 sshd_keygen[1532]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 17:06:13.203373 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 17:06:13.217452 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 17:06:13.222931 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 17:06:13.223220 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 17:06:13.226256 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 17:06:13.239593 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 17:06:13.242733 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 17:06:13.244940 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 4 17:06:13.246293 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 17:06:13.315865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:06:13.317467 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 17:06:13.319051 systemd[1]: Startup finished in 5.036s (kernel) + 3.187s (userspace) = 8.223s.
Sep 4 17:06:13.319740 (kubelet)[1655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:06:13.787318 kubelet[1655]: E0904 17:06:13.787230    1655 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:06:13.789824 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:06:13.789999 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:06:18.938841 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 17:06:18.950380 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:46348.service - OpenSSH per-connection server daemon (10.0.0.1:46348).
Sep 4 17:06:19.006623 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 46348 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep 4 17:06:19.010159 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:06:19.025561 systemd-logind[1525]: New session 1 of user core.
Sep 4 17:06:19.026572 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 17:06:19.035416 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 17:06:19.046901 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 17:06:19.049433 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 17:06:19.057115 (systemd)[1675]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:06:19.142943 systemd[1675]: Queued start job for default target default.target.
Sep 4 17:06:19.143337 systemd[1675]: Created slice app.slice - User Application Slice.
Sep 4 17:06:19.143385 systemd[1675]: Reached target paths.target - Paths.
Sep 4 17:06:19.143397 systemd[1675]: Reached target timers.target - Timers.
Sep 4 17:06:19.153255 systemd[1675]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 17:06:19.160099 systemd[1675]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 17:06:19.160733 systemd[1675]: Reached target sockets.target - Sockets.
Sep 4 17:06:19.160756 systemd[1675]: Reached target basic.target - Basic System.
Sep 4 17:06:19.160809 systemd[1675]: Reached target default.target - Main User Target.
Sep 4 17:06:19.160838 systemd[1675]: Startup finished in 97ms.
Sep 4 17:06:19.161006 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 17:06:19.163662 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 17:06:19.237409 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:46352.service - OpenSSH per-connection server daemon (10.0.0.1:46352).
Sep 4 17:06:19.273472 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 46352 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep 4 17:06:19.274755 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:06:19.279384 systemd-logind[1525]: New session 2 of user core.
Sep 4 17:06:19.294468 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 17:06:19.348798 sshd[1687]: pam_unix(sshd:session): session closed for user core
Sep 4 17:06:19.359452 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:46354.service - OpenSSH per-connection server daemon (10.0.0.1:46354).
Sep 4 17:06:19.359882 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:46352.service: Deactivated successfully.
Sep 4 17:06:19.362242 systemd-logind[1525]: Session 2 logged out. Waiting for processes to exit.
Sep 4 17:06:19.363370 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 17:06:19.364526 systemd-logind[1525]: Removed session 2.
Sep 4 17:06:19.395542 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 46354 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep 4 17:06:19.397099 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:06:19.401531 systemd-logind[1525]: New session 3 of user core.
Sep 4 17:06:19.414426 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 17:06:19.466004 sshd[1692]: pam_unix(sshd:session): session closed for user core
Sep 4 17:06:19.483442 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:46366.service - OpenSSH per-connection server daemon (10.0.0.1:46366).
Sep 4 17:06:19.484075 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:46354.service: Deactivated successfully.
Sep 4 17:06:19.485686 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 17:06:19.486767 systemd-logind[1525]: Session 3 logged out. Waiting for processes to exit.
Sep 4 17:06:19.488070 systemd-logind[1525]: Removed session 3.
Sep 4 17:06:19.521301 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 46366 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep 4 17:06:19.522651 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:06:19.526931 systemd-logind[1525]: New session 4 of user core.
Sep 4 17:06:19.538423 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 17:06:19.591813 sshd[1700]: pam_unix(sshd:session): session closed for user core
Sep 4 17:06:19.603415 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:46370.service - OpenSSH per-connection server daemon (10.0.0.1:46370).
Sep 4 17:06:19.603792 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:46366.service: Deactivated successfully.
Sep 4 17:06:19.606352 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 17:06:19.607025 systemd-logind[1525]: Session 4 logged out. Waiting for processes to exit.
Sep 4 17:06:19.608277 systemd-logind[1525]: Removed session 4.
Sep 4 17:06:19.637823 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 46370 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep 4 17:06:19.639117 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:06:19.642727 systemd-logind[1525]: New session 5 of user core.
Sep 4 17:06:19.654383 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 17:06:19.718375 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 17:06:19.718628 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:06:19.732017 sudo[1715]: pam_unix(sudo:session): session closed for user root
Sep 4 17:06:19.733961 sshd[1708]: pam_unix(sshd:session): session closed for user core
Sep 4 17:06:19.744428 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:46382.service - OpenSSH per-connection server daemon (10.0.0.1:46382).
Sep 4 17:06:19.744821 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:46370.service: Deactivated successfully.
Sep 4 17:06:19.747099 systemd-logind[1525]: Session 5 logged out. Waiting for processes to exit.
Sep 4 17:06:19.747559 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 17:06:19.749241 systemd-logind[1525]: Removed session 5.
Sep 4 17:06:19.778768 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 46382 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep 4 17:06:19.780097 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:06:19.783687 systemd-logind[1525]: New session 6 of user core.
Sep 4 17:06:19.797437 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 17:06:19.849662 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 17:06:19.850265 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:06:19.853514 sudo[1725]: pam_unix(sudo:session): session closed for user root
Sep 4 17:06:19.858400 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 4 17:06:19.858930 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:06:19.882409 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 4 17:06:19.884079 auditctl[1728]: No rules
Sep 4 17:06:19.884973 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 17:06:19.885264 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 4 17:06:19.887161 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:06:19.911722 augenrules[1747]: No rules
Sep 4 17:06:19.913284 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:06:19.914791 sudo[1724]: pam_unix(sudo:session): session closed for user root
Sep 4 17:06:19.916670 sshd[1717]: pam_unix(sshd:session): session closed for user core
Sep 4 17:06:19.927408 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:46384.service - OpenSSH per-connection server daemon (10.0.0.1:46384).
Sep 4 17:06:19.927831 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:46382.service: Deactivated successfully.
Sep 4 17:06:19.929887 systemd-logind[1525]: Session 6 logged out. Waiting for processes to exit.
Sep 4 17:06:19.930621 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 17:06:19.931924 systemd-logind[1525]: Removed session 6.
Sep 4 17:06:19.966974 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 46384 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep 4 17:06:19.968499 sshd[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:06:19.972677 systemd-logind[1525]: New session 7 of user core.
Sep 4 17:06:19.984427 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 17:06:20.034821 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 17:06:20.035060 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:06:20.140410 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 17:06:20.140581 (dockerd)[1771]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 17:06:20.442009 dockerd[1771]: time="2024-09-04T17:06:20.441875221Z" level=info msg="Starting up"
Sep 4 17:06:20.727038 dockerd[1771]: time="2024-09-04T17:06:20.726930940Z" level=info msg="Loading containers: start."
Sep 4 17:06:20.816158 kernel: Initializing XFRM netlink socket
Sep 4 17:06:20.881206 systemd-networkd[1238]: docker0: Link UP
Sep 4 17:06:20.902170 dockerd[1771]: time="2024-09-04T17:06:20.901719362Z" level=info msg="Loading containers: done."
Sep 4 17:06:20.975217 dockerd[1771]: time="2024-09-04T17:06:20.975162272Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 17:06:20.975447 dockerd[1771]: time="2024-09-04T17:06:20.975384305Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Sep 4 17:06:20.975733 dockerd[1771]: time="2024-09-04T17:06:20.975507639Z" level=info msg="Daemon has completed initialization"
Sep 4 17:06:21.012430 dockerd[1771]: time="2024-09-04T17:06:21.011195187Z" level=info msg="API listen on /run/docker.sock"
Sep 4 17:06:21.012767 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 17:06:21.668774 containerd[1545]: time="2024-09-04T17:06:21.668716897Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\""
Sep 4 17:06:22.573370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4031162685.mount: Deactivated successfully.
Sep 4 17:06:23.872611 containerd[1545]: time="2024-09-04T17:06:23.871822635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:06:23.872611 containerd[1545]: time="2024-09-04T17:06:23.872144168Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.13: active requests=0, bytes read=31599024"
Sep 4 17:06:23.873655 containerd[1545]: time="2024-09-04T17:06:23.873621569Z" level=info msg="ImageCreate event name:\"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:06:23.877233 containerd[1545]: time="2024-09-04T17:06:23.877186731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:06:23.879226 containerd[1545]: time="2024-09-04T17:06:23.879177415Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.13\" with image id \"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\", size \"31595822\" in 2.21041745s"
Sep 4 17:06:23.879226 containerd[1545]: time="2024-09-04T17:06:23.879226274Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\" returns image reference \"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\""
Sep 4 17:06:23.899873 containerd[1545]: time="2024-09-04T17:06:23.899832532Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\""
Sep 4 17:06:24.040297 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:06:24.047366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:06:24.137289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:06:24.141316 (kubelet)[1985]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:06:24.191996 kubelet[1985]: E0904 17:06:24.191935 1985 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:06:24.195171 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:06:24.195310 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:06:25.642855 containerd[1545]: time="2024-09-04T17:06:25.642790793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:06:25.643607 containerd[1545]: time="2024-09-04T17:06:25.643566079Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.13: active requests=0, bytes read=29019498"
Sep 4 17:06:25.644152 containerd[1545]: time="2024-09-04T17:06:25.644078276Z" level=info msg="ImageCreate event name:\"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:06:25.646948 containerd[1545]: time="2024-09-04T17:06:25.646912126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:06:25.648329 containerd[1545]: time="2024-09-04T17:06:25.648296079Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.13\" with image id \"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\", size \"30506763\" in 1.748302625s"
Sep 4 17:06:25.648329 containerd[1545]: time="2024-09-04T17:06:25.648332696Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\" returns image reference \"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\""
Sep 4 17:06:25.668450 containerd[1545]: time="2024-09-04T17:06:25.668412580Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\""
Sep 4 17:06:26.871295 containerd[1545]: time="2024-09-04T17:06:26.870836707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:06:26.871872 containerd[1545]: time="2024-09-04T17:06:26.871794372Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.13: active requests=0, bytes read=15533683"
Sep 4 17:06:26.872812 containerd[1545]: time="2024-09-04T17:06:26.872758885Z" level=info msg="ImageCreate event name:\"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:06:26.875970 containerd[1545]: time="2024-09-04T17:06:26.875930404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:06:26.878229 containerd[1545]: time="2024-09-04T17:06:26.878082575Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.13\" with image id \"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\", size \"17020966\" in 1.209628091s"
Sep 4 17:06:26.878229 containerd[1545]: time="2024-09-04T17:06:26.878133804Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\" returns image reference \"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\""
Sep 4 17:06:26.897152 containerd[1545]: time="2024-09-04T17:06:26.897088336Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\""
Sep 4 17:06:27.917430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount900284723.mount: Deactivated successfully.
Sep 4 17:06:28.272932 containerd[1545]: time="2024-09-04T17:06:28.272791464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:06:28.273649 containerd[1545]: time="2024-09-04T17:06:28.273599506Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.13: active requests=0, bytes read=24977932"
Sep 4 17:06:28.274286 containerd[1545]: time="2024-09-04T17:06:28.274250225Z" level=info msg="ImageCreate event name:\"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:06:28.276702 containerd[1545]: time="2024-09-04T17:06:28.276669068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:06:28.277537 containerd[1545]: time="2024-09-04T17:06:28.277489603Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.13\" with image id \"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\", repo tag \"registry.k8s.io/kube-proxy:v1.28.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\", size \"24976949\" in 1.380075548s"
Sep 4 17:06:28.277570 containerd[1545]: time="2024-09-04T17:06:28.277544300Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\" returns image reference \"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\""
Sep 4 17:06:28.297132 containerd[1545]: time="2024-09-04T17:06:28.297060734Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Sep 4 17:06:28.823342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2810602158.mount: Deactivated successfully.
Sep 4 17:06:28.829954 containerd[1545]: time="2024-09-04T17:06:28.829899843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:06:28.832731 containerd[1545]: time="2024-09-04T17:06:28.832677219Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Sep 4 17:06:28.833557 containerd[1545]: time="2024-09-04T17:06:28.833516415Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:06:28.836964 containerd[1545]: time="2024-09-04T17:06:28.836920164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:06:28.838147 containerd[1545]: time="2024-09-04T17:06:28.837504294Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 540.385578ms"
Sep 4 17:06:28.838147 containerd[1545]: time="2024-09-04T17:06:28.837538169Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Sep 4 17:06:28.858626 containerd[1545]: time="2024-09-04T17:06:28.858534707Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Sep 4 17:06:29.529053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2546629343.mount: Deactivated successfully.
Sep 4 17:06:31.610258 containerd[1545]: time="2024-09-04T17:06:31.610191435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:06:31.610894 containerd[1545]: time="2024-09-04T17:06:31.610856900Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Sep 4 17:06:31.611974 containerd[1545]: time="2024-09-04T17:06:31.611931251Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:06:31.615454 containerd[1545]: time="2024-09-04T17:06:31.615391750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:06:31.616914 containerd[1545]: time="2024-09-04T17:06:31.616828274Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.758254089s"
Sep 4 17:06:31.616914 containerd[1545]: time="2024-09-04T17:06:31.616864579Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Sep 4 17:06:31.637662 containerd[1545]: time="2024-09-04T17:06:31.637605195Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Sep 4 17:06:32.183651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount928440260.mount: Deactivated successfully.
Sep 4 17:06:32.617604 containerd[1545]: time="2024-09-04T17:06:32.617441479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:06:32.623419 containerd[1545]: time="2024-09-04T17:06:32.623359939Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558464"
Sep 4 17:06:32.628050 containerd[1545]: time="2024-09-04T17:06:32.628006060Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:06:32.635815 containerd[1545]: time="2024-09-04T17:06:32.635757401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:06:32.636785 containerd[1545]: time="2024-09-04T17:06:32.636698377Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 999.047951ms"
Sep 4 17:06:32.636785 containerd[1545]: time="2024-09-04T17:06:32.636732638Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\""
Sep 4 17:06:34.393319 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 17:06:34.402330 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:06:34.493427 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:06:34.498418 (kubelet)[2183]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:06:34.541264 kubelet[2183]: E0904 17:06:34.541202 2183 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:06:34.543567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:06:34.543710 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:06:36.875616 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:06:36.893312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:06:36.903651 systemd[1]: Reloading requested from client PID 2202 ('systemctl') (unit session-7.scope)...
Sep 4 17:06:36.903667 systemd[1]: Reloading...
Sep 4 17:06:36.968207 zram_generator::config[2242]: No configuration found.
Sep 4 17:06:37.086232 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:06:37.136916 systemd[1]: Reloading finished in 232 ms.
Sep 4 17:06:37.170227 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 4 17:06:37.170292 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 4 17:06:37.170528 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:06:37.172747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:06:37.261755 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:06:37.265629 (kubelet)[2297]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 17:06:37.310167 kubelet[2297]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:06:37.310167 kubelet[2297]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 4 17:06:37.310167 kubelet[2297]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:06:37.310167 kubelet[2297]: I0904 17:06:37.309245 2297 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 17:06:38.040166 kubelet[2297]: I0904 17:06:38.039653 2297 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Sep 4 17:06:38.040166 kubelet[2297]: I0904 17:06:38.039686 2297 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 17:06:38.040166 kubelet[2297]: I0904 17:06:38.039922 2297 server.go:895] "Client rotation is on, will bootstrap in background"
Sep 4 17:06:38.087896 kubelet[2297]: I0904 17:06:38.087860 2297 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 17:06:38.092219 kubelet[2297]: E0904 17:06:38.092163 2297 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.15:6443: connect: connection refused
Sep 4 17:06:38.102159 kubelet[2297]: W0904 17:06:38.101763 2297 machine.go:65] Cannot read vendor id correctly, set empty.
Sep 4 17:06:38.102554 kubelet[2297]: I0904 17:06:38.102514 2297 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 17:06:38.104902 kubelet[2297]: I0904 17:06:38.104833 2297 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 17:06:38.105029 kubelet[2297]: I0904 17:06:38.105012 2297 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep 4 17:06:38.105105 kubelet[2297]: I0904 17:06:38.105043 2297 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 17:06:38.105105 kubelet[2297]: I0904 17:06:38.105052 2297 container_manager_linux.go:301] "Creating device plugin manager"
Sep 4 17:06:38.105276 kubelet[2297]: I0904 17:06:38.105249 2297 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:06:38.106438 kubelet[2297]: I0904 17:06:38.106394 2297 kubelet.go:393] "Attempting to sync node with API server"
Sep 4 17:06:38.106438 kubelet[2297]: I0904 17:06:38.106419 2297 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 17:06:38.106525 kubelet[2297]: I0904 17:06:38.106501 2297 kubelet.go:309] "Adding apiserver pod source"
Sep 4 17:06:38.106525 kubelet[2297]: I0904 17:06:38.106513 2297 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 17:06:38.107104 kubelet[2297]: W0904 17:06:38.107027 2297 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Sep 4 17:06:38.107104 kubelet[2297]: E0904 17:06:38.107078 2297 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Sep 4 17:06:38.109498 kubelet[2297]: W0904 17:06:38.109429 2297 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Sep 4 17:06:38.109498 kubelet[2297]: E0904 17:06:38.109480 2297 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Sep 4 17:06:38.110502 kubelet[2297]: I0904 17:06:38.110483 2297 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Sep 4 17:06:38.113681 kubelet[2297]: W0904 17:06:38.113472 2297 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 17:06:38.114201 kubelet[2297]: I0904 17:06:38.114106 2297 server.go:1232] "Started kubelet"
Sep 4 17:06:38.114421 kubelet[2297]: I0904 17:06:38.114365 2297 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 17:06:38.114484 kubelet[2297]: I0904 17:06:38.114427 2297 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Sep 4 17:06:38.114803 kubelet[2297]: I0904 17:06:38.114690 2297 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 17:06:38.115207 kubelet[2297]: E0904 17:06:38.115190 2297 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Sep 4 17:06:38.115334 kubelet[2297]: E0904 17:06:38.115286 2297 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 17:06:38.116208 kubelet[2297]: I0904 17:06:38.115759 2297 server.go:462] "Adding debug handlers to kubelet server"
Sep 4 17:06:38.116438 kubelet[2297]: I0904 17:06:38.116418 2297 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 17:06:38.117789 kubelet[2297]: E0904 17:06:38.117756 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:06:38.117789 kubelet[2297]: I0904 17:06:38.117787 2297 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep 4 17:06:38.117895 kubelet[2297]: I0904 17:06:38.117883 2297 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Sep 4 17:06:38.117963 kubelet[2297]: I0904 17:06:38.117954 2297 reconciler_new.go:29] "Reconciler: start to sync state"
Sep 4 17:06:38.118308 kubelet[2297]: W0904 17:06:38.118274 2297 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Sep 4 17:06:38.118340 kubelet[2297]: E0904 17:06:38.118317 2297 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Sep 4 17:06:38.120932 kubelet[2297]: E0904 17:06:38.118945 2297 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms"
Sep 4 17:06:38.122281 kubelet[2297]: E0904 17:06:38.122160 2297 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17f2197522535d7f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 6, 38, 114078079, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 6, 38, 114078079, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.15:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.15:6443: connect: connection refused'(may retry after sleeping)
Sep 4 17:06:38.134795 kubelet[2297]: I0904 17:06:38.133022 2297 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 17:06:38.134795 kubelet[2297]: I0904 17:06:38.134017 2297 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 17:06:38.134795 kubelet[2297]: I0904 17:06:38.134035 2297 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 4 17:06:38.134795 kubelet[2297]: I0904 17:06:38.134054 2297 kubelet.go:2303] "Starting kubelet main sync loop"
Sep 4 17:06:38.134795 kubelet[2297]: E0904 17:06:38.134111 2297 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 17:06:38.143436 kubelet[2297]: W0904 17:06:38.140200 2297 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Sep 4 17:06:38.143436 kubelet[2297]: E0904 17:06:38.140250 2297 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused
Sep 4 17:06:38.163497 kubelet[2297]: I0904 17:06:38.163469 2297 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 4 17:06:38.163497 kubelet[2297]: I0904 17:06:38.163493 2297 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 4 17:06:38.163683 kubelet[2297]: I0904 17:06:38.163511 2297 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:06:38.165247 kubelet[2297]: I0904 17:06:38.165219 2297 policy_none.go:49] "None policy: Start"
Sep 4 17:06:38.165917 kubelet[2297]: I0904 17:06:38.165895 2297 memory_manager.go:169] "Starting memorymanager" policy="None"
Sep 4 17:06:38.165967 kubelet[2297]: I0904 17:06:38.165927 2297 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 17:06:38.170573 kubelet[2297]: I0904 17:06:38.170538 2297 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 17:06:38.171787 kubelet[2297]: I0904 17:06:38.170794 2297 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 17:06:38.171787 kubelet[2297]: E0904 17:06:38.171778 2297 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 4 17:06:38.219233 kubelet[2297]: I0904 17:06:38.219195 2297 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Sep 4 17:06:38.219667 kubelet[2297]: E0904 17:06:38.219647 2297 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost"
Sep 4 17:06:38.234971 kubelet[2297]: I0904 17:06:38.234876 2297 topology_manager.go:215] "Topology Admit Handler" podUID="6e531c247f5375c50bae6e50394b0a84" podNamespace="kube-system" podName="kube-apiserver-localhost"
Sep 4 17:06:38.235984 kubelet[2297]: I0904 17:06:38.235965 2297 topology_manager.go:215] "Topology Admit Handler" podUID="f5bf8d52acd7337c82951a97b42c345d" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Sep 4 17:06:38.236816 kubelet[2297]: I0904 17:06:38.236772 2297 topology_manager.go:215] "Topology Admit Handler" podUID="cacd2a680dbc59f99275412e0ba6e38b" podNamespace="kube-system" podName="kube-scheduler-localhost"
Sep 4 17:06:38.319946 kubelet[2297]: E0904 17:06:38.319829 2297 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="400ms"
Sep 4 17:06:38.419383 kubelet[2297]: I0904 17:06:38.419317 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName:
\"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:06:38.419383 kubelet[2297]: I0904 17:06:38.419361 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:06:38.419383 kubelet[2297]: I0904 17:06:38.419388 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:06:38.419536 kubelet[2297]: I0904 17:06:38.419484 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6e531c247f5375c50bae6e50394b0a84-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6e531c247f5375c50bae6e50394b0a84\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:06:38.419536 kubelet[2297]: I0904 17:06:38.419521 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e531c247f5375c50bae6e50394b0a84-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6e531c247f5375c50bae6e50394b0a84\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:06:38.419607 kubelet[2297]: I0904 17:06:38.419546 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/6e531c247f5375c50bae6e50394b0a84-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6e531c247f5375c50bae6e50394b0a84\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:06:38.419607 kubelet[2297]: I0904 17:06:38.419572 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:06:38.419607 kubelet[2297]: I0904 17:06:38.419601 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:06:38.419667 kubelet[2297]: I0904 17:06:38.419622 2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cacd2a680dbc59f99275412e0ba6e38b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cacd2a680dbc59f99275412e0ba6e38b\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:06:38.421433 kubelet[2297]: I0904 17:06:38.421395 2297 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:06:38.421968 kubelet[2297]: E0904 17:06:38.421931 2297 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Sep 4 17:06:38.541368 kubelet[2297]: E0904 17:06:38.541320 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:38.541467 kubelet[2297]: E0904 17:06:38.541322 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:38.541623 kubelet[2297]: E0904 17:06:38.541591 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:38.542032 containerd[1545]: time="2024-09-04T17:06:38.541974370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6e531c247f5375c50bae6e50394b0a84,Namespace:kube-system,Attempt:0,}" Sep 4 17:06:38.542604 containerd[1545]: time="2024-09-04T17:06:38.541981212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cacd2a680dbc59f99275412e0ba6e38b,Namespace:kube-system,Attempt:0,}" Sep 4 17:06:38.542604 containerd[1545]: time="2024-09-04T17:06:38.542112848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f5bf8d52acd7337c82951a97b42c345d,Namespace:kube-system,Attempt:0,}" Sep 4 17:06:38.721076 kubelet[2297]: E0904 17:06:38.721045 2297 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms" Sep 4 17:06:38.823430 kubelet[2297]: I0904 17:06:38.823389 2297 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:06:38.823762 kubelet[2297]: E0904 17:06:38.823728 2297 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Sep 4 17:06:39.008772 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3679366302.mount: Deactivated successfully. Sep 4 17:06:39.014371 containerd[1545]: time="2024-09-04T17:06:39.014299773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:06:39.015353 containerd[1545]: time="2024-09-04T17:06:39.015312337Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:06:39.016390 containerd[1545]: time="2024-09-04T17:06:39.016338863Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:06:39.017643 containerd[1545]: time="2024-09-04T17:06:39.017601327Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:06:39.018302 containerd[1545]: time="2024-09-04T17:06:39.018144497Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 4 17:06:39.018790 containerd[1545]: time="2024-09-04T17:06:39.018760525Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:06:39.019463 containerd[1545]: time="2024-09-04T17:06:39.019408041Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:06:39.023920 containerd[1545]: time="2024-09-04T17:06:39.023868952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:06:39.025199 containerd[1545]: time="2024-09-04T17:06:39.025165584Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 483.025889ms" Sep 4 17:06:39.025994 containerd[1545]: time="2024-09-04T17:06:39.025939090Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 483.738258ms" Sep 4 17:06:39.027905 containerd[1545]: time="2024-09-04T17:06:39.027868033Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 485.800758ms" Sep 4 17:06:39.196737 containerd[1545]: time="2024-09-04T17:06:39.196470662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:06:39.196737 containerd[1545]: time="2024-09-04T17:06:39.196560844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:06:39.196737 containerd[1545]: time="2024-09-04T17:06:39.196575608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:06:39.196737 containerd[1545]: time="2024-09-04T17:06:39.196585010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:06:39.196737 containerd[1545]: time="2024-09-04T17:06:39.196581769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:06:39.196737 containerd[1545]: time="2024-09-04T17:06:39.196648385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:06:39.196737 containerd[1545]: time="2024-09-04T17:06:39.196662509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:06:39.196737 containerd[1545]: time="2024-09-04T17:06:39.196730845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:06:39.201366 containerd[1545]: time="2024-09-04T17:06:39.201255292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:06:39.201366 containerd[1545]: time="2024-09-04T17:06:39.201311185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:06:39.201366 containerd[1545]: time="2024-09-04T17:06:39.201324989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:06:39.201366 containerd[1545]: time="2024-09-04T17:06:39.201341313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:06:39.246062 kubelet[2297]: W0904 17:06:39.245988 2297 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 4 17:06:39.246062 kubelet[2297]: E0904 17:06:39.246062 2297 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 4 17:06:39.252960 containerd[1545]: time="2024-09-04T17:06:39.252540134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6e531c247f5375c50bae6e50394b0a84,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f1fdc5e10824ec49c3c219328087ed8a46656c2de226004f595fad090437650\"" Sep 4 17:06:39.252960 containerd[1545]: time="2024-09-04T17:06:39.252822882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f5bf8d52acd7337c82951a97b42c345d,Namespace:kube-system,Attempt:0,} returns sandbox id \"36d3bff9b29379366f3d8000566bdaa2082a4ac4ba6b1bfe986bfcd89f249d7b\"" Sep 4 17:06:39.254202 kubelet[2297]: E0904 17:06:39.254172 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:39.254381 kubelet[2297]: E0904 17:06:39.254244 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:39.254441 containerd[1545]: time="2024-09-04T17:06:39.254173406Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:cacd2a680dbc59f99275412e0ba6e38b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0a5448a91d442f4ad637bc60dfc9444da92a88ea3da9202e3c40b999dd8e843\"" Sep 4 17:06:39.254805 kubelet[2297]: E0904 17:06:39.254758 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:39.257080 containerd[1545]: time="2024-09-04T17:06:39.256808119Z" level=info msg="CreateContainer within sandbox \"e0a5448a91d442f4ad637bc60dfc9444da92a88ea3da9202e3c40b999dd8e843\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:06:39.257080 containerd[1545]: time="2024-09-04T17:06:39.256987402Z" level=info msg="CreateContainer within sandbox \"36d3bff9b29379366f3d8000566bdaa2082a4ac4ba6b1bfe986bfcd89f249d7b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:06:39.257750 containerd[1545]: time="2024-09-04T17:06:39.257706655Z" level=info msg="CreateContainer within sandbox \"0f1fdc5e10824ec49c3c219328087ed8a46656c2de226004f595fad090437650\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:06:39.272826 containerd[1545]: time="2024-09-04T17:06:39.272711540Z" level=info msg="CreateContainer within sandbox \"e0a5448a91d442f4ad637bc60dfc9444da92a88ea3da9202e3c40b999dd8e843\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5e59b01ada8ece0ade2fc9cdd454f4b20d56d6bf749e351a1f9afc0945bbbf1f\"" Sep 4 17:06:39.273602 containerd[1545]: time="2024-09-04T17:06:39.273567506Z" level=info msg="StartContainer for \"5e59b01ada8ece0ade2fc9cdd454f4b20d56d6bf749e351a1f9afc0945bbbf1f\"" Sep 4 17:06:39.277592 containerd[1545]: time="2024-09-04T17:06:39.277493329Z" level=info msg="CreateContainer within sandbox \"0f1fdc5e10824ec49c3c219328087ed8a46656c2de226004f595fad090437650\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5ff852f140d2675e75ad3be9c5f48212eae4cebdc157078627515b6b8e0f91b9\"" Sep 4 17:06:39.278183 containerd[1545]: time="2024-09-04T17:06:39.278099955Z" level=info msg="StartContainer for \"5ff852f140d2675e75ad3be9c5f48212eae4cebdc157078627515b6b8e0f91b9\"" Sep 4 17:06:39.278565 containerd[1545]: time="2024-09-04T17:06:39.278512334Z" level=info msg="CreateContainer within sandbox \"36d3bff9b29379366f3d8000566bdaa2082a4ac4ba6b1bfe986bfcd89f249d7b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"286d442c39bc6a27420e88299674eebd913bb5cfbed018db16b114c4643eced5\"" Sep 4 17:06:39.279063 containerd[1545]: time="2024-09-04T17:06:39.279019096Z" level=info msg="StartContainer for \"286d442c39bc6a27420e88299674eebd913bb5cfbed018db16b114c4643eced5\"" Sep 4 17:06:39.340749 containerd[1545]: time="2024-09-04T17:06:39.340609814Z" level=info msg="StartContainer for \"5ff852f140d2675e75ad3be9c5f48212eae4cebdc157078627515b6b8e0f91b9\" returns successfully" Sep 4 17:06:39.370648 containerd[1545]: time="2024-09-04T17:06:39.370599659Z" level=info msg="StartContainer for \"286d442c39bc6a27420e88299674eebd913bb5cfbed018db16b114c4643eced5\" returns successfully" Sep 4 17:06:39.370972 containerd[1545]: time="2024-09-04T17:06:39.370867844Z" level=info msg="StartContainer for \"5e59b01ada8ece0ade2fc9cdd454f4b20d56d6bf749e351a1f9afc0945bbbf1f\" returns successfully" Sep 4 17:06:39.451368 kubelet[2297]: W0904 17:06:39.451302 2297 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 4 17:06:39.451368 kubelet[2297]: E0904 17:06:39.451367 2297 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.15:6443: connect: connection refused Sep 4 17:06:39.523291 kubelet[2297]: E0904 17:06:39.523179 2297 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="1.6s" Sep 4 17:06:39.625350 kubelet[2297]: I0904 17:06:39.625257 2297 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Sep 4 17:06:40.153356 kubelet[2297]: E0904 17:06:40.153323 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:40.157160 kubelet[2297]: E0904 17:06:40.155413 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:40.157160 kubelet[2297]: E0904 17:06:40.157018 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:41.117962 kubelet[2297]: I0904 17:06:41.117872 2297 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Sep 4 17:06:41.129034 kubelet[2297]: E0904 17:06:41.129003 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:06:41.159089 kubelet[2297]: E0904 17:06:41.159034 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:41.229955 kubelet[2297]: E0904 17:06:41.229909 2297 kubelet_node_status.go:458] "Error getting the current node from 
lister" err="node \"localhost\" not found" Sep 4 17:06:41.330459 kubelet[2297]: E0904 17:06:41.330416 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:06:41.431578 kubelet[2297]: E0904 17:06:41.431520 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:06:41.532086 kubelet[2297]: E0904 17:06:41.532056 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:06:41.632819 kubelet[2297]: E0904 17:06:41.632788 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:06:41.733609 kubelet[2297]: E0904 17:06:41.733496 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:06:41.834111 kubelet[2297]: E0904 17:06:41.834066 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:06:41.934758 kubelet[2297]: E0904 17:06:41.934714 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:06:42.035382 kubelet[2297]: E0904 17:06:42.035256 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:06:42.071311 kubelet[2297]: E0904 17:06:42.071285 2297 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:42.135770 kubelet[2297]: E0904 17:06:42.135733 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:06:42.236902 kubelet[2297]: E0904 17:06:42.236862 2297 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" 
Sep 4 17:06:43.109941 kubelet[2297]: I0904 17:06:43.109886 2297 apiserver.go:52] "Watching apiserver" Sep 4 17:06:43.118634 kubelet[2297]: I0904 17:06:43.118554 2297 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:06:43.875090 systemd[1]: Reloading requested from client PID 2577 ('systemctl') (unit session-7.scope)... Sep 4 17:06:43.875104 systemd[1]: Reloading... Sep 4 17:06:43.929267 zram_generator::config[2617]: No configuration found. Sep 4 17:06:44.018838 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:06:44.078003 systemd[1]: Reloading finished in 202 ms. Sep 4 17:06:44.109821 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:06:44.118152 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:06:44.118575 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:06:44.135394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:06:44.227861 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:06:44.233896 (kubelet)[2666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:06:44.282044 kubelet[2666]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:06:44.282044 kubelet[2666]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 4 17:06:44.282044 kubelet[2666]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:06:44.282402 kubelet[2666]: I0904 17:06:44.282091 2666 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:06:44.286327 kubelet[2666]: I0904 17:06:44.286293 2666 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:06:44.286327 kubelet[2666]: I0904 17:06:44.286325 2666 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:06:44.286491 kubelet[2666]: I0904 17:06:44.286475 2666 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:06:44.287983 kubelet[2666]: I0904 17:06:44.287955 2666 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:06:44.291609 kubelet[2666]: I0904 17:06:44.290777 2666 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:06:44.295061 kubelet[2666]: W0904 17:06:44.295032 2666 machine.go:65] Cannot read vendor id correctly, set empty. Sep 4 17:06:44.295857 kubelet[2666]: I0904 17:06:44.295837 2666 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:06:44.296307 kubelet[2666]: I0904 17:06:44.296280 2666 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:06:44.296464 kubelet[2666]: I0904 17:06:44.296440 2666 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:06:44.296546 kubelet[2666]: I0904 17:06:44.296472 2666 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:06:44.296546 kubelet[2666]: I0904 17:06:44.296482 2666 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:06:44.296546 kubelet[2666]: I0904 
17:06:44.296515 2666 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:06:44.296627 kubelet[2666]: I0904 17:06:44.296609 2666 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:06:44.296627 kubelet[2666]: I0904 17:06:44.296623 2666 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:06:44.296668 kubelet[2666]: I0904 17:06:44.296642 2666 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:06:44.296668 kubelet[2666]: I0904 17:06:44.296653 2666 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:06:44.299360 kubelet[2666]: I0904 17:06:44.297421 2666 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:06:44.299360 kubelet[2666]: I0904 17:06:44.297879 2666 server.go:1232] "Started kubelet" Sep 4 17:06:44.299360 kubelet[2666]: I0904 17:06:44.298663 2666 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:06:44.299360 kubelet[2666]: I0904 17:06:44.298881 2666 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:06:44.299360 kubelet[2666]: I0904 17:06:44.299110 2666 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:06:44.300322 kubelet[2666]: E0904 17:06:44.300303 2666 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:06:44.300419 kubelet[2666]: E0904 17:06:44.300409 2666 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:06:44.301790 kubelet[2666]: I0904 17:06:44.301774 2666 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:06:44.302966 kubelet[2666]: I0904 17:06:44.302950 2666 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:06:44.304860 kubelet[2666]: I0904 17:06:44.303891 2666 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:06:44.304860 kubelet[2666]: I0904 17:06:44.304380 2666 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:06:44.304860 kubelet[2666]: I0904 17:06:44.304566 2666 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:06:44.305273 kubelet[2666]: E0904 17:06:44.305249 2666 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 4 17:06:44.343299 kubelet[2666]: I0904 17:06:44.343266 2666 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:06:44.347494 kubelet[2666]: I0904 17:06:44.347467 2666 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:06:44.347976 kubelet[2666]: I0904 17:06:44.347960 2666 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:06:44.348098 kubelet[2666]: I0904 17:06:44.348081 2666 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:06:44.348242 kubelet[2666]: E0904 17:06:44.348228 2666 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:06:44.401912 kubelet[2666]: I0904 17:06:44.401884 2666 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:06:44.402063 kubelet[2666]: I0904 17:06:44.402051 2666 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:06:44.402145 kubelet[2666]: I0904 17:06:44.402115 2666 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:06:44.402349 kubelet[2666]: I0904 17:06:44.402335 2666 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:06:44.402433 kubelet[2666]: I0904 17:06:44.402423 2666 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:06:44.402488 kubelet[2666]: I0904 17:06:44.402480 2666 policy_none.go:49] "None policy: Start" Sep 4 17:06:44.403296 kubelet[2666]: I0904 17:06:44.403265 2666 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:06:44.403362 kubelet[2666]: I0904 17:06:44.403306 2666 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:06:44.403513 kubelet[2666]: I0904 17:06:44.403494 2666 state_mem.go:75] "Updated machine memory state" Sep 4 17:06:44.404584 kubelet[2666]: I0904 17:06:44.404530 2666 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:06:44.406761 kubelet[2666]: I0904 17:06:44.406477 2666 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:06:44.409943 kubelet[2666]: I0904 17:06:44.409398 2666 kubelet_node_status.go:70] "Attempting to register node" 
node="localhost" Sep 4 17:06:44.446807 kubelet[2666]: I0904 17:06:44.446764 2666 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Sep 4 17:06:44.446925 kubelet[2666]: I0904 17:06:44.446863 2666 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Sep 4 17:06:44.448549 kubelet[2666]: I0904 17:06:44.448492 2666 topology_manager.go:215] "Topology Admit Handler" podUID="f5bf8d52acd7337c82951a97b42c345d" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:06:44.448701 kubelet[2666]: I0904 17:06:44.448685 2666 topology_manager.go:215] "Topology Admit Handler" podUID="cacd2a680dbc59f99275412e0ba6e38b" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:06:44.448747 kubelet[2666]: I0904 17:06:44.448730 2666 topology_manager.go:215] "Topology Admit Handler" podUID="6e531c247f5375c50bae6e50394b0a84" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:06:44.506271 kubelet[2666]: I0904 17:06:44.506208 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:06:44.506271 kubelet[2666]: I0904 17:06:44.506250 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:06:44.506422 kubelet[2666]: I0904 17:06:44.506322 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:06:44.506422 kubelet[2666]: I0904 17:06:44.506375 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:06:44.506422 kubelet[2666]: I0904 17:06:44.506394 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5bf8d52acd7337c82951a97b42c345d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f5bf8d52acd7337c82951a97b42c345d\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:06:44.506485 kubelet[2666]: I0904 17:06:44.506428 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6e531c247f5375c50bae6e50394b0a84-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6e531c247f5375c50bae6e50394b0a84\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:06:44.506485 kubelet[2666]: I0904 17:06:44.506451 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cacd2a680dbc59f99275412e0ba6e38b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"cacd2a680dbc59f99275412e0ba6e38b\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:06:44.506485 kubelet[2666]: I0904 17:06:44.506470 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/6e531c247f5375c50bae6e50394b0a84-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6e531c247f5375c50bae6e50394b0a84\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:06:44.506545 kubelet[2666]: I0904 17:06:44.506500 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6e531c247f5375c50bae6e50394b0a84-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6e531c247f5375c50bae6e50394b0a84\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:06:44.763339 kubelet[2666]: E0904 17:06:44.762794 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:44.763339 kubelet[2666]: E0904 17:06:44.763230 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:44.764880 kubelet[2666]: E0904 17:06:44.764849 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:45.298277 kubelet[2666]: I0904 17:06:45.298225 2666 apiserver.go:52] "Watching apiserver" Sep 4 17:06:45.304730 kubelet[2666]: I0904 17:06:45.304700 2666 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:06:45.361703 kubelet[2666]: E0904 17:06:45.361678 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:45.364197 kubelet[2666]: E0904 17:06:45.363296 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Sep 4 17:06:45.365601 kubelet[2666]: E0904 17:06:45.365076 2666 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 4 17:06:45.366165 kubelet[2666]: E0904 17:06:45.366146 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:45.405872 kubelet[2666]: I0904 17:06:45.405823 2666 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.40576687 podCreationTimestamp="2024-09-04 17:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:06:45.394115654 +0000 UTC m=+1.156036194" watchObservedRunningTime="2024-09-04 17:06:45.40576687 +0000 UTC m=+1.167687450" Sep 4 17:06:45.415908 kubelet[2666]: I0904 17:06:45.414972 2666 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.41493917 podCreationTimestamp="2024-09-04 17:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:06:45.40624754 +0000 UTC m=+1.168168120" watchObservedRunningTime="2024-09-04 17:06:45.41493917 +0000 UTC m=+1.176859750" Sep 4 17:06:45.415908 kubelet[2666]: I0904 17:06:45.415042 2666 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.415025935 podCreationTimestamp="2024-09-04 17:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:06:45.414230725 +0000 UTC m=+1.176151305" watchObservedRunningTime="2024-09-04 
17:06:45.415025935 +0000 UTC m=+1.176946515" Sep 4 17:06:46.366136 kubelet[2666]: E0904 17:06:46.366091 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:46.946117 kubelet[2666]: E0904 17:06:46.946079 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:47.396249 kubelet[2666]: E0904 17:06:47.396221 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:48.097399 kubelet[2666]: E0904 17:06:48.097359 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:49.439656 sudo[1760]: pam_unix(sudo:session): session closed for user root Sep 4 17:06:49.460839 sshd[1753]: pam_unix(sshd:session): session closed for user core Sep 4 17:06:49.463777 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:46384.service: Deactivated successfully. Sep 4 17:06:49.466996 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:06:49.467413 systemd-logind[1525]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:06:49.469136 systemd-logind[1525]: Removed session 7. 
Sep 4 17:06:55.953011 kubelet[2666]: I0904 17:06:55.951426 2666 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:06:55.953011 kubelet[2666]: I0904 17:06:55.952063 2666 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:06:55.953381 containerd[1545]: time="2024-09-04T17:06:55.951746181Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:06:56.775180 kubelet[2666]: I0904 17:06:56.775140 2666 topology_manager.go:215] "Topology Admit Handler" podUID="f0ec0f22-fb8d-4275-8e61-e54d9e3182dc" podNamespace="kube-system" podName="kube-proxy-55jmq" Sep 4 17:06:56.786473 kubelet[2666]: I0904 17:06:56.786418 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nw26\" (UniqueName: \"kubernetes.io/projected/f0ec0f22-fb8d-4275-8e61-e54d9e3182dc-kube-api-access-8nw26\") pod \"kube-proxy-55jmq\" (UID: \"f0ec0f22-fb8d-4275-8e61-e54d9e3182dc\") " pod="kube-system/kube-proxy-55jmq" Sep 4 17:06:56.786473 kubelet[2666]: I0904 17:06:56.786470 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f0ec0f22-fb8d-4275-8e61-e54d9e3182dc-kube-proxy\") pod \"kube-proxy-55jmq\" (UID: \"f0ec0f22-fb8d-4275-8e61-e54d9e3182dc\") " pod="kube-system/kube-proxy-55jmq" Sep 4 17:06:56.786645 kubelet[2666]: I0904 17:06:56.786491 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0ec0f22-fb8d-4275-8e61-e54d9e3182dc-xtables-lock\") pod \"kube-proxy-55jmq\" (UID: \"f0ec0f22-fb8d-4275-8e61-e54d9e3182dc\") " pod="kube-system/kube-proxy-55jmq" Sep 4 17:06:56.786645 kubelet[2666]: I0904 17:06:56.786510 2666 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0ec0f22-fb8d-4275-8e61-e54d9e3182dc-lib-modules\") pod \"kube-proxy-55jmq\" (UID: \"f0ec0f22-fb8d-4275-8e61-e54d9e3182dc\") " pod="kube-system/kube-proxy-55jmq" Sep 4 17:06:56.952648 kubelet[2666]: E0904 17:06:56.952602 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:56.994571 kubelet[2666]: I0904 17:06:56.994513 2666 topology_manager.go:215] "Topology Admit Handler" podUID="5748f10e-e458-4510-8883-787aa1c9c37f" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-nl6nl" Sep 4 17:06:57.080316 kubelet[2666]: E0904 17:06:57.080174 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:57.085669 containerd[1545]: time="2024-09-04T17:06:57.085621417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-55jmq,Uid:f0ec0f22-fb8d-4275-8e61-e54d9e3182dc,Namespace:kube-system,Attempt:0,}" Sep 4 17:06:57.088299 kubelet[2666]: I0904 17:06:57.088216 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5748f10e-e458-4510-8883-787aa1c9c37f-var-lib-calico\") pod \"tigera-operator-5d56685c77-nl6nl\" (UID: \"5748f10e-e458-4510-8883-787aa1c9c37f\") " pod="tigera-operator/tigera-operator-5d56685c77-nl6nl" Sep 4 17:06:57.088299 kubelet[2666]: I0904 17:06:57.088270 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q88dp\" (UniqueName: \"kubernetes.io/projected/5748f10e-e458-4510-8883-787aa1c9c37f-kube-api-access-q88dp\") pod \"tigera-operator-5d56685c77-nl6nl\" (UID: 
\"5748f10e-e458-4510-8883-787aa1c9c37f\") " pod="tigera-operator/tigera-operator-5d56685c77-nl6nl" Sep 4 17:06:57.104799 containerd[1545]: time="2024-09-04T17:06:57.104699725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:06:57.104799 containerd[1545]: time="2024-09-04T17:06:57.104762127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:06:57.104799 containerd[1545]: time="2024-09-04T17:06:57.104777768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:06:57.104799 containerd[1545]: time="2024-09-04T17:06:57.104788168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:06:57.134919 containerd[1545]: time="2024-09-04T17:06:57.134870078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-55jmq,Uid:f0ec0f22-fb8d-4275-8e61-e54d9e3182dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"239a5f63e3ee7cccde0e55c504c928ae5ec20207b650be9027274f80b301fe83\"" Sep 4 17:06:57.135718 kubelet[2666]: E0904 17:06:57.135692 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:57.139541 containerd[1545]: time="2024-09-04T17:06:57.139508711Z" level=info msg="CreateContainer within sandbox \"239a5f63e3ee7cccde0e55c504c928ae5ec20207b650be9027274f80b301fe83\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:06:57.162396 containerd[1545]: time="2024-09-04T17:06:57.162261580Z" level=info msg="CreateContainer within sandbox \"239a5f63e3ee7cccde0e55c504c928ae5ec20207b650be9027274f80b301fe83\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container 
id \"e05c77a74c5fa9129d1381106c63e5fde240d63e5ab3deb1a01a6158a9766c75\"" Sep 4 17:06:57.162880 containerd[1545]: time="2024-09-04T17:06:57.162845320Z" level=info msg="StartContainer for \"e05c77a74c5fa9129d1381106c63e5fde240d63e5ab3deb1a01a6158a9766c75\"" Sep 4 17:06:57.214769 containerd[1545]: time="2024-09-04T17:06:57.211369717Z" level=info msg="StartContainer for \"e05c77a74c5fa9129d1381106c63e5fde240d63e5ab3deb1a01a6158a9766c75\" returns successfully" Sep 4 17:06:57.297246 containerd[1545]: time="2024-09-04T17:06:57.297061899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-nl6nl,Uid:5748f10e-e458-4510-8883-787aa1c9c37f,Namespace:tigera-operator,Attempt:0,}" Sep 4 17:06:57.328936 containerd[1545]: time="2024-09-04T17:06:57.328465933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:06:57.328936 containerd[1545]: time="2024-09-04T17:06:57.328555376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:06:57.328936 containerd[1545]: time="2024-09-04T17:06:57.328580816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:06:57.329634 containerd[1545]: time="2024-09-04T17:06:57.329519727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:06:57.373001 containerd[1545]: time="2024-09-04T17:06:57.372894395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-nl6nl,Uid:5748f10e-e458-4510-8883-787aa1c9c37f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6d9eed004f6cb7015bbee962b1086fd1946c293298c01729aef994994332d8a5\"" Sep 4 17:06:57.375850 containerd[1545]: time="2024-09-04T17:06:57.374798098Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 17:06:57.393476 kubelet[2666]: E0904 17:06:57.393429 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:57.405365 kubelet[2666]: E0904 17:06:57.405314 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:57.413178 kubelet[2666]: I0904 17:06:57.412999 2666 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-55jmq" podStartSLOduration=1.4129627550000001 podCreationTimestamp="2024-09-04 17:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:06:57.401854029 +0000 UTC m=+13.163774609" watchObservedRunningTime="2024-09-04 17:06:57.412962755 +0000 UTC m=+13.174883335" Sep 4 17:06:58.104448 kubelet[2666]: E0904 17:06:58.104405 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:06:58.108985 update_engine[1534]: I0904 17:06:58.104844 1534 update_attempter.cc:509] Updating boot flags... 
Sep 4 17:06:58.137287 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2950) Sep 4 17:06:58.176628 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (3004) Sep 4 17:06:58.872628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1330095097.mount: Deactivated successfully. Sep 4 17:06:59.218999 containerd[1545]: time="2024-09-04T17:06:59.218586305Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:06:59.219547 containerd[1545]: time="2024-09-04T17:06:59.219021517Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485911" Sep 4 17:06:59.219948 containerd[1545]: time="2024-09-04T17:06:59.219921384Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:06:59.222156 containerd[1545]: time="2024-09-04T17:06:59.222107129Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:06:59.223069 containerd[1545]: time="2024-09-04T17:06:59.223031557Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 1.848199098s" Sep 4 17:06:59.223126 containerd[1545]: time="2024-09-04T17:06:59.223069358Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\"" Sep 4 17:06:59.227082 
containerd[1545]: time="2024-09-04T17:06:59.227049277Z" level=info msg="CreateContainer within sandbox \"6d9eed004f6cb7015bbee962b1086fd1946c293298c01729aef994994332d8a5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 17:06:59.249433 containerd[1545]: time="2024-09-04T17:06:59.249376942Z" level=info msg="CreateContainer within sandbox \"6d9eed004f6cb7015bbee962b1086fd1946c293298c01729aef994994332d8a5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d64289977bc921ad48371b2e2e52b5fb0bdce369a68a1fb20c3e4f6ce218206c\"" Sep 4 17:06:59.249800 containerd[1545]: time="2024-09-04T17:06:59.249765793Z" level=info msg="StartContainer for \"d64289977bc921ad48371b2e2e52b5fb0bdce369a68a1fb20c3e4f6ce218206c\"" Sep 4 17:06:59.354521 containerd[1545]: time="2024-09-04T17:06:59.354410231Z" level=info msg="StartContainer for \"d64289977bc921ad48371b2e2e52b5fb0bdce369a68a1fb20c3e4f6ce218206c\" returns successfully" Sep 4 17:06:59.420175 kubelet[2666]: I0904 17:06:59.420084 2666 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-nl6nl" podStartSLOduration=1.568702193 podCreationTimestamp="2024-09-04 17:06:56 +0000 UTC" firstStartedPulling="2024-09-04 17:06:57.374379404 +0000 UTC m=+13.136299984" lastFinishedPulling="2024-09-04 17:06:59.225720477 +0000 UTC m=+14.987641057" observedRunningTime="2024-09-04 17:06:59.419991104 +0000 UTC m=+15.181911684" watchObservedRunningTime="2024-09-04 17:06:59.420043266 +0000 UTC m=+15.181963846" Sep 4 17:07:03.157688 kubelet[2666]: I0904 17:07:03.157344 2666 topology_manager.go:215] "Topology Admit Handler" podUID="26f688f5-f41e-4a90-8a86-cc5f01dd7199" podNamespace="calico-system" podName="calico-typha-f5977b847-qmlwn" Sep 4 17:07:03.205624 kubelet[2666]: I0904 17:07:03.204771 2666 topology_manager.go:215] "Topology Admit Handler" podUID="c4421213-a737-4a28-8be1-bfc870ccfc6d" podNamespace="calico-system" 
podName="calico-node-jrswn" Sep 4 17:07:03.230956 kubelet[2666]: I0904 17:07:03.230904 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-cni-bin-dir\") pod \"calico-node-jrswn\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") " pod="calico-system/calico-node-jrswn" Sep 4 17:07:03.230956 kubelet[2666]: I0904 17:07:03.230959 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c4421213-a737-4a28-8be1-bfc870ccfc6d-node-certs\") pod \"calico-node-jrswn\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") " pod="calico-system/calico-node-jrswn" Sep 4 17:07:03.231147 kubelet[2666]: I0904 17:07:03.230983 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-policysync\") pod \"calico-node-jrswn\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") " pod="calico-system/calico-node-jrswn" Sep 4 17:07:03.231147 kubelet[2666]: I0904 17:07:03.231005 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-cni-log-dir\") pod \"calico-node-jrswn\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") " pod="calico-system/calico-node-jrswn" Sep 4 17:07:03.231147 kubelet[2666]: I0904 17:07:03.231032 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-cni-net-dir\") pod \"calico-node-jrswn\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") " pod="calico-system/calico-node-jrswn" Sep 4 17:07:03.231147 kubelet[2666]: I0904 17:07:03.231058 2666 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26f688f5-f41e-4a90-8a86-cc5f01dd7199-tigera-ca-bundle\") pod \"calico-typha-f5977b847-qmlwn\" (UID: \"26f688f5-f41e-4a90-8a86-cc5f01dd7199\") " pod="calico-system/calico-typha-f5977b847-qmlwn" Sep 4 17:07:03.231147 kubelet[2666]: I0904 17:07:03.231078 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-lib-modules\") pod \"calico-node-jrswn\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") " pod="calico-system/calico-node-jrswn" Sep 4 17:07:03.231370 kubelet[2666]: I0904 17:07:03.231101 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-xtables-lock\") pod \"calico-node-jrswn\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") " pod="calico-system/calico-node-jrswn" Sep 4 17:07:03.231370 kubelet[2666]: I0904 17:07:03.231142 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-flexvol-driver-host\") pod \"calico-node-jrswn\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") " pod="calico-system/calico-node-jrswn" Sep 4 17:07:03.231370 kubelet[2666]: I0904 17:07:03.231184 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/26f688f5-f41e-4a90-8a86-cc5f01dd7199-typha-certs\") pod \"calico-typha-f5977b847-qmlwn\" (UID: \"26f688f5-f41e-4a90-8a86-cc5f01dd7199\") " pod="calico-system/calico-typha-f5977b847-qmlwn" Sep 4 17:07:03.231370 kubelet[2666]: I0904 17:07:03.231210 2666 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fjtc\" (UniqueName: \"kubernetes.io/projected/26f688f5-f41e-4a90-8a86-cc5f01dd7199-kube-api-access-6fjtc\") pod \"calico-typha-f5977b847-qmlwn\" (UID: \"26f688f5-f41e-4a90-8a86-cc5f01dd7199\") " pod="calico-system/calico-typha-f5977b847-qmlwn" Sep 4 17:07:03.231370 kubelet[2666]: I0904 17:07:03.231233 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4421213-a737-4a28-8be1-bfc870ccfc6d-tigera-ca-bundle\") pod \"calico-node-jrswn\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") " pod="calico-system/calico-node-jrswn" Sep 4 17:07:03.231514 kubelet[2666]: I0904 17:07:03.231287 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-var-lib-calico\") pod \"calico-node-jrswn\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") " pod="calico-system/calico-node-jrswn" Sep 4 17:07:03.231514 kubelet[2666]: I0904 17:07:03.231339 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gbt2\" (UniqueName: \"kubernetes.io/projected/c4421213-a737-4a28-8be1-bfc870ccfc6d-kube-api-access-4gbt2\") pod \"calico-node-jrswn\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") " pod="calico-system/calico-node-jrswn" Sep 4 17:07:03.231514 kubelet[2666]: I0904 17:07:03.231390 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-var-run-calico\") pod \"calico-node-jrswn\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") " pod="calico-system/calico-node-jrswn" Sep 4 17:07:03.318999 kubelet[2666]: I0904 17:07:03.318946 2666 topology_manager.go:215] "Topology Admit 
Handler" podUID="3bc96ff6-744d-455a-9a38-773fca98cdc6" podNamespace="calico-system" podName="csi-node-driver-tbgrz" Sep 4 17:07:03.320988 kubelet[2666]: E0904 17:07:03.319600 2666 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbgrz" podUID="3bc96ff6-744d-455a-9a38-773fca98cdc6" Sep 4 17:07:03.333339 kubelet[2666]: I0904 17:07:03.333299 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3bc96ff6-744d-455a-9a38-773fca98cdc6-socket-dir\") pod \"csi-node-driver-tbgrz\" (UID: \"3bc96ff6-744d-455a-9a38-773fca98cdc6\") " pod="calico-system/csi-node-driver-tbgrz" Sep 4 17:07:03.335504 kubelet[2666]: I0904 17:07:03.333800 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3bc96ff6-744d-455a-9a38-773fca98cdc6-kubelet-dir\") pod \"csi-node-driver-tbgrz\" (UID: \"3bc96ff6-744d-455a-9a38-773fca98cdc6\") " pod="calico-system/csi-node-driver-tbgrz" Sep 4 17:07:03.335504 kubelet[2666]: I0904 17:07:03.333836 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8s5s\" (UniqueName: \"kubernetes.io/projected/3bc96ff6-744d-455a-9a38-773fca98cdc6-kube-api-access-q8s5s\") pod \"csi-node-driver-tbgrz\" (UID: \"3bc96ff6-744d-455a-9a38-773fca98cdc6\") " pod="calico-system/csi-node-driver-tbgrz" Sep 4 17:07:03.335504 kubelet[2666]: I0904 17:07:03.334010 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3bc96ff6-744d-455a-9a38-773fca98cdc6-registration-dir\") pod \"csi-node-driver-tbgrz\" (UID: 
\"3bc96ff6-744d-455a-9a38-773fca98cdc6\") " pod="calico-system/csi-node-driver-tbgrz" Sep 4 17:07:03.335504 kubelet[2666]: I0904 17:07:03.334058 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3bc96ff6-744d-455a-9a38-773fca98cdc6-varrun\") pod \"csi-node-driver-tbgrz\" (UID: \"3bc96ff6-744d-455a-9a38-773fca98cdc6\") " pod="calico-system/csi-node-driver-tbgrz" Sep 4 17:07:03.343909 kubelet[2666]: E0904 17:07:03.343860 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.343909 kubelet[2666]: W0904 17:07:03.343903 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.345798 kubelet[2666]: E0904 17:07:03.344961 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.345798 kubelet[2666]: W0904 17:07:03.345004 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.350882 kubelet[2666]: E0904 17:07:03.350835 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:03.358139 kubelet[2666]: E0904 17:07:03.351112 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:03.358139 kubelet[2666]: E0904 17:07:03.352247 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.358139 kubelet[2666]: W0904 17:07:03.352271 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.358139 kubelet[2666]: E0904 17:07:03.352333 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:03.358139 kubelet[2666]: E0904 17:07:03.354666 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.358139 kubelet[2666]: W0904 17:07:03.354691 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.358139 kubelet[2666]: E0904 17:07:03.354715 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:03.358139 kubelet[2666]: E0904 17:07:03.355625 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.358139 kubelet[2666]: W0904 17:07:03.355640 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.358139 kubelet[2666]: E0904 17:07:03.355661 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:03.358582 kubelet[2666]: E0904 17:07:03.356722 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.358582 kubelet[2666]: W0904 17:07:03.356736 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.358582 kubelet[2666]: E0904 17:07:03.358064 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.358582 kubelet[2666]: W0904 17:07:03.358082 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.358582 kubelet[2666]: E0904 17:07:03.358245 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:03.358582 kubelet[2666]: E0904 17:07:03.358538 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:03.358835 kubelet[2666]: E0904 17:07:03.358656 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.358835 kubelet[2666]: W0904 17:07:03.358663 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.358835 kubelet[2666]: E0904 17:07:03.358699 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:03.367448 kubelet[2666]: E0904 17:07:03.362398 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.367448 kubelet[2666]: W0904 17:07:03.362418 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.367448 kubelet[2666]: E0904 17:07:03.362435 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:03.369101 kubelet[2666]: E0904 17:07:03.369081 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.371137 kubelet[2666]: W0904 17:07:03.371090 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.371286 kubelet[2666]: E0904 17:07:03.371259 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:03.435237 kubelet[2666]: E0904 17:07:03.435137 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.435386 kubelet[2666]: W0904 17:07:03.435369 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.435451 kubelet[2666]: E0904 17:07:03.435440 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:03.435799 kubelet[2666]: E0904 17:07:03.435785 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.435888 kubelet[2666]: W0904 17:07:03.435876 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.435974 kubelet[2666]: E0904 17:07:03.435965 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:03.436183 kubelet[2666]: E0904 17:07:03.436161 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.436183 kubelet[2666]: W0904 17:07:03.436178 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.436294 kubelet[2666]: E0904 17:07:03.436204 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:03.436588 kubelet[2666]: E0904 17:07:03.436558 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.436806 kubelet[2666]: W0904 17:07:03.436572 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.436839 kubelet[2666]: E0904 17:07:03.436812 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:03.437291 kubelet[2666]: E0904 17:07:03.437271 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.437291 kubelet[2666]: W0904 17:07:03.437289 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.437410 kubelet[2666]: E0904 17:07:03.437309 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:03.437858 kubelet[2666]: E0904 17:07:03.437826 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.437858 kubelet[2666]: W0904 17:07:03.437840 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.437948 kubelet[2666]: E0904 17:07:03.437875 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:03.438019 kubelet[2666]: E0904 17:07:03.438007 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.438019 kubelet[2666]: W0904 17:07:03.438017 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.438096 kubelet[2666]: E0904 17:07:03.438065 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:03.438225 kubelet[2666]: E0904 17:07:03.438213 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.438225 kubelet[2666]: W0904 17:07:03.438224 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.438301 kubelet[2666]: E0904 17:07:03.438242 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:03.438452 kubelet[2666]: E0904 17:07:03.438442 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.438452 kubelet[2666]: W0904 17:07:03.438451 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.438530 kubelet[2666]: E0904 17:07:03.438467 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:03.438632 kubelet[2666]: E0904 17:07:03.438619 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.438632 kubelet[2666]: W0904 17:07:03.438629 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.438697 kubelet[2666]: E0904 17:07:03.438640 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:03.438810 kubelet[2666]: E0904 17:07:03.438799 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.438810 kubelet[2666]: W0904 17:07:03.438809 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.438874 kubelet[2666]: E0904 17:07:03.438824 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:03.438979 kubelet[2666]: E0904 17:07:03.438968 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.438979 kubelet[2666]: W0904 17:07:03.438978 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.439041 kubelet[2666]: E0904 17:07:03.438991 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:03.439175 kubelet[2666]: E0904 17:07:03.439163 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.439175 kubelet[2666]: W0904 17:07:03.439173 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.439238 kubelet[2666]: E0904 17:07:03.439188 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:03.439457 kubelet[2666]: E0904 17:07:03.439365 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.439457 kubelet[2666]: W0904 17:07:03.439375 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.439457 kubelet[2666]: E0904 17:07:03.439410 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:03.439580 kubelet[2666]: E0904 17:07:03.439522 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.439580 kubelet[2666]: W0904 17:07:03.439529 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.439580 kubelet[2666]: E0904 17:07:03.439560 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:03.439676 kubelet[2666]: E0904 17:07:03.439662 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.439676 kubelet[2666]: W0904 17:07:03.439669 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.439734 kubelet[2666]: E0904 17:07:03.439683 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:03.439850 kubelet[2666]: E0904 17:07:03.439836 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.439850 kubelet[2666]: W0904 17:07:03.439848 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.439932 kubelet[2666]: E0904 17:07:03.439863 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:03.440007 kubelet[2666]: E0904 17:07:03.439996 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.440007 kubelet[2666]: W0904 17:07:03.440005 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.440077 kubelet[2666]: E0904 17:07:03.440015 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:03.440205 kubelet[2666]: E0904 17:07:03.440195 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.440205 kubelet[2666]: W0904 17:07:03.440206 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.440277 kubelet[2666]: E0904 17:07:03.440223 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:03.441666 kubelet[2666]: E0904 17:07:03.441633 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.441666 kubelet[2666]: W0904 17:07:03.441651 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.441760 kubelet[2666]: E0904 17:07:03.441676 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:03.441905 kubelet[2666]: E0904 17:07:03.441893 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.441905 kubelet[2666]: W0904 17:07:03.441905 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.441973 kubelet[2666]: E0904 17:07:03.441933 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:03.442060 kubelet[2666]: E0904 17:07:03.442050 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.442060 kubelet[2666]: W0904 17:07:03.442060 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.442111 kubelet[2666]: E0904 17:07:03.442076 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:03.442313 kubelet[2666]: E0904 17:07:03.442300 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.442313 kubelet[2666]: W0904 17:07:03.442312 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.442381 kubelet[2666]: E0904 17:07:03.442329 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:03.442657 kubelet[2666]: E0904 17:07:03.442638 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.442696 kubelet[2666]: W0904 17:07:03.442657 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.442696 kubelet[2666]: E0904 17:07:03.442681 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:03.442908 kubelet[2666]: E0904 17:07:03.442892 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.442908 kubelet[2666]: W0904 17:07:03.442907 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.442959 kubelet[2666]: E0904 17:07:03.442918 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:03.456554 kubelet[2666]: E0904 17:07:03.456471 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:03.456554 kubelet[2666]: W0904 17:07:03.456492 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:03.456554 kubelet[2666]: E0904 17:07:03.456512 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:03.462186 kubelet[2666]: E0904 17:07:03.462090 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:03.469521 containerd[1545]: time="2024-09-04T17:07:03.467988925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f5977b847-qmlwn,Uid:26f688f5-f41e-4a90-8a86-cc5f01dd7199,Namespace:calico-system,Attempt:0,}" Sep 4 17:07:03.488221 containerd[1545]: time="2024-09-04T17:07:03.487980337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:07:03.488221 containerd[1545]: time="2024-09-04T17:07:03.488042378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:07:03.488221 containerd[1545]: time="2024-09-04T17:07:03.488063619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:07:03.488221 containerd[1545]: time="2024-09-04T17:07:03.488088260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:07:03.511391 kubelet[2666]: E0904 17:07:03.511356 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:03.511870 containerd[1545]: time="2024-09-04T17:07:03.511824804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jrswn,Uid:c4421213-a737-4a28-8be1-bfc870ccfc6d,Namespace:calico-system,Attempt:0,}" Sep 4 17:07:03.536750 containerd[1545]: time="2024-09-04T17:07:03.536687536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f5977b847-qmlwn,Uid:26f688f5-f41e-4a90-8a86-cc5f01dd7199,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e98105d7b9f50aba56772453bca0bc9e0ecc00f25d5f564b6e92a18d8a5fbeb\"" Sep 4 17:07:03.538055 kubelet[2666]: E0904 17:07:03.537450 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:03.540711 containerd[1545]: time="2024-09-04T17:07:03.540650033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:07:03.544441 containerd[1545]: time="2024-09-04T17:07:03.543606626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:07:03.544441 containerd[1545]: time="2024-09-04T17:07:03.543779830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:07:03.544441 containerd[1545]: time="2024-09-04T17:07:03.543808791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:07:03.544441 containerd[1545]: time="2024-09-04T17:07:03.543823231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:07:03.574802 containerd[1545]: time="2024-09-04T17:07:03.574731832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jrswn,Uid:c4421213-a737-4a28-8be1-bfc870ccfc6d,Namespace:calico-system,Attempt:0,} returns sandbox id \"fbb839990072419f38afad83267321bda37f104414003dd9c75f3ea37e327408\"" Sep 4 17:07:03.575575 kubelet[2666]: E0904 17:07:03.575556 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:04.977247 containerd[1545]: time="2024-09-04T17:07:04.977198873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:07:04.978339 containerd[1545]: time="2024-09-04T17:07:04.978151255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Sep 4 17:07:04.979232 containerd[1545]: time="2024-09-04T17:07:04.978992195Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:07:04.982633 containerd[1545]: time="2024-09-04T17:07:04.982585919Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 1.441866204s" Sep 4 17:07:04.982633 containerd[1545]: 
time="2024-09-04T17:07:04.982626400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Sep 4 17:07:04.982868 containerd[1545]: time="2024-09-04T17:07:04.982823845Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:07:04.983546 containerd[1545]: time="2024-09-04T17:07:04.983333057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:07:04.993778 containerd[1545]: time="2024-09-04T17:07:04.993720501Z" level=info msg="CreateContainer within sandbox \"4e98105d7b9f50aba56772453bca0bc9e0ecc00f25d5f564b6e92a18d8a5fbeb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:07:05.006375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3409255956.mount: Deactivated successfully. 
Sep 4 17:07:05.007848 containerd[1545]: time="2024-09-04T17:07:05.007771744Z" level=info msg="CreateContainer within sandbox \"4e98105d7b9f50aba56772453bca0bc9e0ecc00f25d5f564b6e92a18d8a5fbeb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bbc9a740d96809cae59d5864fe720097a2e5ea60e559df79ee007234109ec679\"" Sep 4 17:07:05.012405 containerd[1545]: time="2024-09-04T17:07:05.011942678Z" level=info msg="StartContainer for \"bbc9a740d96809cae59d5864fe720097a2e5ea60e559df79ee007234109ec679\"" Sep 4 17:07:05.069149 containerd[1545]: time="2024-09-04T17:07:05.069087283Z" level=info msg="StartContainer for \"bbc9a740d96809cae59d5864fe720097a2e5ea60e559df79ee007234109ec679\" returns successfully" Sep 4 17:07:05.349861 kubelet[2666]: E0904 17:07:05.349750 2666 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbgrz" podUID="3bc96ff6-744d-455a-9a38-773fca98cdc6" Sep 4 17:07:05.437077 containerd[1545]: time="2024-09-04T17:07:05.436999193Z" level=info msg="StopContainer for \"bbc9a740d96809cae59d5864fe720097a2e5ea60e559df79ee007234109ec679\" with timeout 300 (s)" Sep 4 17:07:05.442618 containerd[1545]: time="2024-09-04T17:07:05.442574319Z" level=info msg="Stop container \"bbc9a740d96809cae59d5864fe720097a2e5ea60e559df79ee007234109ec679\" with signal terminated" Sep 4 17:07:05.486946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbc9a740d96809cae59d5864fe720097a2e5ea60e559df79ee007234109ec679-rootfs.mount: Deactivated successfully. 
Sep 4 17:07:05.489084 containerd[1545]: time="2024-09-04T17:07:05.489016683Z" level=info msg="shim disconnected" id=bbc9a740d96809cae59d5864fe720097a2e5ea60e559df79ee007234109ec679 namespace=k8s.io Sep 4 17:07:05.489084 containerd[1545]: time="2024-09-04T17:07:05.489073684Z" level=warning msg="cleaning up after shim disconnected" id=bbc9a740d96809cae59d5864fe720097a2e5ea60e559df79ee007234109ec679 namespace=k8s.io Sep 4 17:07:05.489084 containerd[1545]: time="2024-09-04T17:07:05.489083244Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:07:05.507012 containerd[1545]: time="2024-09-04T17:07:05.506952326Z" level=info msg="StopContainer for \"bbc9a740d96809cae59d5864fe720097a2e5ea60e559df79ee007234109ec679\" returns successfully" Sep 4 17:07:05.509946 containerd[1545]: time="2024-09-04T17:07:05.509899272Z" level=info msg="StopPodSandbox for \"4e98105d7b9f50aba56772453bca0bc9e0ecc00f25d5f564b6e92a18d8a5fbeb\"" Sep 4 17:07:05.517246 containerd[1545]: time="2024-09-04T17:07:05.514484575Z" level=info msg="Container to stop \"bbc9a740d96809cae59d5864fe720097a2e5ea60e559df79ee007234109ec679\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:07:05.521075 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e98105d7b9f50aba56772453bca0bc9e0ecc00f25d5f564b6e92a18d8a5fbeb-shm.mount: Deactivated successfully. Sep 4 17:07:05.546685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e98105d7b9f50aba56772453bca0bc9e0ecc00f25d5f564b6e92a18d8a5fbeb-rootfs.mount: Deactivated successfully. 
Sep 4 17:07:05.553078 containerd[1545]: time="2024-09-04T17:07:05.552863398Z" level=info msg="shim disconnected" id=4e98105d7b9f50aba56772453bca0bc9e0ecc00f25d5f564b6e92a18d8a5fbeb namespace=k8s.io Sep 4 17:07:05.553078 containerd[1545]: time="2024-09-04T17:07:05.552916719Z" level=warning msg="cleaning up after shim disconnected" id=4e98105d7b9f50aba56772453bca0bc9e0ecc00f25d5f564b6e92a18d8a5fbeb namespace=k8s.io Sep 4 17:07:05.553078 containerd[1545]: time="2024-09-04T17:07:05.552925359Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:07:05.564640 containerd[1545]: time="2024-09-04T17:07:05.564463939Z" level=info msg="TearDown network for sandbox \"4e98105d7b9f50aba56772453bca0bc9e0ecc00f25d5f564b6e92a18d8a5fbeb\" successfully" Sep 4 17:07:05.564640 containerd[1545]: time="2024-09-04T17:07:05.564502180Z" level=info msg="StopPodSandbox for \"4e98105d7b9f50aba56772453bca0bc9e0ecc00f25d5f564b6e92a18d8a5fbeb\" returns successfully" Sep 4 17:07:05.581934 kubelet[2666]: I0904 17:07:05.581897 2666 topology_manager.go:215] "Topology Admit Handler" podUID="dd3c2081-8090-4f13-a4a6-a8032df72039" podNamespace="calico-system" podName="calico-typha-6855df4df8-bvk9j" Sep 4 17:07:05.582115 kubelet[2666]: E0904 17:07:05.581978 2666 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26f688f5-f41e-4a90-8a86-cc5f01dd7199" containerName="calico-typha" Sep 4 17:07:05.582115 kubelet[2666]: I0904 17:07:05.582004 2666 memory_manager.go:346] "RemoveStaleState removing state" podUID="26f688f5-f41e-4a90-8a86-cc5f01dd7199" containerName="calico-typha" Sep 4 17:07:05.647575 kubelet[2666]: E0904 17:07:05.647539 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.647575 kubelet[2666]: W0904 17:07:05.647560 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file 
not found in $PATH, output: "" Sep 4 17:07:05.647575 kubelet[2666]: E0904 17:07:05.647584 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:05.647943 kubelet[2666]: E0904 17:07:05.647726 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.647943 kubelet[2666]: W0904 17:07:05.647732 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.647943 kubelet[2666]: E0904 17:07:05.647743 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:05.647943 kubelet[2666]: E0904 17:07:05.647856 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.647943 kubelet[2666]: W0904 17:07:05.647862 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.647943 kubelet[2666]: E0904 17:07:05.647872 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:05.648088 kubelet[2666]: E0904 17:07:05.648024 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.648088 kubelet[2666]: W0904 17:07:05.648031 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.648088 kubelet[2666]: E0904 17:07:05.648040 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:05.648630 kubelet[2666]: E0904 17:07:05.648577 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.648630 kubelet[2666]: W0904 17:07:05.648596 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.648630 kubelet[2666]: E0904 17:07:05.648612 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:05.648821 kubelet[2666]: E0904 17:07:05.648803 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.648821 kubelet[2666]: W0904 17:07:05.648815 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.648870 kubelet[2666]: E0904 17:07:05.648828 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:05.648987 kubelet[2666]: E0904 17:07:05.648975 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.648987 kubelet[2666]: W0904 17:07:05.648986 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.649041 kubelet[2666]: E0904 17:07:05.648997 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:05.649160 kubelet[2666]: E0904 17:07:05.649150 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.649160 kubelet[2666]: W0904 17:07:05.649159 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.649262 kubelet[2666]: E0904 17:07:05.649169 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:05.649337 kubelet[2666]: E0904 17:07:05.649322 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.649337 kubelet[2666]: W0904 17:07:05.649332 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.649392 kubelet[2666]: E0904 17:07:05.649344 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:05.649477 kubelet[2666]: E0904 17:07:05.649467 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.649477 kubelet[2666]: W0904 17:07:05.649475 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.649590 kubelet[2666]: E0904 17:07:05.649485 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:05.649623 kubelet[2666]: E0904 17:07:05.649598 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.649623 kubelet[2666]: W0904 17:07:05.649605 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.649623 kubelet[2666]: E0904 17:07:05.649614 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:05.649740 kubelet[2666]: E0904 17:07:05.649727 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.649740 kubelet[2666]: W0904 17:07:05.649736 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.649791 kubelet[2666]: E0904 17:07:05.649745 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:05.670098 kubelet[2666]: E0904 17:07:05.670077 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.670098 kubelet[2666]: W0904 17:07:05.670100 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.670296 kubelet[2666]: E0904 17:07:05.670213 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:05.670296 kubelet[2666]: I0904 17:07:05.670261 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fjtc\" (UniqueName: \"kubernetes.io/projected/26f688f5-f41e-4a90-8a86-cc5f01dd7199-kube-api-access-6fjtc\") pod \"26f688f5-f41e-4a90-8a86-cc5f01dd7199\" (UID: \"26f688f5-f41e-4a90-8a86-cc5f01dd7199\") " Sep 4 17:07:05.670539 kubelet[2666]: E0904 17:07:05.670521 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.670539 kubelet[2666]: W0904 17:07:05.670539 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.670609 kubelet[2666]: E0904 17:07:05.670557 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:05.670609 kubelet[2666]: I0904 17:07:05.670580 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26f688f5-f41e-4a90-8a86-cc5f01dd7199-tigera-ca-bundle\") pod \"26f688f5-f41e-4a90-8a86-cc5f01dd7199\" (UID: \"26f688f5-f41e-4a90-8a86-cc5f01dd7199\") " Sep 4 17:07:05.670922 kubelet[2666]: E0904 17:07:05.670756 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.670922 kubelet[2666]: W0904 17:07:05.670769 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.670922 kubelet[2666]: E0904 17:07:05.670792 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:05.670922 kubelet[2666]: I0904 17:07:05.670814 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/26f688f5-f41e-4a90-8a86-cc5f01dd7199-typha-certs\") pod \"26f688f5-f41e-4a90-8a86-cc5f01dd7199\" (UID: \"26f688f5-f41e-4a90-8a86-cc5f01dd7199\") " Sep 4 17:07:05.671035 kubelet[2666]: E0904 17:07:05.671013 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.671035 kubelet[2666]: W0904 17:07:05.671021 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.671228 kubelet[2666]: E0904 17:07:05.671217 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:05.671312 kubelet[2666]: I0904 17:07:05.671271 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bddsl\" (UniqueName: \"kubernetes.io/projected/dd3c2081-8090-4f13-a4a6-a8032df72039-kube-api-access-bddsl\") pod \"calico-typha-6855df4df8-bvk9j\" (UID: \"dd3c2081-8090-4f13-a4a6-a8032df72039\") " pod="calico-system/calico-typha-6855df4df8-bvk9j" Sep 4 17:07:05.674684 kubelet[2666]: E0904 17:07:05.674443 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.674684 kubelet[2666]: W0904 17:07:05.674461 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.674684 kubelet[2666]: E0904 17:07:05.674507 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:05.674568 systemd[1]: var-lib-kubelet-pods-26f688f5\x2df41e\x2d4a90\x2d8a86\x2dcc5f01dd7199-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Sep 4 17:07:05.675367 kubelet[2666]: E0904 17:07:05.675341 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.675367 kubelet[2666]: W0904 17:07:05.675358 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.675460 kubelet[2666]: E0904 17:07:05.675391 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:05.675589 kubelet[2666]: E0904 17:07:05.675556 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.675589 kubelet[2666]: W0904 17:07:05.675574 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.675896 kubelet[2666]: E0904 17:07:05.675668 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:05.675896 kubelet[2666]: I0904 17:07:05.675711 2666 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26f688f5-f41e-4a90-8a86-cc5f01dd7199-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "26f688f5-f41e-4a90-8a86-cc5f01dd7199" (UID: "26f688f5-f41e-4a90-8a86-cc5f01dd7199"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:07:05.675896 kubelet[2666]: E0904 17:07:05.675766 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.675896 kubelet[2666]: W0904 17:07:05.675775 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.675896 kubelet[2666]: E0904 17:07:05.675819 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:05.676045 kubelet[2666]: E0904 17:07:05.675972 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.676045 kubelet[2666]: W0904 17:07:05.675982 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.676045 kubelet[2666]: E0904 17:07:05.676003 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:05.676045 kubelet[2666]: I0904 17:07:05.676028 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd3c2081-8090-4f13-a4a6-a8032df72039-tigera-ca-bundle\") pod \"calico-typha-6855df4df8-bvk9j\" (UID: \"dd3c2081-8090-4f13-a4a6-a8032df72039\") " pod="calico-system/calico-typha-6855df4df8-bvk9j" Sep 4 17:07:05.676687 kubelet[2666]: I0904 17:07:05.676643 2666 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26f688f5-f41e-4a90-8a86-cc5f01dd7199-kube-api-access-6fjtc" (OuterVolumeSpecName: "kube-api-access-6fjtc") pod "26f688f5-f41e-4a90-8a86-cc5f01dd7199" (UID: "26f688f5-f41e-4a90-8a86-cc5f01dd7199"). InnerVolumeSpecName "kube-api-access-6fjtc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:07:05.677228 kubelet[2666]: E0904 17:07:05.677206 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.677228 kubelet[2666]: W0904 17:07:05.677225 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.677294 kubelet[2666]: E0904 17:07:05.677241 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:05.677491 kubelet[2666]: E0904 17:07:05.677474 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.677491 kubelet[2666]: W0904 17:07:05.677488 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.677554 kubelet[2666]: E0904 17:07:05.677511 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:05.677554 kubelet[2666]: I0904 17:07:05.677533 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/dd3c2081-8090-4f13-a4a6-a8032df72039-typha-certs\") pod \"calico-typha-6855df4df8-bvk9j\" (UID: \"dd3c2081-8090-4f13-a4a6-a8032df72039\") " pod="calico-system/calico-typha-6855df4df8-bvk9j" Sep 4 17:07:05.677671 kubelet[2666]: I0904 17:07:05.677659 2666 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26f688f5-f41e-4a90-8a86-cc5f01dd7199-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 4 17:07:05.677702 kubelet[2666]: I0904 17:07:05.677678 2666 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6fjtc\" (UniqueName: \"kubernetes.io/projected/26f688f5-f41e-4a90-8a86-cc5f01dd7199-kube-api-access-6fjtc\") on node \"localhost\" DevicePath \"\"" Sep 4 17:07:05.677772 kubelet[2666]: E0904 17:07:05.677760 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.677796 kubelet[2666]: W0904 17:07:05.677773 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.677796 kubelet[2666]: E0904 17:07:05.677786 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:05.677968 kubelet[2666]: E0904 17:07:05.677947 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.677968 kubelet[2666]: W0904 17:07:05.677962 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.678028 kubelet[2666]: E0904 17:07:05.677973 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:05.678028 kubelet[2666]: I0904 17:07:05.677941 2666 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26f688f5-f41e-4a90-8a86-cc5f01dd7199-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "26f688f5-f41e-4a90-8a86-cc5f01dd7199" (UID: "26f688f5-f41e-4a90-8a86-cc5f01dd7199"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 17:07:05.678160 kubelet[2666]: E0904 17:07:05.678148 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.678160 kubelet[2666]: W0904 17:07:05.678158 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.678223 kubelet[2666]: E0904 17:07:05.678170 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:05.678353 kubelet[2666]: E0904 17:07:05.678340 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.678353 kubelet[2666]: W0904 17:07:05.678351 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.678398 kubelet[2666]: E0904 17:07:05.678361 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:05.778501 kubelet[2666]: E0904 17:07:05.778466 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.778501 kubelet[2666]: W0904 17:07:05.778490 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.778501 kubelet[2666]: E0904 17:07:05.778511 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:05.779498 kubelet[2666]: E0904 17:07:05.778753 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.779498 kubelet[2666]: W0904 17:07:05.778766 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.779498 kubelet[2666]: E0904 17:07:05.778784 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:05.779498 kubelet[2666]: E0904 17:07:05.778968 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.779498 kubelet[2666]: W0904 17:07:05.778976 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.779498 kubelet[2666]: E0904 17:07:05.778990 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:07:05.779498 kubelet[2666]: I0904 17:07:05.779026 2666 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/26f688f5-f41e-4a90-8a86-cc5f01dd7199-typha-certs\") on node \"localhost\" DevicePath \"\"" Sep 4 17:07:05.779498 kubelet[2666]: E0904 17:07:05.779191 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.779498 kubelet[2666]: W0904 17:07:05.779198 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.779498 kubelet[2666]: E0904 17:07:05.779211 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:07:05.781077 kubelet[2666]: E0904 17:07:05.779372 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:07:05.781077 kubelet[2666]: W0904 17:07:05.779379 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:07:05.781077 kubelet[2666]: E0904 17:07:05.779389 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 4 17:07:05.781077 kubelet[2666]: E0904 17:07:05.779530 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:07:05.781077 kubelet[2666]: W0904 17:07:05.779538 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:07:05.781077 kubelet[2666]: E0904 17:07:05.779553 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:07:05.781077 kubelet[2666]: E0904 17:07:05.779716 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:07:05.781077 kubelet[2666]: W0904 17:07:05.779723 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:07:05.781077 kubelet[2666]: E0904 17:07:05.779733 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:07:05.783055 kubelet[2666]: E0904 17:07:05.781748 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:07:05.783055 kubelet[2666]: W0904 17:07:05.781763 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:07:05.783055 kubelet[2666]: E0904 17:07:05.781786 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:07:05.785029 kubelet[2666]: E0904 17:07:05.783184 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:07:05.785029 kubelet[2666]: W0904 17:07:05.783200 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:07:05.785029 kubelet[2666]: E0904 17:07:05.783240 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:07:05.785029 kubelet[2666]: E0904 17:07:05.783394 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:07:05.785029 kubelet[2666]: W0904 17:07:05.783403 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:07:05.785029 kubelet[2666]: E0904 17:07:05.783420 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:07:05.785029 kubelet[2666]: E0904 17:07:05.783610 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:07:05.785029 kubelet[2666]: W0904 17:07:05.783618 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:07:05.785029 kubelet[2666]: E0904 17:07:05.783633 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:07:05.785029 kubelet[2666]: E0904 17:07:05.783778 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:07:05.785251 kubelet[2666]: W0904 17:07:05.783786 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:07:05.785251 kubelet[2666]: E0904 17:07:05.783800 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:07:05.785251 kubelet[2666]: E0904 17:07:05.783959 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:07:05.785251 kubelet[2666]: W0904 17:07:05.783967 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:07:05.785251 kubelet[2666]: E0904 17:07:05.783982 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:07:05.785251 kubelet[2666]: E0904 17:07:05.784219 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:07:05.785251 kubelet[2666]: W0904 17:07:05.784230 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:07:05.785251 kubelet[2666]: E0904 17:07:05.784244 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:07:05.785251 kubelet[2666]: E0904 17:07:05.784453 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:07:05.785251 kubelet[2666]: W0904 17:07:05.784465 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:07:05.785464 kubelet[2666]: E0904 17:07:05.784477 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:07:05.785464 kubelet[2666]: E0904 17:07:05.785107 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:07:05.785464 kubelet[2666]: W0904 17:07:05.785128 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:07:05.785464 kubelet[2666]: E0904 17:07:05.785140 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:07:05.787166 kubelet[2666]: E0904 17:07:05.786614 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:07:05.787166 kubelet[2666]: W0904 17:07:05.786629 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:07:05.787166 kubelet[2666]: E0904 17:07:05.786642 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:07:05.790193 kubelet[2666]: E0904 17:07:05.790167 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:07:05.790193 kubelet[2666]: W0904 17:07:05.790185 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:07:05.790277 kubelet[2666]: E0904 17:07:05.790201 2666 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:07:05.888450 kubelet[2666]: E0904 17:07:05.888406 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:07:05.888978 containerd[1545]: time="2024-09-04T17:07:05.888916952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6855df4df8-bvk9j,Uid:dd3c2081-8090-4f13-a4a6-a8032df72039,Namespace:calico-system,Attempt:0,}"
Sep 4 17:07:05.915927 containerd[1545]: time="2024-09-04T17:07:05.915492150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:07:05.916029 containerd[1545]: time="2024-09-04T17:07:05.915843478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:07:05.916029 containerd[1545]: time="2024-09-04T17:07:05.915875838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:07:05.916083 containerd[1545]: time="2024-09-04T17:07:05.915890999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:07:05.964116 containerd[1545]: time="2024-09-04T17:07:05.963923238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6855df4df8-bvk9j,Uid:dd3c2081-8090-4f13-a4a6-a8032df72039,Namespace:calico-system,Attempt:0,} returns sandbox id \"4acc0f8c0b8f7ec35f2f7881aa9c59e8382cba022ac4a74ee88c708a47ba35da\""
Sep 4 17:07:05.964821 kubelet[2666]: E0904 17:07:05.964795 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:07:05.980626 containerd[1545]: time="2024-09-04T17:07:05.980555612Z" level=info msg="CreateContainer within sandbox \"4acc0f8c0b8f7ec35f2f7881aa9c59e8382cba022ac4a74ee88c708a47ba35da\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 4 17:07:05.996377 containerd[1545]: time="2024-09-04T17:07:05.996320687Z" level=info msg="CreateContainer within sandbox \"4acc0f8c0b8f7ec35f2f7881aa9c59e8382cba022ac4a74ee88c708a47ba35da\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e6414c87780981c5432c38b537d0830e72f5893a27b7e06a3c6e568ba8c1238f\""
Sep 4 17:07:05.998155 containerd[1545]: time="2024-09-04T17:07:05.998096807Z" level=info msg="StartContainer for \"e6414c87780981c5432c38b537d0830e72f5893a27b7e06a3c6e568ba8c1238f\""
Sep 4 17:07:06.077954 containerd[1545]: time="2024-09-04T17:07:06.077878126Z" level=info msg="StartContainer for \"e6414c87780981c5432c38b537d0830e72f5893a27b7e06a3c6e568ba8c1238f\" returns successfully"
Sep 4 17:07:06.216892 containerd[1545]: time="2024-09-04T17:07:06.216300944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:07:06.217356 containerd[1545]: time="2024-09-04T17:07:06.217324326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957"
Sep 4 17:07:06.218485 containerd[1545]: time="2024-09-04T17:07:06.218419670Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:07:06.224745 containerd[1545]: time="2024-09-04T17:07:06.224698205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:07:06.225690 containerd[1545]: time="2024-09-04T17:07:06.225597424Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.242219046s"
Sep 4 17:07:06.225690 containerd[1545]: time="2024-09-04T17:07:06.225635665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\""
Sep 4 17:07:06.228606 containerd[1545]: time="2024-09-04T17:07:06.228559288Z" level=info msg="CreateContainer within sandbox \"fbb839990072419f38afad83267321bda37f104414003dd9c75f3ea37e327408\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 4 17:07:06.244801 containerd[1545]: time="2024-09-04T17:07:06.244675314Z" level=info msg="CreateContainer within sandbox \"fbb839990072419f38afad83267321bda37f104414003dd9c75f3ea37e327408\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"48a46007e3efac499023bca5fc69fcafdf855147806f1929dee6b1572259ce96\""
Sep 4 17:07:06.245243 containerd[1545]: time="2024-09-04T17:07:06.245216686Z" level=info msg="StartContainer for \"48a46007e3efac499023bca5fc69fcafdf855147806f1929dee6b1572259ce96\""
Sep 4 17:07:06.299626 containerd[1545]: time="2024-09-04T17:07:06.298033862Z" level=info msg="StartContainer for \"48a46007e3efac499023bca5fc69fcafdf855147806f1929dee6b1572259ce96\" returns successfully"
Sep 4 17:07:06.353628 systemd[1]: var-lib-kubelet-pods-26f688f5\x2df41e\x2d4a90\x2d8a86\x2dcc5f01dd7199-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6fjtc.mount: Deactivated successfully.
Sep 4 17:07:06.353786 systemd[1]: var-lib-kubelet-pods-26f688f5\x2df41e\x2d4a90\x2d8a86\x2dcc5f01dd7199-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully.
Sep 4 17:07:06.367294 containerd[1545]: time="2024-09-04T17:07:06.367227551Z" level=info msg="shim disconnected" id=48a46007e3efac499023bca5fc69fcafdf855147806f1929dee6b1572259ce96 namespace=k8s.io
Sep 4 17:07:06.367294 containerd[1545]: time="2024-09-04T17:07:06.367290832Z" level=warning msg="cleaning up after shim disconnected" id=48a46007e3efac499023bca5fc69fcafdf855147806f1929dee6b1572259ce96 namespace=k8s.io
Sep 4 17:07:06.367477 containerd[1545]: time="2024-09-04T17:07:06.367304993Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:07:06.440707 kubelet[2666]: I0904 17:07:06.440680 2666 scope.go:117] "RemoveContainer" containerID="bbc9a740d96809cae59d5864fe720097a2e5ea60e559df79ee007234109ec679"
Sep 4 17:07:06.443208 containerd[1545]: time="2024-09-04T17:07:06.443139024Z" level=info msg="RemoveContainer for \"bbc9a740d96809cae59d5864fe720097a2e5ea60e559df79ee007234109ec679\""
Sep 4 17:07:06.446291 kubelet[2666]: E0904 17:07:06.446263 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:07:06.447178 containerd[1545]: time="2024-09-04T17:07:06.446759222Z" level=info msg="StopPodSandbox for \"fbb839990072419f38afad83267321bda37f104414003dd9c75f3ea37e327408\""
Sep 4 17:07:06.447178 containerd[1545]: time="2024-09-04T17:07:06.446811463Z" level=info msg="Container to stop \"48a46007e3efac499023bca5fc69fcafdf855147806f1929dee6b1572259ce96\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 17:07:06.455108 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fbb839990072419f38afad83267321bda37f104414003dd9c75f3ea37e327408-shm.mount: Deactivated successfully.
Sep 4 17:07:06.482064 containerd[1545]: time="2024-09-04T17:07:06.479238161Z" level=info msg="RemoveContainer for \"bbc9a740d96809cae59d5864fe720097a2e5ea60e559df79ee007234109ec679\" returns successfully"
Sep 4 17:07:06.482198 kubelet[2666]: I0904 17:07:06.479597 2666 scope.go:117] "RemoveContainer" containerID="bbc9a740d96809cae59d5864fe720097a2e5ea60e559df79ee007234109ec679"
Sep 4 17:07:06.494950 containerd[1545]: time="2024-09-04T17:07:06.482854639Z" level=error msg="ContainerStatus for \"bbc9a740d96809cae59d5864fe720097a2e5ea60e559df79ee007234109ec679\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bbc9a740d96809cae59d5864fe720097a2e5ea60e559df79ee007234109ec679\": not found"
Sep 4 17:07:06.495081 kubelet[2666]: E0904 17:07:06.493303 2666 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bbc9a740d96809cae59d5864fe720097a2e5ea60e559df79ee007234109ec679\": not found" containerID="bbc9a740d96809cae59d5864fe720097a2e5ea60e559df79ee007234109ec679"
Sep 4 17:07:06.495081 kubelet[2666]: I0904 17:07:06.493368 2666 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bbc9a740d96809cae59d5864fe720097a2e5ea60e559df79ee007234109ec679"} err="failed to get container status \"bbc9a740d96809cae59d5864fe720097a2e5ea60e559df79ee007234109ec679\": rpc error: code = NotFound desc = an error occurred when try to find container \"bbc9a740d96809cae59d5864fe720097a2e5ea60e559df79ee007234109ec679\": not found"
Sep 4 17:07:06.508568 kubelet[2666]: I0904 17:07:06.508474 2666 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6855df4df8-bvk9j" podStartSLOduration=3.5084345089999998 podCreationTimestamp="2024-09-04 17:07:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:07:06.485340092 +0000 UTC m=+22.247260672" watchObservedRunningTime="2024-09-04 17:07:06.508434509 +0000 UTC m=+22.270355049"
Sep 4 17:07:06.521268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbb839990072419f38afad83267321bda37f104414003dd9c75f3ea37e327408-rootfs.mount: Deactivated successfully.
Sep 4 17:07:06.526665 containerd[1545]: time="2024-09-04T17:07:06.526285813Z" level=info msg="shim disconnected" id=fbb839990072419f38afad83267321bda37f104414003dd9c75f3ea37e327408 namespace=k8s.io
Sep 4 17:07:06.526665 containerd[1545]: time="2024-09-04T17:07:06.526431736Z" level=warning msg="cleaning up after shim disconnected" id=fbb839990072419f38afad83267321bda37f104414003dd9c75f3ea37e327408 namespace=k8s.io
Sep 4 17:07:06.526665 containerd[1545]: time="2024-09-04T17:07:06.526444056Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:07:06.540271 containerd[1545]: time="2024-09-04T17:07:06.540220753Z" level=info msg="TearDown network for sandbox \"fbb839990072419f38afad83267321bda37f104414003dd9c75f3ea37e327408\" successfully"
Sep 4 17:07:06.540271 containerd[1545]: time="2024-09-04T17:07:06.540261354Z" level=info msg="StopPodSandbox for \"fbb839990072419f38afad83267321bda37f104414003dd9c75f3ea37e327408\" returns successfully"
Sep 4 17:07:06.614894 kubelet[2666]: I0904 17:07:06.614242 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c4421213-a737-4a28-8be1-bfc870ccfc6d-node-certs\") pod \"c4421213-a737-4a28-8be1-bfc870ccfc6d\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") "
Sep 4 17:07:06.614894 kubelet[2666]: I0904 17:07:06.614285 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-xtables-lock\") pod \"c4421213-a737-4a28-8be1-bfc870ccfc6d\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") "
Sep 4 17:07:06.614894 kubelet[2666]: I0904 17:07:06.614308 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-var-run-calico\") pod \"c4421213-a737-4a28-8be1-bfc870ccfc6d\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") "
Sep 4 17:07:06.614894 kubelet[2666]: I0904 17:07:06.614334 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-cni-log-dir\") pod \"c4421213-a737-4a28-8be1-bfc870ccfc6d\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") "
Sep 4 17:07:06.614894 kubelet[2666]: I0904 17:07:06.614354 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-flexvol-driver-host\") pod \"c4421213-a737-4a28-8be1-bfc870ccfc6d\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") "
Sep 4 17:07:06.614894 kubelet[2666]: I0904 17:07:06.614375 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-var-lib-calico\") pod \"c4421213-a737-4a28-8be1-bfc870ccfc6d\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") "
Sep 4 17:07:06.615231 kubelet[2666]: I0904 17:07:06.614398 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4421213-a737-4a28-8be1-bfc870ccfc6d-tigera-ca-bundle\") pod \"c4421213-a737-4a28-8be1-bfc870ccfc6d\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") "
Sep 4 17:07:06.615231 kubelet[2666]: I0904 17:07:06.614414 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-policysync\") pod \"c4421213-a737-4a28-8be1-bfc870ccfc6d\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") "
Sep 4 17:07:06.615231 kubelet[2666]: I0904 17:07:06.614431 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-lib-modules\") pod \"c4421213-a737-4a28-8be1-bfc870ccfc6d\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") "
Sep 4 17:07:06.615231 kubelet[2666]: I0904 17:07:06.614454 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gbt2\" (UniqueName: \"kubernetes.io/projected/c4421213-a737-4a28-8be1-bfc870ccfc6d-kube-api-access-4gbt2\") pod \"c4421213-a737-4a28-8be1-bfc870ccfc6d\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") "
Sep 4 17:07:06.615231 kubelet[2666]: I0904 17:07:06.614472 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-cni-bin-dir\") pod \"c4421213-a737-4a28-8be1-bfc870ccfc6d\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") "
Sep 4 17:07:06.615231 kubelet[2666]: I0904 17:07:06.614489 2666 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-cni-net-dir\") pod \"c4421213-a737-4a28-8be1-bfc870ccfc6d\" (UID: \"c4421213-a737-4a28-8be1-bfc870ccfc6d\") "
Sep 4 17:07:06.615388 kubelet[2666]: I0904 17:07:06.614535 2666 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "c4421213-a737-4a28-8be1-bfc870ccfc6d" (UID: "c4421213-a737-4a28-8be1-bfc870ccfc6d"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:07:06.615388 kubelet[2666]: I0904 17:07:06.614567 2666 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c4421213-a737-4a28-8be1-bfc870ccfc6d" (UID: "c4421213-a737-4a28-8be1-bfc870ccfc6d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:07:06.615388 kubelet[2666]: I0904 17:07:06.614583 2666 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "c4421213-a737-4a28-8be1-bfc870ccfc6d" (UID: "c4421213-a737-4a28-8be1-bfc870ccfc6d"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:07:06.615388 kubelet[2666]: I0904 17:07:06.614597 2666 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "c4421213-a737-4a28-8be1-bfc870ccfc6d" (UID: "c4421213-a737-4a28-8be1-bfc870ccfc6d"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:07:06.615388 kubelet[2666]: I0904 17:07:06.614612 2666 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "c4421213-a737-4a28-8be1-bfc870ccfc6d" (UID: "c4421213-a737-4a28-8be1-bfc870ccfc6d"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:07:06.615508 kubelet[2666]: I0904 17:07:06.614627 2666 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "c4421213-a737-4a28-8be1-bfc870ccfc6d" (UID: "c4421213-a737-4a28-8be1-bfc870ccfc6d"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:07:06.615508 kubelet[2666]: I0904 17:07:06.615391 2666 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c4421213-a737-4a28-8be1-bfc870ccfc6d" (UID: "c4421213-a737-4a28-8be1-bfc870ccfc6d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:07:06.615508 kubelet[2666]: I0904 17:07:06.615428 2666 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-policysync" (OuterVolumeSpecName: "policysync") pod "c4421213-a737-4a28-8be1-bfc870ccfc6d" (UID: "c4421213-a737-4a28-8be1-bfc870ccfc6d"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:07:06.615882 kubelet[2666]: I0904 17:07:06.615707 2666 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "c4421213-a737-4a28-8be1-bfc870ccfc6d" (UID: "c4421213-a737-4a28-8be1-bfc870ccfc6d"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:07:06.615882 kubelet[2666]: I0904 17:07:06.615808 2666 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4421213-a737-4a28-8be1-bfc870ccfc6d-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "c4421213-a737-4a28-8be1-bfc870ccfc6d" (UID: "c4421213-a737-4a28-8be1-bfc870ccfc6d"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 4 17:07:06.617106 kubelet[2666]: I0904 17:07:06.617069 2666 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4421213-a737-4a28-8be1-bfc870ccfc6d-node-certs" (OuterVolumeSpecName: "node-certs") pod "c4421213-a737-4a28-8be1-bfc870ccfc6d" (UID: "c4421213-a737-4a28-8be1-bfc870ccfc6d"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 4 17:07:06.618881 systemd[1]: var-lib-kubelet-pods-c4421213\x2da737\x2d4a28\x2d8be1\x2dbfc870ccfc6d-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully.
Sep 4 17:07:06.621507 systemd[1]: var-lib-kubelet-pods-c4421213\x2da737\x2d4a28\x2d8be1\x2dbfc870ccfc6d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4gbt2.mount: Deactivated successfully.
Sep 4 17:07:06.622773 kubelet[2666]: I0904 17:07:06.622719 2666 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4421213-a737-4a28-8be1-bfc870ccfc6d-kube-api-access-4gbt2" (OuterVolumeSpecName: "kube-api-access-4gbt2") pod "c4421213-a737-4a28-8be1-bfc870ccfc6d" (UID: "c4421213-a737-4a28-8be1-bfc870ccfc6d"). InnerVolumeSpecName "kube-api-access-4gbt2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 4 17:07:06.715383 kubelet[2666]: I0904 17:07:06.715317 2666 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-var-lib-calico\") on node \"localhost\" DevicePath \"\""
Sep 4 17:07:06.715383 kubelet[2666]: I0904 17:07:06.715374 2666 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4421213-a737-4a28-8be1-bfc870ccfc6d-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\""
Sep 4 17:07:06.715383 kubelet[2666]: I0904 17:07:06.715386 2666 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-policysync\") on node \"localhost\" DevicePath \"\""
Sep 4 17:07:06.715383 kubelet[2666]: I0904 17:07:06.715396 2666 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-cni-bin-dir\") on node \"localhost\" DevicePath \"\""
Sep 4 17:07:06.715600 kubelet[2666]: I0904 17:07:06.715406 2666 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 4 17:07:06.715600 kubelet[2666]: I0904 17:07:06.715418 2666 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4gbt2\" (UniqueName: \"kubernetes.io/projected/c4421213-a737-4a28-8be1-bfc870ccfc6d-kube-api-access-4gbt2\") on node \"localhost\" DevicePath \"\""
Sep 4 17:07:06.715600 kubelet[2666]: I0904 17:07:06.715427 2666 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-cni-net-dir\") on node \"localhost\" DevicePath \"\""
Sep 4 17:07:06.715600 kubelet[2666]: I0904 17:07:06.715442 2666 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 4 17:07:06.715600 kubelet[2666]: I0904 17:07:06.715453 2666 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c4421213-a737-4a28-8be1-bfc870ccfc6d-node-certs\") on node \"localhost\" DevicePath \"\""
Sep 4 17:07:06.715600 kubelet[2666]: I0904 17:07:06.715465 2666 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-var-run-calico\") on node \"localhost\" DevicePath \"\""
Sep 4 17:07:06.715600 kubelet[2666]: I0904 17:07:06.715475 2666 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-cni-log-dir\") on node \"localhost\" DevicePath \"\""
Sep 4 17:07:06.715600 kubelet[2666]: I0904 17:07:06.715486 2666 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c4421213-a737-4a28-8be1-bfc870ccfc6d-flexvol-driver-host\") on node \"localhost\" DevicePath \"\""
Sep 4 17:07:07.348722 kubelet[2666]: E0904 17:07:07.348679 2666 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbgrz" podUID="3bc96ff6-744d-455a-9a38-773fca98cdc6"
Sep 4 17:07:07.449701 kubelet[2666]: I0904 17:07:07.449599 2666 scope.go:117] "RemoveContainer" containerID="48a46007e3efac499023bca5fc69fcafdf855147806f1929dee6b1572259ce96"
Sep 4 17:07:07.451356 containerd[1545]: time="2024-09-04T17:07:07.451008820Z" level=info msg="RemoveContainer for \"48a46007e3efac499023bca5fc69fcafdf855147806f1929dee6b1572259ce96\""
Sep 4 17:07:07.539072 containerd[1545]: time="2024-09-04T17:07:07.538420301Z" level=info msg="RemoveContainer for \"48a46007e3efac499023bca5fc69fcafdf855147806f1929dee6b1572259ce96\" returns successfully"
Sep 4 17:07:07.628262 kubelet[2666]: I0904 17:07:07.628157 2666 topology_manager.go:215] "Topology Admit Handler" podUID="692463bc-062d-41ea-84dd-9e23992935a5" podNamespace="calico-system" podName="calico-node-7m96k"
Sep 4 17:07:07.628262 kubelet[2666]: E0904 17:07:07.628239 2666 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c4421213-a737-4a28-8be1-bfc870ccfc6d" containerName="flexvol-driver"
Sep 4 17:07:07.628262 kubelet[2666]: I0904 17:07:07.628264 2666 memory_manager.go:346] "RemoveStaleState removing state" podUID="c4421213-a737-4a28-8be1-bfc870ccfc6d" containerName="flexvol-driver"
Sep 4 17:07:07.824730 kubelet[2666]: I0904 17:07:07.824679 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/692463bc-062d-41ea-84dd-9e23992935a5-xtables-lock\") pod \"calico-node-7m96k\" (UID: \"692463bc-062d-41ea-84dd-9e23992935a5\") " pod="calico-system/calico-node-7m96k"
Sep 4 17:07:07.824836 kubelet[2666]: I0904 17:07:07.824757 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/692463bc-062d-41ea-84dd-9e23992935a5-policysync\") pod \"calico-node-7m96k\" (UID: \"692463bc-062d-41ea-84dd-9e23992935a5\") " pod="calico-system/calico-node-7m96k"
Sep 4 17:07:07.824836 kubelet[2666]: I0904 17:07:07.824821 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/692463bc-062d-41ea-84dd-9e23992935a5-flexvol-driver-host\") pod \"calico-node-7m96k\" (UID: \"692463bc-062d-41ea-84dd-9e23992935a5\") " pod="calico-system/calico-node-7m96k"
Sep 4 17:07:07.824907 kubelet[2666]: I0904 17:07:07.824844 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/692463bc-062d-41ea-84dd-9e23992935a5-tigera-ca-bundle\") pod \"calico-node-7m96k\" (UID: \"692463bc-062d-41ea-84dd-9e23992935a5\") " pod="calico-system/calico-node-7m96k"
Sep 4 17:07:07.824907 kubelet[2666]: I0904 17:07:07.824864 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/692463bc-062d-41ea-84dd-9e23992935a5-cni-bin-dir\") pod \"calico-node-7m96k\" (UID: \"692463bc-062d-41ea-84dd-9e23992935a5\") " pod="calico-system/calico-node-7m96k"
Sep 4 17:07:07.824907 kubelet[2666]: I0904 17:07:07.824884 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/692463bc-062d-41ea-84dd-9e23992935a5-cni-log-dir\") pod \"calico-node-7m96k\" (UID: \"692463bc-062d-41ea-84dd-9e23992935a5\") " pod="calico-system/calico-node-7m96k"
Sep 4 17:07:07.824907 kubelet[2666]: I0904 17:07:07.824905 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/692463bc-062d-41ea-84dd-9e23992935a5-var-lib-calico\") pod \"calico-node-7m96k\" (UID: \"692463bc-062d-41ea-84dd-9e23992935a5\") " pod="calico-system/calico-node-7m96k"
Sep 4 17:07:07.825002 kubelet[2666]: I0904 17:07:07.824925 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/692463bc-062d-41ea-84dd-9e23992935a5-node-certs\") pod \"calico-node-7m96k\" (UID: \"692463bc-062d-41ea-84dd-9e23992935a5\") " pod="calico-system/calico-node-7m96k"
Sep 4 17:07:07.825002 kubelet[2666]: I0904 17:07:07.824943 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/692463bc-062d-41ea-84dd-9e23992935a5-cni-net-dir\") pod \"calico-node-7m96k\" (UID: \"692463bc-062d-41ea-84dd-9e23992935a5\") " pod="calico-system/calico-node-7m96k"
Sep 4 17:07:07.825002 kubelet[2666]: I0904 17:07:07.824962 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpddt\" (UniqueName: \"kubernetes.io/projected/692463bc-062d-41ea-84dd-9e23992935a5-kube-api-access-gpddt\") pod \"calico-node-7m96k\" (UID: \"692463bc-062d-41ea-84dd-9e23992935a5\") " pod="calico-system/calico-node-7m96k"
Sep 4 17:07:07.825002 kubelet[2666]: I0904 17:07:07.824984 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/692463bc-062d-41ea-84dd-9e23992935a5-lib-modules\") pod \"calico-node-7m96k\" (UID: \"692463bc-062d-41ea-84dd-9e23992935a5\") " pod="calico-system/calico-node-7m96k"
Sep 4 17:07:07.825002 kubelet[2666]: I0904 17:07:07.825003 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/692463bc-062d-41ea-84dd-9e23992935a5-var-run-calico\") pod \"calico-node-7m96k\" (UID: \"692463bc-062d-41ea-84dd-9e23992935a5\") " pod="calico-system/calico-node-7m96k"
Sep 4 17:07:07.938012 kubelet[2666]:
E0904 17:07:07.937980 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:07.940005 containerd[1545]: time="2024-09-04T17:07:07.939700731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7m96k,Uid:692463bc-062d-41ea-84dd-9e23992935a5,Namespace:calico-system,Attempt:0,}" Sep 4 17:07:07.959921 containerd[1545]: time="2024-09-04T17:07:07.959813225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:07:07.959921 containerd[1545]: time="2024-09-04T17:07:07.959881427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:07:07.959921 containerd[1545]: time="2024-09-04T17:07:07.959909427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:07:07.959921 containerd[1545]: time="2024-09-04T17:07:07.959922628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:07:08.029915 containerd[1545]: time="2024-09-04T17:07:08.029844084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7m96k,Uid:692463bc-062d-41ea-84dd-9e23992935a5,Namespace:calico-system,Attempt:0,} returns sandbox id \"92e3423a2d405368bab2da70df2ac458fb948416c0b92d4199d4ac1dfee02e64\"" Sep 4 17:07:08.030565 kubelet[2666]: E0904 17:07:08.030535 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:08.033436 containerd[1545]: time="2024-09-04T17:07:08.033278752Z" level=info msg="CreateContainer within sandbox \"92e3423a2d405368bab2da70df2ac458fb948416c0b92d4199d4ac1dfee02e64\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:07:08.047803 containerd[1545]: time="2024-09-04T17:07:08.047746918Z" level=info msg="CreateContainer within sandbox \"92e3423a2d405368bab2da70df2ac458fb948416c0b92d4199d4ac1dfee02e64\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"70187a73aff9aa7c504589934ad4fb614a26d0e1ae2a12b4a6ff3d390b5a6677\"" Sep 4 17:07:08.048306 containerd[1545]: time="2024-09-04T17:07:08.048271848Z" level=info msg="StartContainer for \"70187a73aff9aa7c504589934ad4fb614a26d0e1ae2a12b4a6ff3d390b5a6677\"" Sep 4 17:07:08.099691 containerd[1545]: time="2024-09-04T17:07:08.099581982Z" level=info msg="StartContainer for \"70187a73aff9aa7c504589934ad4fb614a26d0e1ae2a12b4a6ff3d390b5a6677\" returns successfully" Sep 4 17:07:08.144151 containerd[1545]: time="2024-09-04T17:07:08.143932218Z" level=info msg="shim disconnected" id=70187a73aff9aa7c504589934ad4fb614a26d0e1ae2a12b4a6ff3d390b5a6677 namespace=k8s.io Sep 4 17:07:08.144151 containerd[1545]: time="2024-09-04T17:07:08.144017220Z" level=warning msg="cleaning up after shim disconnected" 
id=70187a73aff9aa7c504589934ad4fb614a26d0e1ae2a12b4a6ff3d390b5a6677 namespace=k8s.io Sep 4 17:07:08.144151 containerd[1545]: time="2024-09-04T17:07:08.144049901Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:07:08.351199 kubelet[2666]: I0904 17:07:08.351091 2666 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="26f688f5-f41e-4a90-8a86-cc5f01dd7199" path="/var/lib/kubelet/pods/26f688f5-f41e-4a90-8a86-cc5f01dd7199/volumes" Sep 4 17:07:08.351870 kubelet[2666]: I0904 17:07:08.351780 2666 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c4421213-a737-4a28-8be1-bfc870ccfc6d" path="/var/lib/kubelet/pods/c4421213-a737-4a28-8be1-bfc870ccfc6d/volumes" Sep 4 17:07:08.464091 kubelet[2666]: E0904 17:07:08.464047 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:08.465482 containerd[1545]: time="2024-09-04T17:07:08.465177366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:07:09.348560 kubelet[2666]: E0904 17:07:09.348514 2666 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbgrz" podUID="3bc96ff6-744d-455a-9a38-773fca98cdc6" Sep 4 17:07:11.110976 containerd[1545]: time="2024-09-04T17:07:11.110930538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:07:11.111847 containerd[1545]: time="2024-09-04T17:07:11.111515909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Sep 4 17:07:11.112598 containerd[1545]: time="2024-09-04T17:07:11.112557127Z" level=info msg="ImageCreate event 
name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:07:11.114677 containerd[1545]: time="2024-09-04T17:07:11.114642483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:07:11.116034 containerd[1545]: time="2024-09-04T17:07:11.115993987Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 2.6507733s" Sep 4 17:07:11.116098 containerd[1545]: time="2024-09-04T17:07:11.116033148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Sep 4 17:07:11.119049 containerd[1545]: time="2024-09-04T17:07:11.119015160Z" level=info msg="CreateContainer within sandbox \"92e3423a2d405368bab2da70df2ac458fb948416c0b92d4199d4ac1dfee02e64\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:07:11.129283 containerd[1545]: time="2024-09-04T17:07:11.129180938Z" level=info msg="CreateContainer within sandbox \"92e3423a2d405368bab2da70df2ac458fb948416c0b92d4199d4ac1dfee02e64\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1f783732dcc36ffdf0ad8b34562283c8a7c20bdbcf9cff78f7975d22bc6e05a5\"" Sep 4 17:07:11.130908 containerd[1545]: time="2024-09-04T17:07:11.130875128Z" level=info msg="StartContainer for \"1f783732dcc36ffdf0ad8b34562283c8a7c20bdbcf9cff78f7975d22bc6e05a5\"" Sep 4 17:07:11.189423 containerd[1545]: time="2024-09-04T17:07:11.189347192Z" level=info msg="StartContainer for 
\"1f783732dcc36ffdf0ad8b34562283c8a7c20bdbcf9cff78f7975d22bc6e05a5\" returns successfully" Sep 4 17:07:11.348460 kubelet[2666]: E0904 17:07:11.348421 2666 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbgrz" podUID="3bc96ff6-744d-455a-9a38-773fca98cdc6" Sep 4 17:07:11.471949 kubelet[2666]: E0904 17:07:11.471920 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:12.044554 containerd[1545]: time="2024-09-04T17:07:12.044500343Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:07:12.062328 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f783732dcc36ffdf0ad8b34562283c8a7c20bdbcf9cff78f7975d22bc6e05a5-rootfs.mount: Deactivated successfully. 
Sep 4 17:07:12.063785 kubelet[2666]: I0904 17:07:12.063762 2666 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Sep 4 17:07:12.072879 containerd[1545]: time="2024-09-04T17:07:12.072641218Z" level=info msg="shim disconnected" id=1f783732dcc36ffdf0ad8b34562283c8a7c20bdbcf9cff78f7975d22bc6e05a5 namespace=k8s.io Sep 4 17:07:12.072879 containerd[1545]: time="2024-09-04T17:07:12.072712419Z" level=warning msg="cleaning up after shim disconnected" id=1f783732dcc36ffdf0ad8b34562283c8a7c20bdbcf9cff78f7975d22bc6e05a5 namespace=k8s.io Sep 4 17:07:12.072879 containerd[1545]: time="2024-09-04T17:07:12.072721459Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:07:12.090909 kubelet[2666]: I0904 17:07:12.090762 2666 topology_manager.go:215] "Topology Admit Handler" podUID="5b7b310b-92a2-4d6e-b549-69ce5288993d" podNamespace="kube-system" podName="coredns-5dd5756b68-q2cb6" Sep 4 17:07:12.093062 kubelet[2666]: I0904 17:07:12.092512 2666 topology_manager.go:215] "Topology Admit Handler" podUID="79483386-60ca-499b-a7a4-f16b7727423c" podNamespace="kube-system" podName="coredns-5dd5756b68-kff8p" Sep 4 17:07:12.093955 kubelet[2666]: I0904 17:07:12.093920 2666 topology_manager.go:215] "Topology Admit Handler" podUID="5ff23458-425e-4765-99fb-5da7a5135579" podNamespace="calico-system" podName="calico-kube-controllers-589588c958-qszlt" Sep 4 17:07:12.264343 kubelet[2666]: I0904 17:07:12.264307 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl8j7\" (UniqueName: \"kubernetes.io/projected/79483386-60ca-499b-a7a4-f16b7727423c-kube-api-access-dl8j7\") pod \"coredns-5dd5756b68-kff8p\" (UID: \"79483386-60ca-499b-a7a4-f16b7727423c\") " pod="kube-system/coredns-5dd5756b68-kff8p" Sep 4 17:07:12.264713 kubelet[2666]: I0904 17:07:12.264494 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5ff23458-425e-4765-99fb-5da7a5135579-tigera-ca-bundle\") pod \"calico-kube-controllers-589588c958-qszlt\" (UID: \"5ff23458-425e-4765-99fb-5da7a5135579\") " pod="calico-system/calico-kube-controllers-589588c958-qszlt" Sep 4 17:07:12.264713 kubelet[2666]: I0904 17:07:12.264533 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79483386-60ca-499b-a7a4-f16b7727423c-config-volume\") pod \"coredns-5dd5756b68-kff8p\" (UID: \"79483386-60ca-499b-a7a4-f16b7727423c\") " pod="kube-system/coredns-5dd5756b68-kff8p" Sep 4 17:07:12.264713 kubelet[2666]: I0904 17:07:12.264566 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b7b310b-92a2-4d6e-b549-69ce5288993d-config-volume\") pod \"coredns-5dd5756b68-q2cb6\" (UID: \"5b7b310b-92a2-4d6e-b549-69ce5288993d\") " pod="kube-system/coredns-5dd5756b68-q2cb6" Sep 4 17:07:12.264713 kubelet[2666]: I0904 17:07:12.264638 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dxgw\" (UniqueName: \"kubernetes.io/projected/5ff23458-425e-4765-99fb-5da7a5135579-kube-api-access-2dxgw\") pod \"calico-kube-controllers-589588c958-qszlt\" (UID: \"5ff23458-425e-4765-99fb-5da7a5135579\") " pod="calico-system/calico-kube-controllers-589588c958-qszlt" Sep 4 17:07:12.264713 kubelet[2666]: I0904 17:07:12.264675 2666 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn48f\" (UniqueName: \"kubernetes.io/projected/5b7b310b-92a2-4d6e-b549-69ce5288993d-kube-api-access-vn48f\") pod \"coredns-5dd5756b68-q2cb6\" (UID: \"5b7b310b-92a2-4d6e-b549-69ce5288993d\") " pod="kube-system/coredns-5dd5756b68-q2cb6" Sep 4 17:07:12.396321 kubelet[2666]: E0904 17:07:12.396288 2666 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:12.397369 containerd[1545]: time="2024-09-04T17:07:12.396976926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-q2cb6,Uid:5b7b310b-92a2-4d6e-b549-69ce5288993d,Namespace:kube-system,Attempt:0,}" Sep 4 17:07:12.397764 kubelet[2666]: E0904 17:07:12.397645 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:12.398093 containerd[1545]: time="2024-09-04T17:07:12.398052584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-kff8p,Uid:79483386-60ca-499b-a7a4-f16b7727423c,Namespace:kube-system,Attempt:0,}" Sep 4 17:07:12.400311 containerd[1545]: time="2024-09-04T17:07:12.400275982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589588c958-qszlt,Uid:5ff23458-425e-4765-99fb-5da7a5135579,Namespace:calico-system,Attempt:0,}" Sep 4 17:07:12.492949 kubelet[2666]: E0904 17:07:12.492410 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:12.499822 containerd[1545]: time="2024-09-04T17:07:12.499725139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 17:07:12.716464 containerd[1545]: time="2024-09-04T17:07:12.716349511Z" level=error msg="Failed to destroy network for sandbox \"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:07:12.716949 containerd[1545]: time="2024-09-04T17:07:12.716893641Z" level=error msg="encountered an error cleaning up 
failed sandbox \"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:07:12.717090 containerd[1545]: time="2024-09-04T17:07:12.716951722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589588c958-qszlt,Uid:5ff23458-425e-4765-99fb-5da7a5135579,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:07:12.717232 kubelet[2666]: E0904 17:07:12.717200 2666 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:07:12.717298 kubelet[2666]: E0904 17:07:12.717269 2666 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589588c958-qszlt" Sep 4 17:07:12.717298 kubelet[2666]: E0904 17:07:12.717290 2666 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589588c958-qszlt" Sep 4 17:07:12.717362 kubelet[2666]: E0904 17:07:12.717349 2666 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-589588c958-qszlt_calico-system(5ff23458-425e-4765-99fb-5da7a5135579)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-589588c958-qszlt_calico-system(5ff23458-425e-4765-99fb-5da7a5135579)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-589588c958-qszlt" podUID="5ff23458-425e-4765-99fb-5da7a5135579" Sep 4 17:07:12.717815 containerd[1545]: time="2024-09-04T17:07:12.717780976Z" level=error msg="Failed to destroy network for sandbox \"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:07:12.718397 containerd[1545]: time="2024-09-04T17:07:12.718364785Z" level=error msg="encountered an error cleaning up failed sandbox \"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Sep 4 17:07:12.718451 containerd[1545]: time="2024-09-04T17:07:12.718421146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-q2cb6,Uid:5b7b310b-92a2-4d6e-b549-69ce5288993d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:07:12.718839 containerd[1545]: time="2024-09-04T17:07:12.718782393Z" level=error msg="Failed to destroy network for sandbox \"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:07:12.718973 kubelet[2666]: E0904 17:07:12.718953 2666 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:07:12.719013 kubelet[2666]: E0904 17:07:12.718998 2666 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-q2cb6" Sep 4 17:07:12.719051 kubelet[2666]: E0904 17:07:12.719016 2666 kuberuntime_manager.go:1171] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-q2cb6" Sep 4 17:07:12.719094 kubelet[2666]: E0904 17:07:12.719077 2666 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-q2cb6_kube-system(5b7b310b-92a2-4d6e-b549-69ce5288993d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-q2cb6_kube-system(5b7b310b-92a2-4d6e-b549-69ce5288993d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-q2cb6" podUID="5b7b310b-92a2-4d6e-b549-69ce5288993d" Sep 4 17:07:12.719259 containerd[1545]: time="2024-09-04T17:07:12.719073877Z" level=error msg="encountered an error cleaning up failed sandbox \"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:07:12.719259 containerd[1545]: time="2024-09-04T17:07:12.719112878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-kff8p,Uid:79483386-60ca-499b-a7a4-f16b7727423c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:07:12.719327 kubelet[2666]: E0904 17:07:12.719306 2666 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:07:12.719356 kubelet[2666]: E0904 17:07:12.719337 2666 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-kff8p" Sep 4 17:07:12.719424 kubelet[2666]: E0904 17:07:12.719355 2666 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-kff8p" Sep 4 17:07:12.719480 kubelet[2666]: E0904 17:07:12.719461 2666 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-kff8p_kube-system(79483386-60ca-499b-a7a4-f16b7727423c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-5dd5756b68-kff8p_kube-system(79483386-60ca-499b-a7a4-f16b7727423c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-kff8p" podUID="79483386-60ca-499b-a7a4-f16b7727423c" Sep 4 17:07:13.350936 containerd[1545]: time="2024-09-04T17:07:13.350889315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbgrz,Uid:3bc96ff6-744d-455a-9a38-773fca98cdc6,Namespace:calico-system,Attempt:0,}" Sep 4 17:07:13.378779 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011-shm.mount: Deactivated successfully. Sep 4 17:07:13.378972 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9-shm.mount: Deactivated successfully. Sep 4 17:07:13.379055 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571-shm.mount: Deactivated successfully. 
Sep 4 17:07:13.398745 containerd[1545]: time="2024-09-04T17:07:13.398607050Z" level=error msg="Failed to destroy network for sandbox \"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:07:13.399432 containerd[1545]: time="2024-09-04T17:07:13.399288862Z" level=error msg="encountered an error cleaning up failed sandbox \"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:07:13.399432 containerd[1545]: time="2024-09-04T17:07:13.399338982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbgrz,Uid:3bc96ff6-744d-455a-9a38-773fca98cdc6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:07:13.400309 kubelet[2666]: E0904 17:07:13.400281 2666 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:07:13.400567 kubelet[2666]: E0904 17:07:13.400334 2666 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tbgrz" Sep 4 17:07:13.400567 kubelet[2666]: E0904 17:07:13.400355 2666 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tbgrz" Sep 4 17:07:13.400567 kubelet[2666]: E0904 17:07:13.400404 2666 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tbgrz_calico-system(3bc96ff6-744d-455a-9a38-773fca98cdc6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tbgrz_calico-system(3bc96ff6-744d-455a-9a38-773fca98cdc6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tbgrz" podUID="3bc96ff6-744d-455a-9a38-773fca98cdc6" Sep 4 17:07:13.400611 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb-shm.mount: Deactivated successfully. 
Sep 4 17:07:13.497239 kubelet[2666]: I0904 17:07:13.496905 2666 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Sep 4 17:07:13.497678 containerd[1545]: time="2024-09-04T17:07:13.497643900Z" level=info msg="StopPodSandbox for \"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\"" Sep 4 17:07:13.497968 containerd[1545]: time="2024-09-04T17:07:13.497912064Z" level=info msg="Ensure that sandbox 6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9 in task-service has been cleanup successfully" Sep 4 17:07:13.498272 kubelet[2666]: I0904 17:07:13.498249 2666 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Sep 4 17:07:13.499371 containerd[1545]: time="2024-09-04T17:07:13.499067363Z" level=info msg="StopPodSandbox for \"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\"" Sep 4 17:07:13.499371 containerd[1545]: time="2024-09-04T17:07:13.499301886Z" level=info msg="Ensure that sandbox b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011 in task-service has been cleanup successfully" Sep 4 17:07:13.502010 kubelet[2666]: I0904 17:07:13.501916 2666 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Sep 4 17:07:13.502583 containerd[1545]: time="2024-09-04T17:07:13.502333016Z" level=info msg="StopPodSandbox for \"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\"" Sep 4 17:07:13.505314 kubelet[2666]: I0904 17:07:13.504090 2666 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Sep 4 17:07:13.505389 containerd[1545]: time="2024-09-04T17:07:13.504731655Z" level=info msg="StopPodSandbox for 
\"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\"" Sep 4 17:07:13.505389 containerd[1545]: time="2024-09-04T17:07:13.504909338Z" level=info msg="Ensure that sandbox 42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb in task-service has been cleanup successfully" Sep 4 17:07:13.506676 containerd[1545]: time="2024-09-04T17:07:13.506576085Z" level=info msg="Ensure that sandbox 76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571 in task-service has been cleanup successfully" Sep 4 17:07:13.538055 containerd[1545]: time="2024-09-04T17:07:13.537725431Z" level=error msg="StopPodSandbox for \"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\" failed" error="failed to destroy network for sandbox \"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:07:13.543156 kubelet[2666]: E0904 17:07:13.542959 2666 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Sep 4 17:07:13.543156 kubelet[2666]: E0904 17:07:13.543031 2666 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9"} Sep 4 17:07:13.543156 kubelet[2666]: E0904 17:07:13.543069 2666 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"79483386-60ca-499b-a7a4-f16b7727423c\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:07:13.543156 kubelet[2666]: E0904 17:07:13.543099 2666 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"79483386-60ca-499b-a7a4-f16b7727423c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-kff8p" podUID="79483386-60ca-499b-a7a4-f16b7727423c" Sep 4 17:07:13.545497 containerd[1545]: time="2024-09-04T17:07:13.545265793Z" level=error msg="StopPodSandbox for \"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\" failed" error="failed to destroy network for sandbox \"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:07:13.546086 kubelet[2666]: E0904 17:07:13.545496 2666 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" 
Sep 4 17:07:13.546086 kubelet[2666]: E0904 17:07:13.545531 2666 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571"} Sep 4 17:07:13.546086 kubelet[2666]: E0904 17:07:13.545569 2666 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5b7b310b-92a2-4d6e-b549-69ce5288993d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:07:13.546086 kubelet[2666]: E0904 17:07:13.545596 2666 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5b7b310b-92a2-4d6e-b549-69ce5288993d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-q2cb6" podUID="5b7b310b-92a2-4d6e-b549-69ce5288993d" Sep 4 17:07:13.555804 containerd[1545]: time="2024-09-04T17:07:13.555443879Z" level=error msg="StopPodSandbox for \"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\" failed" error="failed to destroy network for sandbox \"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:07:13.555899 kubelet[2666]: E0904 17:07:13.555682 2666 
remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Sep 4 17:07:13.555899 kubelet[2666]: E0904 17:07:13.555718 2666 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011"} Sep 4 17:07:13.555899 kubelet[2666]: E0904 17:07:13.555749 2666 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5ff23458-425e-4765-99fb-5da7a5135579\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:07:13.555899 kubelet[2666]: E0904 17:07:13.555778 2666 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5ff23458-425e-4765-99fb-5da7a5135579\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-589588c958-qszlt" podUID="5ff23458-425e-4765-99fb-5da7a5135579" Sep 4 17:07:13.558152 containerd[1545]: time="2024-09-04T17:07:13.557726236Z" 
level=error msg="StopPodSandbox for \"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\" failed" error="failed to destroy network for sandbox \"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:07:13.558227 kubelet[2666]: E0904 17:07:13.557902 2666 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Sep 4 17:07:13.558227 kubelet[2666]: E0904 17:07:13.557948 2666 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb"} Sep 4 17:07:13.558227 kubelet[2666]: E0904 17:07:13.557974 2666 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3bc96ff6-744d-455a-9a38-773fca98cdc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:07:13.558227 kubelet[2666]: E0904 17:07:13.558014 2666 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3bc96ff6-744d-455a-9a38-773fca98cdc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tbgrz" podUID="3bc96ff6-744d-455a-9a38-773fca98cdc6" Sep 4 17:07:15.806956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2810918908.mount: Deactivated successfully. Sep 4 17:07:15.895532 containerd[1545]: time="2024-09-04T17:07:15.895058434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:07:15.895532 containerd[1545]: time="2024-09-04T17:07:15.895470320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Sep 4 17:07:15.896239 containerd[1545]: time="2024-09-04T17:07:15.896203971Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:07:15.898051 containerd[1545]: time="2024-09-04T17:07:15.898016959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:07:15.899443 containerd[1545]: time="2024-09-04T17:07:15.899407380Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 3.39961968s" Sep 4 17:07:15.899443 containerd[1545]: time="2024-09-04T17:07:15.899447300Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Sep 4 17:07:15.906957 containerd[1545]: time="2024-09-04T17:07:15.906910093Z" level=info msg="CreateContainer within sandbox \"92e3423a2d405368bab2da70df2ac458fb948416c0b92d4199d4ac1dfee02e64\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 17:07:15.921612 containerd[1545]: time="2024-09-04T17:07:15.921561755Z" level=info msg="CreateContainer within sandbox \"92e3423a2d405368bab2da70df2ac458fb948416c0b92d4199d4ac1dfee02e64\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"aa2902c5a8a7f8faa78e6618941758a980a3d98ab832f7fe9effbdb88ed81747\"" Sep 4 17:07:15.921640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3657206982.mount: Deactivated successfully. Sep 4 17:07:15.922139 containerd[1545]: time="2024-09-04T17:07:15.922090443Z" level=info msg="StartContainer for \"aa2902c5a8a7f8faa78e6618941758a980a3d98ab832f7fe9effbdb88ed81747\"" Sep 4 17:07:15.999538 containerd[1545]: time="2024-09-04T17:07:15.999402373Z" level=info msg="StartContainer for \"aa2902c5a8a7f8faa78e6618941758a980a3d98ab832f7fe9effbdb88ed81747\" returns successfully" Sep 4 17:07:16.233616 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 17:07:16.233777 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 4 17:07:16.515589 kubelet[2666]: E0904 17:07:16.515429 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:16.527943 kubelet[2666]: I0904 17:07:16.527858 2666 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-7m96k" podStartSLOduration=2.092835754 podCreationTimestamp="2024-09-04 17:07:07 +0000 UTC" firstStartedPulling="2024-09-04 17:07:08.464712477 +0000 UTC m=+24.226633057" lastFinishedPulling="2024-09-04 17:07:15.899685264 +0000 UTC m=+31.661605844" observedRunningTime="2024-09-04 17:07:16.526881767 +0000 UTC m=+32.288802387" watchObservedRunningTime="2024-09-04 17:07:16.527808541 +0000 UTC m=+32.289729121" Sep 4 17:07:20.222625 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:55702.service - OpenSSH per-connection server daemon (10.0.0.1:55702). Sep 4 17:07:20.261703 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 55702 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:07:20.263099 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:07:20.267054 systemd-logind[1525]: New session 8 of user core. Sep 4 17:07:20.277461 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:07:20.524395 sshd[4195]: pam_unix(sshd:session): session closed for user core Sep 4 17:07:20.528076 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:55702.service: Deactivated successfully. Sep 4 17:07:20.530383 systemd-logind[1525]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:07:20.530629 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:07:20.532468 systemd-logind[1525]: Removed session 8. 
Sep 4 17:07:24.349337 containerd[1545]: time="2024-09-04T17:07:24.349269569Z" level=info msg="StopPodSandbox for \"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\"" Sep 4 17:07:24.599177 containerd[1545]: 2024-09-04 17:07:24.461 [INFO][4323] k8s.go 608: Cleaning up netns ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Sep 4 17:07:24.599177 containerd[1545]: 2024-09-04 17:07:24.462 [INFO][4323] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" iface="eth0" netns="/var/run/netns/cni-8963b07c-8010-32e3-d247-74cc661f26a0" Sep 4 17:07:24.599177 containerd[1545]: 2024-09-04 17:07:24.463 [INFO][4323] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" iface="eth0" netns="/var/run/netns/cni-8963b07c-8010-32e3-d247-74cc661f26a0" Sep 4 17:07:24.599177 containerd[1545]: 2024-09-04 17:07:24.465 [INFO][4323] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" iface="eth0" netns="/var/run/netns/cni-8963b07c-8010-32e3-d247-74cc661f26a0" Sep 4 17:07:24.599177 containerd[1545]: 2024-09-04 17:07:24.465 [INFO][4323] k8s.go 615: Releasing IP address(es) ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Sep 4 17:07:24.599177 containerd[1545]: 2024-09-04 17:07:24.465 [INFO][4323] utils.go 188: Calico CNI releasing IP address ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Sep 4 17:07:24.599177 containerd[1545]: 2024-09-04 17:07:24.582 [INFO][4331] ipam_plugin.go 417: Releasing address using handleID ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" HandleID="k8s-pod-network.b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Workload="localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0" Sep 4 17:07:24.599177 containerd[1545]: 2024-09-04 17:07:24.583 [INFO][4331] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:07:24.599177 containerd[1545]: 2024-09-04 17:07:24.583 [INFO][4331] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:07:24.599177 containerd[1545]: 2024-09-04 17:07:24.592 [WARNING][4331] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" HandleID="k8s-pod-network.b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Workload="localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0" Sep 4 17:07:24.599177 containerd[1545]: 2024-09-04 17:07:24.592 [INFO][4331] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" HandleID="k8s-pod-network.b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Workload="localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0" Sep 4 17:07:24.599177 containerd[1545]: 2024-09-04 17:07:24.594 [INFO][4331] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:07:24.599177 containerd[1545]: 2024-09-04 17:07:24.596 [INFO][4323] k8s.go 621: Teardown processing complete. ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Sep 4 17:07:24.600571 systemd[1]: run-netns-cni\x2d8963b07c\x2d8010\x2d32e3\x2dd247\x2d74cc661f26a0.mount: Deactivated successfully. 
Sep 4 17:07:24.600917 containerd[1545]: time="2024-09-04T17:07:24.600851157Z" level=info msg="TearDown network for sandbox \"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\" successfully" Sep 4 17:07:24.600917 containerd[1545]: time="2024-09-04T17:07:24.600883917Z" level=info msg="StopPodSandbox for \"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\" returns successfully" Sep 4 17:07:24.601838 containerd[1545]: time="2024-09-04T17:07:24.601470484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589588c958-qszlt,Uid:5ff23458-425e-4765-99fb-5da7a5135579,Namespace:calico-system,Attempt:1,}" Sep 4 17:07:24.732778 systemd-networkd[1238]: cali0063b7323ac: Link UP Sep 4 17:07:24.732915 systemd-networkd[1238]: cali0063b7323ac: Gained carrier Sep 4 17:07:24.748451 containerd[1545]: 2024-09-04 17:07:24.658 [INFO][4340] utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 17:07:24.748451 containerd[1545]: 2024-09-04 17:07:24.670 [INFO][4340] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0 calico-kube-controllers-589588c958- calico-system 5ff23458-425e-4765-99fb-5da7a5135579 818 0 2024-09-04 17:07:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:589588c958 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-589588c958-qszlt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0063b7323ac [] []}} ContainerID="9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8" Namespace="calico-system" Pod="calico-kube-controllers-589588c958-qszlt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589588c958--qszlt-" Sep 4 17:07:24.748451 
containerd[1545]: 2024-09-04 17:07:24.670 [INFO][4340] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8" Namespace="calico-system" Pod="calico-kube-controllers-589588c958-qszlt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0" Sep 4 17:07:24.748451 containerd[1545]: 2024-09-04 17:07:24.694 [INFO][4353] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8" HandleID="k8s-pod-network.9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8" Workload="localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0" Sep 4 17:07:24.748451 containerd[1545]: 2024-09-04 17:07:24.704 [INFO][4353] ipam_plugin.go 270: Auto assigning IP ContainerID="9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8" HandleID="k8s-pod-network.9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8" Workload="localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e5630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-589588c958-qszlt", "timestamp":"2024-09-04 17:07:24.694276877 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:07:24.748451 containerd[1545]: 2024-09-04 17:07:24.704 [INFO][4353] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:07:24.748451 containerd[1545]: 2024-09-04 17:07:24.704 [INFO][4353] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:07:24.748451 containerd[1545]: 2024-09-04 17:07:24.705 [INFO][4353] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:07:24.748451 containerd[1545]: 2024-09-04 17:07:24.706 [INFO][4353] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8" host="localhost" Sep 4 17:07:24.748451 containerd[1545]: 2024-09-04 17:07:24.710 [INFO][4353] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:07:24.748451 containerd[1545]: 2024-09-04 17:07:24.714 [INFO][4353] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:07:24.748451 containerd[1545]: 2024-09-04 17:07:24.715 [INFO][4353] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:07:24.748451 containerd[1545]: 2024-09-04 17:07:24.717 [INFO][4353] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:07:24.748451 containerd[1545]: 2024-09-04 17:07:24.717 [INFO][4353] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8" host="localhost" Sep 4 17:07:24.748451 containerd[1545]: 2024-09-04 17:07:24.718 [INFO][4353] ipam.go 1685: Creating new handle: k8s-pod-network.9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8 Sep 4 17:07:24.748451 containerd[1545]: 2024-09-04 17:07:24.721 [INFO][4353] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8" host="localhost" Sep 4 17:07:24.748451 containerd[1545]: 2024-09-04 17:07:24.725 [INFO][4353] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8" host="localhost" Sep 4 
17:07:24.748451 containerd[1545]: 2024-09-04 17:07:24.725 [INFO][4353] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8" host="localhost" Sep 4 17:07:24.748451 containerd[1545]: 2024-09-04 17:07:24.725 [INFO][4353] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:07:24.748451 containerd[1545]: 2024-09-04 17:07:24.725 [INFO][4353] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8" HandleID="k8s-pod-network.9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8" Workload="localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0" Sep 4 17:07:24.748998 containerd[1545]: 2024-09-04 17:07:24.727 [INFO][4340] k8s.go 386: Populated endpoint ContainerID="9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8" Namespace="calico-system" Pod="calico-kube-controllers-589588c958-qszlt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0", GenerateName:"calico-kube-controllers-589588c958-", Namespace:"calico-system", SelfLink:"", UID:"5ff23458-425e-4765-99fb-5da7a5135579", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 7, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"589588c958", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-589588c958-qszlt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0063b7323ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:07:24.748998 containerd[1545]: 2024-09-04 17:07:24.727 [INFO][4340] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8" Namespace="calico-system" Pod="calico-kube-controllers-589588c958-qszlt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0" Sep 4 17:07:24.748998 containerd[1545]: 2024-09-04 17:07:24.727 [INFO][4340] dataplane_linux.go 68: Setting the host side veth name to cali0063b7323ac ContainerID="9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8" Namespace="calico-system" Pod="calico-kube-controllers-589588c958-qszlt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0" Sep 4 17:07:24.748998 containerd[1545]: 2024-09-04 17:07:24.732 [INFO][4340] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8" Namespace="calico-system" Pod="calico-kube-controllers-589588c958-qszlt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0" Sep 4 17:07:24.748998 containerd[1545]: 2024-09-04 17:07:24.733 [INFO][4340] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8" Namespace="calico-system" 
Pod="calico-kube-controllers-589588c958-qszlt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0", GenerateName:"calico-kube-controllers-589588c958-", Namespace:"calico-system", SelfLink:"", UID:"5ff23458-425e-4765-99fb-5da7a5135579", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 7, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"589588c958", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8", Pod:"calico-kube-controllers-589588c958-qszlt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0063b7323ac", MAC:"2e:6f:54:de:34:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:07:24.748998 containerd[1545]: 2024-09-04 17:07:24.741 [INFO][4340] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8" Namespace="calico-system" Pod="calico-kube-controllers-589588c958-qszlt" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0" Sep 4 17:07:24.766730 containerd[1545]: time="2024-09-04T17:07:24.766345550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:07:24.766730 containerd[1545]: time="2024-09-04T17:07:24.766395590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:07:24.766730 containerd[1545]: time="2024-09-04T17:07:24.766409430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:07:24.766730 containerd[1545]: time="2024-09-04T17:07:24.766420351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:07:24.788819 systemd-resolved[1449]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:07:24.825765 containerd[1545]: time="2024-09-04T17:07:24.825692156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589588c958-qszlt,Uid:5ff23458-425e-4765-99fb-5da7a5135579,Namespace:calico-system,Attempt:1,} returns sandbox id \"9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8\"" Sep 4 17:07:24.827270 containerd[1545]: time="2024-09-04T17:07:24.827241894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:07:25.349618 containerd[1545]: time="2024-09-04T17:07:25.349570232Z" level=info msg="StopPodSandbox for \"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\"" Sep 4 17:07:25.419097 containerd[1545]: 2024-09-04 17:07:25.387 [INFO][4455] k8s.go 608: Cleaning up netns ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Sep 4 17:07:25.419097 containerd[1545]: 2024-09-04 17:07:25.388 
[INFO][4455] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" iface="eth0" netns="/var/run/netns/cni-bc3325a1-83f2-fbb6-1068-ba732c397f48" Sep 4 17:07:25.419097 containerd[1545]: 2024-09-04 17:07:25.388 [INFO][4455] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" iface="eth0" netns="/var/run/netns/cni-bc3325a1-83f2-fbb6-1068-ba732c397f48" Sep 4 17:07:25.419097 containerd[1545]: 2024-09-04 17:07:25.388 [INFO][4455] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" iface="eth0" netns="/var/run/netns/cni-bc3325a1-83f2-fbb6-1068-ba732c397f48" Sep 4 17:07:25.419097 containerd[1545]: 2024-09-04 17:07:25.389 [INFO][4455] k8s.go 615: Releasing IP address(es) ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Sep 4 17:07:25.419097 containerd[1545]: 2024-09-04 17:07:25.389 [INFO][4455] utils.go 188: Calico CNI releasing IP address ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Sep 4 17:07:25.419097 containerd[1545]: 2024-09-04 17:07:25.406 [INFO][4462] ipam_plugin.go 417: Releasing address using handleID ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" HandleID="k8s-pod-network.6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Workload="localhost-k8s-coredns--5dd5756b68--kff8p-eth0" Sep 4 17:07:25.419097 containerd[1545]: 2024-09-04 17:07:25.407 [INFO][4462] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:07:25.419097 containerd[1545]: 2024-09-04 17:07:25.407 [INFO][4462] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:07:25.419097 containerd[1545]: 2024-09-04 17:07:25.414 [WARNING][4462] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" HandleID="k8s-pod-network.6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Workload="localhost-k8s-coredns--5dd5756b68--kff8p-eth0" Sep 4 17:07:25.419097 containerd[1545]: 2024-09-04 17:07:25.414 [INFO][4462] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" HandleID="k8s-pod-network.6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Workload="localhost-k8s-coredns--5dd5756b68--kff8p-eth0" Sep 4 17:07:25.419097 containerd[1545]: 2024-09-04 17:07:25.416 [INFO][4462] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:07:25.419097 containerd[1545]: 2024-09-04 17:07:25.417 [INFO][4455] k8s.go 621: Teardown processing complete. ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Sep 4 17:07:25.420482 containerd[1545]: time="2024-09-04T17:07:25.419220978Z" level=info msg="TearDown network for sandbox \"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\" successfully" Sep 4 17:07:25.420482 containerd[1545]: time="2024-09-04T17:07:25.419247698Z" level=info msg="StopPodSandbox for \"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\" returns successfully" Sep 4 17:07:25.420482 containerd[1545]: time="2024-09-04T17:07:25.419888665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-kff8p,Uid:79483386-60ca-499b-a7a4-f16b7727423c,Namespace:kube-system,Attempt:1,}" Sep 4 17:07:25.420615 kubelet[2666]: E0904 17:07:25.419560 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:25.524051 systemd-networkd[1238]: cali990de55b8c9: Link UP Sep 4 17:07:25.525231 systemd-networkd[1238]: cali990de55b8c9: Gained carrier Sep 4 17:07:25.535084 systemd[1]: Started 
sshd@8-10.0.0.15:22-10.0.0.1:46604.service - OpenSSH per-connection server daemon (10.0.0.1:46604). Sep 4 17:07:25.536403 containerd[1545]: 2024-09-04 17:07:25.447 [INFO][4476] utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 17:07:25.536403 containerd[1545]: 2024-09-04 17:07:25.460 [INFO][4476] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--kff8p-eth0 coredns-5dd5756b68- kube-system 79483386-60ca-499b-a7a4-f16b7727423c 832 0 2024-09-04 17:06:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-kff8p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali990de55b8c9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8" Namespace="kube-system" Pod="coredns-5dd5756b68-kff8p" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--kff8p-" Sep 4 17:07:25.536403 containerd[1545]: 2024-09-04 17:07:25.461 [INFO][4476] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8" Namespace="kube-system" Pod="coredns-5dd5756b68-kff8p" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--kff8p-eth0" Sep 4 17:07:25.536403 containerd[1545]: 2024-09-04 17:07:25.488 [INFO][4485] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8" HandleID="k8s-pod-network.95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8" Workload="localhost-k8s-coredns--5dd5756b68--kff8p-eth0" Sep 4 17:07:25.536403 containerd[1545]: 2024-09-04 17:07:25.499 [INFO][4485] ipam_plugin.go 270: Auto assigning IP ContainerID="95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8" 
HandleID="k8s-pod-network.95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8" Workload="localhost-k8s-coredns--5dd5756b68--kff8p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034c0b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-kff8p", "timestamp":"2024-09-04 17:07:25.488274516 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:07:25.536403 containerd[1545]: 2024-09-04 17:07:25.499 [INFO][4485] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:07:25.536403 containerd[1545]: 2024-09-04 17:07:25.499 [INFO][4485] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:07:25.536403 containerd[1545]: 2024-09-04 17:07:25.499 [INFO][4485] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:07:25.536403 containerd[1545]: 2024-09-04 17:07:25.500 [INFO][4485] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8" host="localhost" Sep 4 17:07:25.536403 containerd[1545]: 2024-09-04 17:07:25.503 [INFO][4485] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:07:25.536403 containerd[1545]: 2024-09-04 17:07:25.507 [INFO][4485] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:07:25.536403 containerd[1545]: 2024-09-04 17:07:25.508 [INFO][4485] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:07:25.536403 containerd[1545]: 2024-09-04 17:07:25.510 [INFO][4485] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:07:25.536403 containerd[1545]: 2024-09-04 17:07:25.510 [INFO][4485] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8" host="localhost" Sep 4 17:07:25.536403 containerd[1545]: 2024-09-04 17:07:25.512 [INFO][4485] ipam.go 1685: Creating new handle: k8s-pod-network.95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8 Sep 4 17:07:25.536403 containerd[1545]: 2024-09-04 17:07:25.515 [INFO][4485] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8" host="localhost" Sep 4 17:07:25.536403 containerd[1545]: 2024-09-04 17:07:25.519 [INFO][4485] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8" host="localhost" Sep 4 17:07:25.536403 containerd[1545]: 2024-09-04 17:07:25.519 [INFO][4485] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8" host="localhost" Sep 4 17:07:25.536403 containerd[1545]: 2024-09-04 17:07:25.519 [INFO][4485] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:07:25.536403 containerd[1545]: 2024-09-04 17:07:25.519 [INFO][4485] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8" HandleID="k8s-pod-network.95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8" Workload="localhost-k8s-coredns--5dd5756b68--kff8p-eth0" Sep 4 17:07:25.537179 containerd[1545]: 2024-09-04 17:07:25.521 [INFO][4476] k8s.go 386: Populated endpoint ContainerID="95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8" Namespace="kube-system" Pod="coredns-5dd5756b68-kff8p" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--kff8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--kff8p-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"79483386-60ca-499b-a7a4-f16b7727423c", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 6, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-kff8p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali990de55b8c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:07:25.537179 containerd[1545]: 2024-09-04 17:07:25.521 [INFO][4476] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8" Namespace="kube-system" Pod="coredns-5dd5756b68-kff8p" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--kff8p-eth0" Sep 4 17:07:25.537179 containerd[1545]: 2024-09-04 17:07:25.521 [INFO][4476] dataplane_linux.go 68: Setting the host side veth name to cali990de55b8c9 ContainerID="95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8" Namespace="kube-system" Pod="coredns-5dd5756b68-kff8p" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--kff8p-eth0" Sep 4 17:07:25.537179 containerd[1545]: 2024-09-04 17:07:25.523 [INFO][4476] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8" Namespace="kube-system" Pod="coredns-5dd5756b68-kff8p" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--kff8p-eth0" Sep 4 17:07:25.537179 containerd[1545]: 2024-09-04 17:07:25.523 [INFO][4476] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8" Namespace="kube-system" Pod="coredns-5dd5756b68-kff8p" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--kff8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--kff8p-eth0", 
GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"79483386-60ca-499b-a7a4-f16b7727423c", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 6, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8", Pod:"coredns-5dd5756b68-kff8p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali990de55b8c9", MAC:"5a:72:3c:ae:51:e3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:07:25.537179 containerd[1545]: 2024-09-04 17:07:25.533 [INFO][4476] k8s.go 500: Wrote updated endpoint to datastore ContainerID="95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8" Namespace="kube-system" Pod="coredns-5dd5756b68-kff8p" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--kff8p-eth0" Sep 4 17:07:25.556791 containerd[1545]: 
time="2024-09-04T17:07:25.556680448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:07:25.556791 containerd[1545]: time="2024-09-04T17:07:25.556735848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:07:25.556999 containerd[1545]: time="2024-09-04T17:07:25.556755088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:07:25.557078 containerd[1545]: time="2024-09-04T17:07:25.556900970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:07:25.577366 systemd-resolved[1449]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:07:25.580608 sshd[4494]: Accepted publickey for core from 10.0.0.1 port 46604 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:07:25.581908 sshd[4494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:07:25.587293 systemd-logind[1525]: New session 9 of user core. Sep 4 17:07:25.591635 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 4 17:07:25.598053 containerd[1545]: time="2024-09-04T17:07:25.598009474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-kff8p,Uid:79483386-60ca-499b-a7a4-f16b7727423c,Namespace:kube-system,Attempt:1,} returns sandbox id \"95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8\"" Sep 4 17:07:25.598956 kubelet[2666]: E0904 17:07:25.598931 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:25.603187 systemd[1]: run-netns-cni\x2dbc3325a1\x2d83f2\x2dfbb6\x2d1068\x2dba732c397f48.mount: Deactivated successfully. Sep 4 17:07:25.606243 containerd[1545]: time="2024-09-04T17:07:25.605861682Z" level=info msg="CreateContainer within sandbox \"95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:07:25.624501 containerd[1545]: time="2024-09-04T17:07:25.624454892Z" level=info msg="CreateContainer within sandbox \"95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e37de12e4d02cad2c165337a5b87cdbeac1d25546d4ec1ec3cc6a8f54577b12e\"" Sep 4 17:07:25.625209 containerd[1545]: time="2024-09-04T17:07:25.625084379Z" level=info msg="StartContainer for \"e37de12e4d02cad2c165337a5b87cdbeac1d25546d4ec1ec3cc6a8f54577b12e\"" Sep 4 17:07:25.691098 containerd[1545]: time="2024-09-04T17:07:25.691043562Z" level=info msg="StartContainer for \"e37de12e4d02cad2c165337a5b87cdbeac1d25546d4ec1ec3cc6a8f54577b12e\" returns successfully" Sep 4 17:07:25.766908 systemd-networkd[1238]: cali0063b7323ac: Gained IPv6LL Sep 4 17:07:25.828977 sshd[4494]: pam_unix(sshd:session): session closed for user core Sep 4 17:07:25.832112 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:46604.service: Deactivated successfully. 
Sep 4 17:07:25.835497 systemd-logind[1525]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:07:25.835927 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:07:25.837185 systemd-logind[1525]: Removed session 9. Sep 4 17:07:26.101450 kubelet[2666]: I0904 17:07:26.100418 2666 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:07:26.108467 kubelet[2666]: E0904 17:07:26.108441 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:26.342638 containerd[1545]: time="2024-09-04T17:07:26.342593898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:07:26.343693 containerd[1545]: time="2024-09-04T17:07:26.343656990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Sep 4 17:07:26.344738 containerd[1545]: time="2024-09-04T17:07:26.344684961Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:07:26.346683 containerd[1545]: time="2024-09-04T17:07:26.346623943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:07:26.347748 containerd[1545]: time="2024-09-04T17:07:26.347468552Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 1.520190778s" Sep 4 17:07:26.347748 containerd[1545]: time="2024-09-04T17:07:26.347523032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Sep 4 17:07:26.355217 containerd[1545]: time="2024-09-04T17:07:26.354983315Z" level=info msg="CreateContainer within sandbox \"9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:07:26.365203 containerd[1545]: time="2024-09-04T17:07:26.365162187Z" level=info msg="CreateContainer within sandbox \"9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"161ee01a75674f25f75ab6dcf91ea369cba8648c35ed5390eeb3b845719a0aee\"" Sep 4 17:07:26.366079 containerd[1545]: time="2024-09-04T17:07:26.365685792Z" level=info msg="StartContainer for \"161ee01a75674f25f75ab6dcf91ea369cba8648c35ed5390eeb3b845719a0aee\"" Sep 4 17:07:26.418277 containerd[1545]: time="2024-09-04T17:07:26.418225531Z" level=info msg="StartContainer for \"161ee01a75674f25f75ab6dcf91ea369cba8648c35ed5390eeb3b845719a0aee\" returns successfully" Sep 4 17:07:26.555564 kubelet[2666]: E0904 17:07:26.555509 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:26.558157 kubelet[2666]: E0904 17:07:26.556229 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:26.562497 kubelet[2666]: I0904 17:07:26.562455 2666 pod_startup_latency_tracker.go:102] "Observed pod 
startup duration" pod="calico-system/calico-kube-controllers-589588c958-qszlt" podStartSLOduration=22.041185249 podCreationTimestamp="2024-09-04 17:07:03 +0000 UTC" firstStartedPulling="2024-09-04 17:07:24.826834889 +0000 UTC m=+40.588755469" lastFinishedPulling="2024-09-04 17:07:26.347974397 +0000 UTC m=+42.109894977" observedRunningTime="2024-09-04 17:07:26.561988154 +0000 UTC m=+42.323908734" watchObservedRunningTime="2024-09-04 17:07:26.562324757 +0000 UTC m=+42.324245377" Sep 4 17:07:26.637235 kubelet[2666]: I0904 17:07:26.637111 2666 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-kff8p" podStartSLOduration=30.63707182 podCreationTimestamp="2024-09-04 17:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:07:26.607301852 +0000 UTC m=+42.369222392" watchObservedRunningTime="2024-09-04 17:07:26.63707182 +0000 UTC m=+42.398992400" Sep 4 17:07:26.662292 systemd-networkd[1238]: cali990de55b8c9: Gained IPv6LL Sep 4 17:07:26.821478 kubelet[2666]: I0904 17:07:26.820370 2666 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:07:26.821478 kubelet[2666]: E0904 17:07:26.821166 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:26.878633 systemd[1]: run-containerd-runc-k8s.io-aa2902c5a8a7f8faa78e6618941758a980a3d98ab832f7fe9effbdb88ed81747-runc.1LJAIF.mount: Deactivated successfully. 
Sep 4 17:07:26.934152 kernel: bpftool[4716]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:07:27.149712 systemd-networkd[1238]: vxlan.calico: Link UP Sep 4 17:07:27.149719 systemd-networkd[1238]: vxlan.calico: Gained carrier Sep 4 17:07:27.351917 containerd[1545]: time="2024-09-04T17:07:27.351227516Z" level=info msg="StopPodSandbox for \"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\"" Sep 4 17:07:27.351917 containerd[1545]: time="2024-09-04T17:07:27.351889883Z" level=info msg="StopPodSandbox for \"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\"" Sep 4 17:07:27.461233 containerd[1545]: 2024-09-04 17:07:27.423 [INFO][4868] k8s.go 608: Cleaning up netns ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Sep 4 17:07:27.461233 containerd[1545]: 2024-09-04 17:07:27.423 [INFO][4868] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" iface="eth0" netns="/var/run/netns/cni-bcde3072-5811-cab0-1905-c9099cb38c31" Sep 4 17:07:27.461233 containerd[1545]: 2024-09-04 17:07:27.424 [INFO][4868] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" iface="eth0" netns="/var/run/netns/cni-bcde3072-5811-cab0-1905-c9099cb38c31" Sep 4 17:07:27.461233 containerd[1545]: 2024-09-04 17:07:27.424 [INFO][4868] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" iface="eth0" netns="/var/run/netns/cni-bcde3072-5811-cab0-1905-c9099cb38c31" Sep 4 17:07:27.461233 containerd[1545]: 2024-09-04 17:07:27.424 [INFO][4868] k8s.go 615: Releasing IP address(es) ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Sep 4 17:07:27.461233 containerd[1545]: 2024-09-04 17:07:27.424 [INFO][4868] utils.go 188: Calico CNI releasing IP address ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Sep 4 17:07:27.461233 containerd[1545]: 2024-09-04 17:07:27.446 [INFO][4899] ipam_plugin.go 417: Releasing address using handleID ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" HandleID="k8s-pod-network.76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Workload="localhost-k8s-coredns--5dd5756b68--q2cb6-eth0" Sep 4 17:07:27.461233 containerd[1545]: 2024-09-04 17:07:27.446 [INFO][4899] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:07:27.461233 containerd[1545]: 2024-09-04 17:07:27.446 [INFO][4899] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:07:27.461233 containerd[1545]: 2024-09-04 17:07:27.454 [WARNING][4899] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" HandleID="k8s-pod-network.76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Workload="localhost-k8s-coredns--5dd5756b68--q2cb6-eth0" Sep 4 17:07:27.461233 containerd[1545]: 2024-09-04 17:07:27.454 [INFO][4899] ipam_plugin.go 445: Releasing address using workloadID ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" HandleID="k8s-pod-network.76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Workload="localhost-k8s-coredns--5dd5756b68--q2cb6-eth0" Sep 4 17:07:27.461233 containerd[1545]: 2024-09-04 17:07:27.456 [INFO][4899] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:07:27.461233 containerd[1545]: 2024-09-04 17:07:27.457 [INFO][4868] k8s.go 621: Teardown processing complete. ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Sep 4 17:07:27.464482 containerd[1545]: time="2024-09-04T17:07:27.462682755Z" level=info msg="TearDown network for sandbox \"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\" successfully" Sep 4 17:07:27.464482 containerd[1545]: time="2024-09-04T17:07:27.462715995Z" level=info msg="StopPodSandbox for \"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\" returns successfully" Sep 4 17:07:27.464253 systemd[1]: run-netns-cni\x2dbcde3072\x2d5811\x2dcab0\x2d1905\x2dc9099cb38c31.mount: Deactivated successfully. 
Sep 4 17:07:27.464750 kubelet[2666]: E0904 17:07:27.463038 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:27.465696 containerd[1545]: time="2024-09-04T17:07:27.465002580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-q2cb6,Uid:5b7b310b-92a2-4d6e-b549-69ce5288993d,Namespace:kube-system,Attempt:1,}" Sep 4 17:07:27.535762 containerd[1545]: 2024-09-04 17:07:27.423 [INFO][4867] k8s.go 608: Cleaning up netns ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Sep 4 17:07:27.535762 containerd[1545]: 2024-09-04 17:07:27.424 [INFO][4867] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" iface="eth0" netns="/var/run/netns/cni-9672ce3a-f326-35be-9160-aee01964672e" Sep 4 17:07:27.535762 containerd[1545]: 2024-09-04 17:07:27.424 [INFO][4867] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" iface="eth0" netns="/var/run/netns/cni-9672ce3a-f326-35be-9160-aee01964672e" Sep 4 17:07:27.535762 containerd[1545]: 2024-09-04 17:07:27.424 [INFO][4867] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" iface="eth0" netns="/var/run/netns/cni-9672ce3a-f326-35be-9160-aee01964672e" Sep 4 17:07:27.535762 containerd[1545]: 2024-09-04 17:07:27.424 [INFO][4867] k8s.go 615: Releasing IP address(es) ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Sep 4 17:07:27.535762 containerd[1545]: 2024-09-04 17:07:27.424 [INFO][4867] utils.go 188: Calico CNI releasing IP address ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Sep 4 17:07:27.535762 containerd[1545]: 2024-09-04 17:07:27.446 [INFO][4898] ipam_plugin.go 417: Releasing address using handleID ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" HandleID="k8s-pod-network.42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Workload="localhost-k8s-csi--node--driver--tbgrz-eth0" Sep 4 17:07:27.535762 containerd[1545]: 2024-09-04 17:07:27.446 [INFO][4898] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:07:27.535762 containerd[1545]: 2024-09-04 17:07:27.456 [INFO][4898] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:07:27.535762 containerd[1545]: 2024-09-04 17:07:27.468 [WARNING][4898] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" HandleID="k8s-pod-network.42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Workload="localhost-k8s-csi--node--driver--tbgrz-eth0" Sep 4 17:07:27.535762 containerd[1545]: 2024-09-04 17:07:27.468 [INFO][4898] ipam_plugin.go 445: Releasing address using workloadID ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" HandleID="k8s-pod-network.42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Workload="localhost-k8s-csi--node--driver--tbgrz-eth0" Sep 4 17:07:27.535762 containerd[1545]: 2024-09-04 17:07:27.531 [INFO][4898] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:07:27.535762 containerd[1545]: 2024-09-04 17:07:27.533 [INFO][4867] k8s.go 621: Teardown processing complete. ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Sep 4 17:07:27.536319 containerd[1545]: time="2024-09-04T17:07:27.535905103Z" level=info msg="TearDown network for sandbox \"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\" successfully" Sep 4 17:07:27.536319 containerd[1545]: time="2024-09-04T17:07:27.535931343Z" level=info msg="StopPodSandbox for \"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\" returns successfully" Sep 4 17:07:27.536621 containerd[1545]: time="2024-09-04T17:07:27.536592190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbgrz,Uid:3bc96ff6-744d-455a-9a38-773fca98cdc6,Namespace:calico-system,Attempt:1,}" Sep 4 17:07:27.556459 kubelet[2666]: I0904 17:07:27.556375 2666 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:07:27.558320 kubelet[2666]: E0904 17:07:27.557001 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:27.603540 systemd[1]: 
run-netns-cni\x2d9672ce3a\x2df326\x2d35be\x2d9160\x2daee01964672e.mount: Deactivated successfully. Sep 4 17:07:27.663216 systemd-networkd[1238]: calib27e72d11ee: Link UP Sep 4 17:07:27.663418 systemd-networkd[1238]: calib27e72d11ee: Gained carrier Sep 4 17:07:27.681475 containerd[1545]: 2024-09-04 17:07:27.579 [INFO][4916] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--q2cb6-eth0 coredns-5dd5756b68- kube-system 5b7b310b-92a2-4d6e-b549-69ce5288993d 886 0 2024-09-04 17:06:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-q2cb6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib27e72d11ee [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96" Namespace="kube-system" Pod="coredns-5dd5756b68-q2cb6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--q2cb6-" Sep 4 17:07:27.681475 containerd[1545]: 2024-09-04 17:07:27.579 [INFO][4916] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96" Namespace="kube-system" Pod="coredns-5dd5756b68-q2cb6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--q2cb6-eth0" Sep 4 17:07:27.681475 containerd[1545]: 2024-09-04 17:07:27.616 [INFO][4944] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96" HandleID="k8s-pod-network.3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96" Workload="localhost-k8s-coredns--5dd5756b68--q2cb6-eth0" Sep 4 17:07:27.681475 containerd[1545]: 2024-09-04 17:07:27.630 [INFO][4944] ipam_plugin.go 270: Auto assigning IP 
ContainerID="3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96" HandleID="k8s-pod-network.3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96" Workload="localhost-k8s-coredns--5dd5756b68--q2cb6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000300890), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-q2cb6", "timestamp":"2024-09-04 17:07:27.616936375 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:07:27.681475 containerd[1545]: 2024-09-04 17:07:27.630 [INFO][4944] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:07:27.681475 containerd[1545]: 2024-09-04 17:07:27.630 [INFO][4944] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:07:27.681475 containerd[1545]: 2024-09-04 17:07:27.630 [INFO][4944] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:07:27.681475 containerd[1545]: 2024-09-04 17:07:27.633 [INFO][4944] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96" host="localhost" Sep 4 17:07:27.681475 containerd[1545]: 2024-09-04 17:07:27.637 [INFO][4944] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:07:27.681475 containerd[1545]: 2024-09-04 17:07:27.642 [INFO][4944] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:07:27.681475 containerd[1545]: 2024-09-04 17:07:27.645 [INFO][4944] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:07:27.681475 containerd[1545]: 2024-09-04 17:07:27.647 [INFO][4944] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:07:27.681475 containerd[1545]: 
2024-09-04 17:07:27.647 [INFO][4944] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96" host="localhost" Sep 4 17:07:27.681475 containerd[1545]: 2024-09-04 17:07:27.648 [INFO][4944] ipam.go 1685: Creating new handle: k8s-pod-network.3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96 Sep 4 17:07:27.681475 containerd[1545]: 2024-09-04 17:07:27.651 [INFO][4944] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96" host="localhost" Sep 4 17:07:27.681475 containerd[1545]: 2024-09-04 17:07:27.656 [INFO][4944] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96" host="localhost" Sep 4 17:07:27.681475 containerd[1545]: 2024-09-04 17:07:27.656 [INFO][4944] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96" host="localhost" Sep 4 17:07:27.681475 containerd[1545]: 2024-09-04 17:07:27.656 [INFO][4944] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:07:27.681475 containerd[1545]: 2024-09-04 17:07:27.656 [INFO][4944] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96" HandleID="k8s-pod-network.3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96" Workload="localhost-k8s-coredns--5dd5756b68--q2cb6-eth0" Sep 4 17:07:27.682007 containerd[1545]: 2024-09-04 17:07:27.659 [INFO][4916] k8s.go 386: Populated endpoint ContainerID="3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96" Namespace="kube-system" Pod="coredns-5dd5756b68-q2cb6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--q2cb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--q2cb6-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"5b7b310b-92a2-4d6e-b549-69ce5288993d", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 6, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-q2cb6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib27e72d11ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:07:27.682007 containerd[1545]: 2024-09-04 17:07:27.659 [INFO][4916] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96" Namespace="kube-system" Pod="coredns-5dd5756b68-q2cb6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--q2cb6-eth0" Sep 4 17:07:27.682007 containerd[1545]: 2024-09-04 17:07:27.659 [INFO][4916] dataplane_linux.go 68: Setting the host side veth name to calib27e72d11ee ContainerID="3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96" Namespace="kube-system" Pod="coredns-5dd5756b68-q2cb6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--q2cb6-eth0" Sep 4 17:07:27.682007 containerd[1545]: 2024-09-04 17:07:27.664 [INFO][4916] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96" Namespace="kube-system" Pod="coredns-5dd5756b68-q2cb6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--q2cb6-eth0" Sep 4 17:07:27.682007 containerd[1545]: 2024-09-04 17:07:27.665 [INFO][4916] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96" Namespace="kube-system" Pod="coredns-5dd5756b68-q2cb6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--q2cb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--q2cb6-eth0", 
GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"5b7b310b-92a2-4d6e-b549-69ce5288993d", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 6, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96", Pod:"coredns-5dd5756b68-q2cb6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib27e72d11ee", MAC:"36:ad:65:b1:0a:f3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:07:27.682007 containerd[1545]: 2024-09-04 17:07:27.676 [INFO][4916] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96" Namespace="kube-system" Pod="coredns-5dd5756b68-q2cb6" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--q2cb6-eth0" Sep 4 17:07:27.700460 systemd-networkd[1238]: 
cali82bbbbeb400: Link UP Sep 4 17:07:27.700669 systemd-networkd[1238]: cali82bbbbeb400: Gained carrier Sep 4 17:07:27.714887 containerd[1545]: time="2024-09-04T17:07:27.714780028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:07:27.714887 containerd[1545]: time="2024-09-04T17:07:27.714837748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:07:27.714887 containerd[1545]: time="2024-09-04T17:07:27.714852948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:07:27.714887 containerd[1545]: time="2024-09-04T17:07:27.714863668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:07:27.720234 containerd[1545]: 2024-09-04 17:07:27.593 [INFO][4928] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--tbgrz-eth0 csi-node-driver- calico-system 3bc96ff6-744d-455a-9a38-773fca98cdc6 887 0 2024-09-04 17:07:03 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-tbgrz eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali82bbbbeb400 [] []}} ContainerID="a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c" Namespace="calico-system" Pod="csi-node-driver-tbgrz" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbgrz-" Sep 4 17:07:27.720234 containerd[1545]: 2024-09-04 17:07:27.594 [INFO][4928] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c" Namespace="calico-system" Pod="csi-node-driver-tbgrz" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbgrz-eth0" Sep 4 17:07:27.720234 containerd[1545]: 2024-09-04 17:07:27.621 [INFO][4952] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c" HandleID="k8s-pod-network.a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c" Workload="localhost-k8s-csi--node--driver--tbgrz-eth0" Sep 4 17:07:27.720234 containerd[1545]: 2024-09-04 17:07:27.635 [INFO][4952] ipam_plugin.go 270: Auto assigning IP ContainerID="a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c" HandleID="k8s-pod-network.a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c" Workload="localhost-k8s-csi--node--driver--tbgrz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400012c410), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-tbgrz", "timestamp":"2024-09-04 17:07:27.621577345 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:07:27.720234 containerd[1545]: 2024-09-04 17:07:27.635 [INFO][4952] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:07:27.720234 containerd[1545]: 2024-09-04 17:07:27.656 [INFO][4952] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:07:27.720234 containerd[1545]: 2024-09-04 17:07:27.656 [INFO][4952] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:07:27.720234 containerd[1545]: 2024-09-04 17:07:27.658 [INFO][4952] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c" host="localhost" Sep 4 17:07:27.720234 containerd[1545]: 2024-09-04 17:07:27.668 [INFO][4952] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:07:27.720234 containerd[1545]: 2024-09-04 17:07:27.677 [INFO][4952] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:07:27.720234 containerd[1545]: 2024-09-04 17:07:27.678 [INFO][4952] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:07:27.720234 containerd[1545]: 2024-09-04 17:07:27.682 [INFO][4952] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:07:27.720234 containerd[1545]: 2024-09-04 17:07:27.682 [INFO][4952] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c" host="localhost" Sep 4 17:07:27.720234 containerd[1545]: 2024-09-04 17:07:27.685 [INFO][4952] ipam.go 1685: Creating new handle: k8s-pod-network.a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c Sep 4 17:07:27.720234 containerd[1545]: 2024-09-04 17:07:27.689 [INFO][4952] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c" host="localhost" Sep 4 17:07:27.720234 containerd[1545]: 2024-09-04 17:07:27.696 [INFO][4952] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c" host="localhost" Sep 4 
17:07:27.720234 containerd[1545]: 2024-09-04 17:07:27.696 [INFO][4952] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c" host="localhost" Sep 4 17:07:27.720234 containerd[1545]: 2024-09-04 17:07:27.696 [INFO][4952] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:07:27.720234 containerd[1545]: 2024-09-04 17:07:27.696 [INFO][4952] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c" HandleID="k8s-pod-network.a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c" Workload="localhost-k8s-csi--node--driver--tbgrz-eth0" Sep 4 17:07:27.720795 containerd[1545]: 2024-09-04 17:07:27.698 [INFO][4928] k8s.go 386: Populated endpoint ContainerID="a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c" Namespace="calico-system" Pod="csi-node-driver-tbgrz" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbgrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tbgrz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3bc96ff6-744d-455a-9a38-773fca98cdc6", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 7, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-tbgrz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali82bbbbeb400", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:07:27.720795 containerd[1545]: 2024-09-04 17:07:27.699 [INFO][4928] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c" Namespace="calico-system" Pod="csi-node-driver-tbgrz" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbgrz-eth0" Sep 4 17:07:27.720795 containerd[1545]: 2024-09-04 17:07:27.699 [INFO][4928] dataplane_linux.go 68: Setting the host side veth name to cali82bbbbeb400 ContainerID="a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c" Namespace="calico-system" Pod="csi-node-driver-tbgrz" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbgrz-eth0" Sep 4 17:07:27.720795 containerd[1545]: 2024-09-04 17:07:27.700 [INFO][4928] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c" Namespace="calico-system" Pod="csi-node-driver-tbgrz" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbgrz-eth0" Sep 4 17:07:27.720795 containerd[1545]: 2024-09-04 17:07:27.701 [INFO][4928] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c" Namespace="calico-system" Pod="csi-node-driver-tbgrz" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbgrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tbgrz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3bc96ff6-744d-455a-9a38-773fca98cdc6", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 7, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c", Pod:"csi-node-driver-tbgrz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali82bbbbeb400", MAC:"9e:fc:e3:23:b7:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:07:27.720795 containerd[1545]: 2024-09-04 17:07:27.713 [INFO][4928] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c" Namespace="calico-system" Pod="csi-node-driver-tbgrz" WorkloadEndpoint="localhost-k8s-csi--node--driver--tbgrz-eth0" Sep 4 17:07:27.748589 systemd[1]: run-containerd-runc-k8s.io-3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96-runc.bKr1da.mount: Deactivated successfully. 
Sep 4 17:07:27.754981 systemd-resolved[1449]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:07:27.759936 containerd[1545]: time="2024-09-04T17:07:27.759841472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:07:27.759936 containerd[1545]: time="2024-09-04T17:07:27.759905513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:07:27.759936 containerd[1545]: time="2024-09-04T17:07:27.759919513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:07:27.760201 containerd[1545]: time="2024-09-04T17:07:27.759994714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:07:27.783176 containerd[1545]: time="2024-09-04T17:07:27.783095883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-q2cb6,Uid:5b7b310b-92a2-4d6e-b549-69ce5288993d,Namespace:kube-system,Attempt:1,} returns sandbox id \"3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96\"" Sep 4 17:07:27.784340 kubelet[2666]: E0904 17:07:27.783841 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:27.786574 containerd[1545]: time="2024-09-04T17:07:27.786534680Z" level=info msg="CreateContainer within sandbox \"3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:07:27.787559 systemd-resolved[1449]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:07:27.799719 containerd[1545]: 
time="2024-09-04T17:07:27.799683901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbgrz,Uid:3bc96ff6-744d-455a-9a38-773fca98cdc6,Namespace:calico-system,Attempt:1,} returns sandbox id \"a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c\"" Sep 4 17:07:27.801412 containerd[1545]: time="2024-09-04T17:07:27.801256318Z" level=info msg="CreateContainer within sandbox \"3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f370763b8e129473616f5119fb7d5c1cc1ebde483122e2e569202c53387d02c9\"" Sep 4 17:07:27.801716 containerd[1545]: time="2024-09-04T17:07:27.801695843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:07:27.802001 containerd[1545]: time="2024-09-04T17:07:27.801888125Z" level=info msg="StartContainer for \"f370763b8e129473616f5119fb7d5c1cc1ebde483122e2e569202c53387d02c9\"" Sep 4 17:07:27.847655 containerd[1545]: time="2024-09-04T17:07:27.847603817Z" level=info msg="StartContainer for \"f370763b8e129473616f5119fb7d5c1cc1ebde483122e2e569202c53387d02c9\" returns successfully" Sep 4 17:07:28.390354 systemd-networkd[1238]: vxlan.calico: Gained IPv6LL Sep 4 17:07:28.562760 kubelet[2666]: E0904 17:07:28.561575 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:28.562760 kubelet[2666]: E0904 17:07:28.562319 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:28.586792 kubelet[2666]: I0904 17:07:28.586750 2666 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-q2cb6" podStartSLOduration=32.586713273 podCreationTimestamp="2024-09-04 17:06:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:07:28.575561796 +0000 UTC m=+44.337482376" watchObservedRunningTime="2024-09-04 17:07:28.586713273 +0000 UTC m=+44.348633853" Sep 4 17:07:28.902447 systemd-networkd[1238]: cali82bbbbeb400: Gained IPv6LL Sep 4 17:07:28.903106 systemd-networkd[1238]: calib27e72d11ee: Gained IPv6LL Sep 4 17:07:28.967532 containerd[1545]: time="2024-09-04T17:07:28.967473322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:07:28.968064 containerd[1545]: time="2024-09-04T17:07:28.967932966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Sep 4 17:07:28.968852 containerd[1545]: time="2024-09-04T17:07:28.968821496Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:07:28.971588 containerd[1545]: time="2024-09-04T17:07:28.971551284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:07:28.972312 containerd[1545]: time="2024-09-04T17:07:28.972221771Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 1.170405927s" Sep 4 17:07:28.972312 containerd[1545]: time="2024-09-04T17:07:28.972257972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Sep 
4 17:07:28.974354 containerd[1545]: time="2024-09-04T17:07:28.974319954Z" level=info msg="CreateContainer within sandbox \"a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:07:28.991793 containerd[1545]: time="2024-09-04T17:07:28.991682616Z" level=info msg="CreateContainer within sandbox \"a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b9306c3a057a4796fc3fc6cc953b3046dc713309cad004da6118ace6ef15b659\"" Sep 4 17:07:28.992238 containerd[1545]: time="2024-09-04T17:07:28.992200542Z" level=info msg="StartContainer for \"b9306c3a057a4796fc3fc6cc953b3046dc713309cad004da6118ace6ef15b659\"" Sep 4 17:07:29.053986 containerd[1545]: time="2024-09-04T17:07:29.053874539Z" level=info msg="StartContainer for \"b9306c3a057a4796fc3fc6cc953b3046dc713309cad004da6118ace6ef15b659\" returns successfully" Sep 4 17:07:29.055134 containerd[1545]: time="2024-09-04T17:07:29.055098672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:07:29.565293 kubelet[2666]: E0904 17:07:29.565249 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:30.442021 containerd[1545]: time="2024-09-04T17:07:30.441958997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:07:30.442713 containerd[1545]: time="2024-09-04T17:07:30.442660164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Sep 4 17:07:30.443216 containerd[1545]: time="2024-09-04T17:07:30.443179530Z" level=info msg="ImageCreate event 
name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:07:30.445790 containerd[1545]: time="2024-09-04T17:07:30.445749676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:07:30.446787 containerd[1545]: time="2024-09-04T17:07:30.446291721Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 1.391139528s" Sep 4 17:07:30.446787 containerd[1545]: time="2024-09-04T17:07:30.446324801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Sep 4 17:07:30.448762 containerd[1545]: time="2024-09-04T17:07:30.448723426Z" level=info msg="CreateContainer within sandbox \"a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:07:30.463409 containerd[1545]: time="2024-09-04T17:07:30.463356293Z" level=info msg="CreateContainer within sandbox \"a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f9efb699abfb12a16d4f6e27c9a509c20a7b70a4b6f9bbdb98a1001275b2b8f6\"" Sep 4 17:07:30.463941 containerd[1545]: time="2024-09-04T17:07:30.463874019Z" level=info msg="StartContainer for \"f9efb699abfb12a16d4f6e27c9a509c20a7b70a4b6f9bbdb98a1001275b2b8f6\"" 
Sep 4 17:07:30.524069 containerd[1545]: time="2024-09-04T17:07:30.524022466Z" level=info msg="StartContainer for \"f9efb699abfb12a16d4f6e27c9a509c20a7b70a4b6f9bbdb98a1001275b2b8f6\" returns successfully" Sep 4 17:07:30.569067 kubelet[2666]: E0904 17:07:30.569036 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:30.845396 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:46666.service - OpenSSH per-connection server daemon (10.0.0.1:46666). Sep 4 17:07:30.895223 sshd[5198]: Accepted publickey for core from 10.0.0.1 port 46666 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:07:30.896899 sshd[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:07:30.903499 systemd-logind[1525]: New session 10 of user core. Sep 4 17:07:30.910496 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:07:31.120501 sshd[5198]: pam_unix(sshd:session): session closed for user core Sep 4 17:07:31.130058 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:46680.service - OpenSSH per-connection server daemon (10.0.0.1:46680). Sep 4 17:07:31.131260 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:46666.service: Deactivated successfully. Sep 4 17:07:31.133186 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:07:31.140986 systemd-logind[1525]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:07:31.145423 systemd-logind[1525]: Removed session 10. Sep 4 17:07:31.168555 sshd[5212]: Accepted publickey for core from 10.0.0.1 port 46680 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:07:31.169955 sshd[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:07:31.173921 systemd-logind[1525]: New session 11 of user core. Sep 4 17:07:31.181412 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 4 17:07:31.454036 kubelet[2666]: I0904 17:07:31.453837 2666 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:07:31.454868 kubelet[2666]: I0904 17:07:31.454813 2666 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:07:31.482497 sshd[5212]: pam_unix(sshd:session): session closed for user core Sep 4 17:07:31.499909 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:46684.service - OpenSSH per-connection server daemon (10.0.0.1:46684). Sep 4 17:07:31.502046 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:46680.service: Deactivated successfully. Sep 4 17:07:31.508071 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:07:31.514820 systemd-logind[1525]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:07:31.516064 systemd-logind[1525]: Removed session 11. Sep 4 17:07:31.554908 sshd[5225]: Accepted publickey for core from 10.0.0.1 port 46684 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:07:31.556326 sshd[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:07:31.560944 systemd-logind[1525]: New session 12 of user core. Sep 4 17:07:31.570488 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:07:31.720370 sshd[5225]: pam_unix(sshd:session): session closed for user core Sep 4 17:07:31.723994 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:46684.service: Deactivated successfully. Sep 4 17:07:31.726271 systemd-logind[1525]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:07:31.726279 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:07:31.727507 systemd-logind[1525]: Removed session 12. 
Sep 4 17:07:35.807853 kubelet[2666]: I0904 17:07:35.807790 2666 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:07:35.859890 kubelet[2666]: I0904 17:07:35.859832 2666 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-tbgrz" podStartSLOduration=30.214147705 podCreationTimestamp="2024-09-04 17:07:03 +0000 UTC" firstStartedPulling="2024-09-04 17:07:27.800911234 +0000 UTC m=+43.562831814" lastFinishedPulling="2024-09-04 17:07:30.446553884 +0000 UTC m=+46.208474464" observedRunningTime="2024-09-04 17:07:30.61147367 +0000 UTC m=+46.373394250" watchObservedRunningTime="2024-09-04 17:07:35.859790355 +0000 UTC m=+51.621710935" Sep 4 17:07:36.729388 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:46582.service - OpenSSH per-connection server daemon (10.0.0.1:46582). Sep 4 17:07:36.764049 sshd[5289]: Accepted publickey for core from 10.0.0.1 port 46582 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:07:36.765420 sshd[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:07:36.770946 systemd-logind[1525]: New session 13 of user core. Sep 4 17:07:36.781473 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:07:36.911065 sshd[5289]: pam_unix(sshd:session): session closed for user core Sep 4 17:07:36.922632 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:46612.service - OpenSSH per-connection server daemon (10.0.0.1:46612). Sep 4 17:07:36.923010 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:46582.service: Deactivated successfully. Sep 4 17:07:36.928171 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:07:36.928237 systemd-logind[1525]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:07:36.931873 systemd-logind[1525]: Removed session 13. 
Sep 4 17:07:36.972038 sshd[5301]: Accepted publickey for core from 10.0.0.1 port 46612 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:07:36.973551 sshd[5301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:07:36.977273 systemd-logind[1525]: New session 14 of user core. Sep 4 17:07:36.986356 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:07:37.221474 sshd[5301]: pam_unix(sshd:session): session closed for user core Sep 4 17:07:37.227644 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:46624.service - OpenSSH per-connection server daemon (10.0.0.1:46624). Sep 4 17:07:37.228159 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:46612.service: Deactivated successfully. Sep 4 17:07:37.231373 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:07:37.233512 systemd-logind[1525]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:07:37.234658 systemd-logind[1525]: Removed session 14. Sep 4 17:07:37.271068 sshd[5314]: Accepted publickey for core from 10.0.0.1 port 46624 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:07:37.272464 sshd[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:07:37.281222 systemd-logind[1525]: New session 15 of user core. Sep 4 17:07:37.287377 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:07:38.126637 sshd[5314]: pam_unix(sshd:session): session closed for user core Sep 4 17:07:38.138471 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:46682.service - OpenSSH per-connection server daemon (10.0.0.1:46682). Sep 4 17:07:38.145400 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:46624.service: Deactivated successfully. Sep 4 17:07:38.152368 systemd-logind[1525]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:07:38.154292 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:07:38.156528 systemd-logind[1525]: Removed session 15. 
Sep 4 17:07:38.186963 sshd[5350]: Accepted publickey for core from 10.0.0.1 port 46682 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:07:38.190214 sshd[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:07:38.195992 systemd-logind[1525]: New session 16 of user core. Sep 4 17:07:38.205353 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:07:38.631030 sshd[5350]: pam_unix(sshd:session): session closed for user core Sep 4 17:07:38.638395 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:46690.service - OpenSSH per-connection server daemon (10.0.0.1:46690). Sep 4 17:07:38.638803 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:46682.service: Deactivated successfully. Sep 4 17:07:38.643012 systemd-logind[1525]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:07:38.643443 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:07:38.645538 systemd-logind[1525]: Removed session 16. Sep 4 17:07:38.676624 sshd[5365]: Accepted publickey for core from 10.0.0.1 port 46690 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:07:38.677953 sshd[5365]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:07:38.681744 systemd-logind[1525]: New session 17 of user core. Sep 4 17:07:38.691405 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:07:38.822342 sshd[5365]: pam_unix(sshd:session): session closed for user core Sep 4 17:07:38.825749 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:46690.service: Deactivated successfully. Sep 4 17:07:38.828514 systemd-logind[1525]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:07:38.829356 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:07:38.830821 systemd-logind[1525]: Removed session 17. Sep 4 17:07:43.836379 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:51516.service - OpenSSH per-connection server daemon (10.0.0.1:51516). 
Sep 4 17:07:43.879171 sshd[5403]: Accepted publickey for core from 10.0.0.1 port 51516 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:07:43.876629 sshd[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:07:43.883490 systemd-logind[1525]: New session 18 of user core. Sep 4 17:07:43.888386 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:07:44.042265 sshd[5403]: pam_unix(sshd:session): session closed for user core Sep 4 17:07:44.045702 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:51516.service: Deactivated successfully. Sep 4 17:07:44.049417 systemd-logind[1525]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:07:44.049634 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:07:44.050871 systemd-logind[1525]: Removed session 18. Sep 4 17:07:44.304957 containerd[1545]: time="2024-09-04T17:07:44.304285530Z" level=info msg="StopPodSandbox for \"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\"" Sep 4 17:07:44.400522 containerd[1545]: 2024-09-04 17:07:44.346 [WARNING][5433] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tbgrz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3bc96ff6-744d-455a-9a38-773fca98cdc6", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 7, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c", Pod:"csi-node-driver-tbgrz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali82bbbbeb400", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:07:44.400522 containerd[1545]: 2024-09-04 17:07:44.346 [INFO][5433] k8s.go 608: Cleaning up netns ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Sep 4 17:07:44.400522 containerd[1545]: 2024-09-04 17:07:44.346 [INFO][5433] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" iface="eth0" netns="" Sep 4 17:07:44.400522 containerd[1545]: 2024-09-04 17:07:44.346 [INFO][5433] k8s.go 615: Releasing IP address(es) ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Sep 4 17:07:44.400522 containerd[1545]: 2024-09-04 17:07:44.346 [INFO][5433] utils.go 188: Calico CNI releasing IP address ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Sep 4 17:07:44.400522 containerd[1545]: 2024-09-04 17:07:44.374 [INFO][5441] ipam_plugin.go 417: Releasing address using handleID ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" HandleID="k8s-pod-network.42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Workload="localhost-k8s-csi--node--driver--tbgrz-eth0" Sep 4 17:07:44.400522 containerd[1545]: 2024-09-04 17:07:44.374 [INFO][5441] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:07:44.400522 containerd[1545]: 2024-09-04 17:07:44.374 [INFO][5441] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:07:44.400522 containerd[1545]: 2024-09-04 17:07:44.392 [WARNING][5441] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" HandleID="k8s-pod-network.42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Workload="localhost-k8s-csi--node--driver--tbgrz-eth0" Sep 4 17:07:44.400522 containerd[1545]: 2024-09-04 17:07:44.392 [INFO][5441] ipam_plugin.go 445: Releasing address using workloadID ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" HandleID="k8s-pod-network.42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Workload="localhost-k8s-csi--node--driver--tbgrz-eth0" Sep 4 17:07:44.400522 containerd[1545]: 2024-09-04 17:07:44.394 [INFO][5441] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:07:44.400522 containerd[1545]: 2024-09-04 17:07:44.395 [INFO][5433] k8s.go 621: Teardown processing complete. ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Sep 4 17:07:44.400936 containerd[1545]: time="2024-09-04T17:07:44.400705248Z" level=info msg="TearDown network for sandbox \"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\" successfully" Sep 4 17:07:44.400936 containerd[1545]: time="2024-09-04T17:07:44.400734968Z" level=info msg="StopPodSandbox for \"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\" returns successfully" Sep 4 17:07:44.402333 containerd[1545]: time="2024-09-04T17:07:44.402289421Z" level=info msg="RemovePodSandbox for \"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\"" Sep 4 17:07:44.410980 containerd[1545]: time="2024-09-04T17:07:44.402334021Z" level=info msg="Forcibly stopping sandbox \"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\"" Sep 4 17:07:44.483736 containerd[1545]: 2024-09-04 17:07:44.450 [WARNING][5466] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--tbgrz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3bc96ff6-744d-455a-9a38-773fca98cdc6", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 7, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a9c84255fa26d4b4af22d34aa1c6cc0bce9a66577db5b2bec1a5808658665d7c", Pod:"csi-node-driver-tbgrz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali82bbbbeb400", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:07:44.483736 containerd[1545]: 2024-09-04 17:07:44.450 [INFO][5466] k8s.go 608: Cleaning up netns ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Sep 4 17:07:44.483736 containerd[1545]: 2024-09-04 17:07:44.450 [INFO][5466] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" iface="eth0" netns="" Sep 4 17:07:44.483736 containerd[1545]: 2024-09-04 17:07:44.450 [INFO][5466] k8s.go 615: Releasing IP address(es) ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Sep 4 17:07:44.483736 containerd[1545]: 2024-09-04 17:07:44.450 [INFO][5466] utils.go 188: Calico CNI releasing IP address ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Sep 4 17:07:44.483736 containerd[1545]: 2024-09-04 17:07:44.469 [INFO][5474] ipam_plugin.go 417: Releasing address using handleID ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" HandleID="k8s-pod-network.42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Workload="localhost-k8s-csi--node--driver--tbgrz-eth0" Sep 4 17:07:44.483736 containerd[1545]: 2024-09-04 17:07:44.470 [INFO][5474] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:07:44.483736 containerd[1545]: 2024-09-04 17:07:44.470 [INFO][5474] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:07:44.483736 containerd[1545]: 2024-09-04 17:07:44.479 [WARNING][5474] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" HandleID="k8s-pod-network.42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Workload="localhost-k8s-csi--node--driver--tbgrz-eth0" Sep 4 17:07:44.483736 containerd[1545]: 2024-09-04 17:07:44.479 [INFO][5474] ipam_plugin.go 445: Releasing address using workloadID ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" HandleID="k8s-pod-network.42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Workload="localhost-k8s-csi--node--driver--tbgrz-eth0" Sep 4 17:07:44.483736 containerd[1545]: 2024-09-04 17:07:44.480 [INFO][5474] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:07:44.483736 containerd[1545]: 2024-09-04 17:07:44.482 [INFO][5466] k8s.go 621: Teardown processing complete. ContainerID="42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb" Sep 4 17:07:44.484139 containerd[1545]: time="2024-09-04T17:07:44.483771895Z" level=info msg="TearDown network for sandbox \"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\" successfully" Sep 4 17:07:44.493075 containerd[1545]: time="2024-09-04T17:07:44.492686569Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:07:44.493075 containerd[1545]: time="2024-09-04T17:07:44.492789569Z" level=info msg="RemovePodSandbox \"42d7e2b2c56ccc0908516a91cb438dbd977fc7e6f09944fc6dfdfa8f61f51bdb\" returns successfully" Sep 4 17:07:44.494787 containerd[1545]: time="2024-09-04T17:07:44.493377774Z" level=info msg="StopPodSandbox for \"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\"" Sep 4 17:07:44.595461 containerd[1545]: 2024-09-04 17:07:44.552 [WARNING][5497] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--kff8p-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"79483386-60ca-499b-a7a4-f16b7727423c", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 6, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8", Pod:"coredns-5dd5756b68-kff8p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali990de55b8c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:07:44.595461 containerd[1545]: 2024-09-04 17:07:44.552 [INFO][5497] k8s.go 608: Cleaning up netns 
ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Sep 4 17:07:44.595461 containerd[1545]: 2024-09-04 17:07:44.552 [INFO][5497] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" iface="eth0" netns="" Sep 4 17:07:44.595461 containerd[1545]: 2024-09-04 17:07:44.552 [INFO][5497] k8s.go 615: Releasing IP address(es) ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Sep 4 17:07:44.595461 containerd[1545]: 2024-09-04 17:07:44.552 [INFO][5497] utils.go 188: Calico CNI releasing IP address ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Sep 4 17:07:44.595461 containerd[1545]: 2024-09-04 17:07:44.578 [INFO][5505] ipam_plugin.go 417: Releasing address using handleID ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" HandleID="k8s-pod-network.6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Workload="localhost-k8s-coredns--5dd5756b68--kff8p-eth0" Sep 4 17:07:44.595461 containerd[1545]: 2024-09-04 17:07:44.578 [INFO][5505] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:07:44.595461 containerd[1545]: 2024-09-04 17:07:44.578 [INFO][5505] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:07:44.595461 containerd[1545]: 2024-09-04 17:07:44.588 [WARNING][5505] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" HandleID="k8s-pod-network.6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Workload="localhost-k8s-coredns--5dd5756b68--kff8p-eth0" Sep 4 17:07:44.595461 containerd[1545]: 2024-09-04 17:07:44.588 [INFO][5505] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" HandleID="k8s-pod-network.6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Workload="localhost-k8s-coredns--5dd5756b68--kff8p-eth0" Sep 4 17:07:44.595461 containerd[1545]: 2024-09-04 17:07:44.590 [INFO][5505] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:07:44.595461 containerd[1545]: 2024-09-04 17:07:44.592 [INFO][5497] k8s.go 621: Teardown processing complete. ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Sep 4 17:07:44.595461 containerd[1545]: time="2024-09-04T17:07:44.595289057Z" level=info msg="TearDown network for sandbox \"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\" successfully" Sep 4 17:07:44.595461 containerd[1545]: time="2024-09-04T17:07:44.595313577Z" level=info msg="StopPodSandbox for \"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\" returns successfully" Sep 4 17:07:44.598152 containerd[1545]: time="2024-09-04T17:07:44.598101601Z" level=info msg="RemovePodSandbox for \"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\"" Sep 4 17:07:44.598618 containerd[1545]: time="2024-09-04T17:07:44.598297562Z" level=info msg="Forcibly stopping sandbox \"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\"" Sep 4 17:07:44.671685 containerd[1545]: 2024-09-04 17:07:44.633 [WARNING][5528] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--kff8p-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"79483386-60ca-499b-a7a4-f16b7727423c", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 6, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"95aa3b7a1535552c37cc2711edf13e216f31babc71b36b8b393df298779f2bf8", Pod:"coredns-5dd5756b68-kff8p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali990de55b8c9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:07:44.671685 containerd[1545]: 2024-09-04 17:07:44.633 [INFO][5528] k8s.go 608: Cleaning up netns 
ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Sep 4 17:07:44.671685 containerd[1545]: 2024-09-04 17:07:44.633 [INFO][5528] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" iface="eth0" netns="" Sep 4 17:07:44.671685 containerd[1545]: 2024-09-04 17:07:44.633 [INFO][5528] k8s.go 615: Releasing IP address(es) ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Sep 4 17:07:44.671685 containerd[1545]: 2024-09-04 17:07:44.633 [INFO][5528] utils.go 188: Calico CNI releasing IP address ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Sep 4 17:07:44.671685 containerd[1545]: 2024-09-04 17:07:44.658 [INFO][5536] ipam_plugin.go 417: Releasing address using handleID ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" HandleID="k8s-pod-network.6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Workload="localhost-k8s-coredns--5dd5756b68--kff8p-eth0" Sep 4 17:07:44.671685 containerd[1545]: 2024-09-04 17:07:44.658 [INFO][5536] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:07:44.671685 containerd[1545]: 2024-09-04 17:07:44.658 [INFO][5536] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:07:44.671685 containerd[1545]: 2024-09-04 17:07:44.667 [WARNING][5536] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" HandleID="k8s-pod-network.6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Workload="localhost-k8s-coredns--5dd5756b68--kff8p-eth0" Sep 4 17:07:44.671685 containerd[1545]: 2024-09-04 17:07:44.667 [INFO][5536] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" HandleID="k8s-pod-network.6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Workload="localhost-k8s-coredns--5dd5756b68--kff8p-eth0" Sep 4 17:07:44.671685 containerd[1545]: 2024-09-04 17:07:44.668 [INFO][5536] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:07:44.671685 containerd[1545]: 2024-09-04 17:07:44.670 [INFO][5528] k8s.go 621: Teardown processing complete. ContainerID="6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9" Sep 4 17:07:44.672103 containerd[1545]: time="2024-09-04T17:07:44.671722289Z" level=info msg="TearDown network for sandbox \"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\" successfully" Sep 4 17:07:44.674612 containerd[1545]: time="2024-09-04T17:07:44.674571593Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:07:44.674666 containerd[1545]: time="2024-09-04T17:07:44.674646194Z" level=info msg="RemovePodSandbox \"6c9cb6411aea47f8894ce9b853a87080fbc246cc16415192a7b07e70b66f6ec9\" returns successfully" Sep 4 17:07:44.675532 containerd[1545]: time="2024-09-04T17:07:44.675242719Z" level=info msg="StopPodSandbox for \"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\"" Sep 4 17:07:44.747154 containerd[1545]: 2024-09-04 17:07:44.712 [WARNING][5559] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0", GenerateName:"calico-kube-controllers-589588c958-", Namespace:"calico-system", SelfLink:"", UID:"5ff23458-425e-4765-99fb-5da7a5135579", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 7, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"589588c958", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8", Pod:"calico-kube-controllers-589588c958-qszlt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0063b7323ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:07:44.747154 containerd[1545]: 2024-09-04 17:07:44.712 [INFO][5559] k8s.go 608: Cleaning up netns ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Sep 4 17:07:44.747154 containerd[1545]: 2024-09-04 17:07:44.712 [INFO][5559] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" iface="eth0" netns="" Sep 4 17:07:44.747154 containerd[1545]: 2024-09-04 17:07:44.712 [INFO][5559] k8s.go 615: Releasing IP address(es) ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Sep 4 17:07:44.747154 containerd[1545]: 2024-09-04 17:07:44.712 [INFO][5559] utils.go 188: Calico CNI releasing IP address ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Sep 4 17:07:44.747154 containerd[1545]: 2024-09-04 17:07:44.731 [INFO][5566] ipam_plugin.go 417: Releasing address using handleID ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" HandleID="k8s-pod-network.b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Workload="localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0" Sep 4 17:07:44.747154 containerd[1545]: 2024-09-04 17:07:44.731 [INFO][5566] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:07:44.747154 containerd[1545]: 2024-09-04 17:07:44.731 [INFO][5566] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:07:44.747154 containerd[1545]: 2024-09-04 17:07:44.741 [WARNING][5566] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" HandleID="k8s-pod-network.b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Workload="localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0" Sep 4 17:07:44.747154 containerd[1545]: 2024-09-04 17:07:44.741 [INFO][5566] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" HandleID="k8s-pod-network.b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Workload="localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0" Sep 4 17:07:44.747154 containerd[1545]: 2024-09-04 17:07:44.742 [INFO][5566] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:07:44.747154 containerd[1545]: 2024-09-04 17:07:44.744 [INFO][5559] k8s.go 621: Teardown processing complete. ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Sep 4 17:07:44.747154 containerd[1545]: time="2024-09-04T17:07:44.747075953Z" level=info msg="TearDown network for sandbox \"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\" successfully" Sep 4 17:07:44.747154 containerd[1545]: time="2024-09-04T17:07:44.747111193Z" level=info msg="StopPodSandbox for \"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\" returns successfully" Sep 4 17:07:44.748090 containerd[1545]: time="2024-09-04T17:07:44.747776639Z" level=info msg="RemovePodSandbox for \"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\"" Sep 4 17:07:44.748090 containerd[1545]: time="2024-09-04T17:07:44.747813999Z" level=info msg="Forcibly stopping sandbox \"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\"" Sep 4 17:07:44.827025 containerd[1545]: 2024-09-04 17:07:44.790 [WARNING][5590] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0", GenerateName:"calico-kube-controllers-589588c958-", Namespace:"calico-system", SelfLink:"", UID:"5ff23458-425e-4765-99fb-5da7a5135579", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 7, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"589588c958", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ea645d054b9627948f7c51be217eb0d64201ccb057ad43fae0be057fb02a8f8", Pod:"calico-kube-controllers-589588c958-qszlt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0063b7323ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:07:44.827025 containerd[1545]: 2024-09-04 17:07:44.791 [INFO][5590] k8s.go 608: Cleaning up netns ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Sep 4 17:07:44.827025 containerd[1545]: 2024-09-04 17:07:44.791 [INFO][5590] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" iface="eth0" netns="" Sep 4 17:07:44.827025 containerd[1545]: 2024-09-04 17:07:44.791 [INFO][5590] k8s.go 615: Releasing IP address(es) ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Sep 4 17:07:44.827025 containerd[1545]: 2024-09-04 17:07:44.791 [INFO][5590] utils.go 188: Calico CNI releasing IP address ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Sep 4 17:07:44.827025 containerd[1545]: 2024-09-04 17:07:44.811 [INFO][5598] ipam_plugin.go 417: Releasing address using handleID ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" HandleID="k8s-pod-network.b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Workload="localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0" Sep 4 17:07:44.827025 containerd[1545]: 2024-09-04 17:07:44.811 [INFO][5598] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:07:44.827025 containerd[1545]: 2024-09-04 17:07:44.811 [INFO][5598] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:07:44.827025 containerd[1545]: 2024-09-04 17:07:44.820 [WARNING][5598] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" HandleID="k8s-pod-network.b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Workload="localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0" Sep 4 17:07:44.827025 containerd[1545]: 2024-09-04 17:07:44.820 [INFO][5598] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" HandleID="k8s-pod-network.b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Workload="localhost-k8s-calico--kube--controllers--589588c958--qszlt-eth0" Sep 4 17:07:44.827025 containerd[1545]: 2024-09-04 17:07:44.823 [INFO][5598] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:07:44.827025 containerd[1545]: 2024-09-04 17:07:44.825 [INFO][5590] k8s.go 621: Teardown processing complete. ContainerID="b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011" Sep 4 17:07:44.827432 containerd[1545]: time="2024-09-04T17:07:44.827070054Z" level=info msg="TearDown network for sandbox \"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\" successfully" Sep 4 17:07:44.829992 containerd[1545]: time="2024-09-04T17:07:44.829951438Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:07:44.830044 containerd[1545]: time="2024-09-04T17:07:44.830023839Z" level=info msg="RemovePodSandbox \"b65a44f7a86b36566065887fff9e834dcceda7a41a2f1571973c02f42a40c011\" returns successfully" Sep 4 17:07:44.830929 containerd[1545]: time="2024-09-04T17:07:44.830549963Z" level=info msg="StopPodSandbox for \"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\"" Sep 4 17:07:44.904299 containerd[1545]: 2024-09-04 17:07:44.872 [WARNING][5620] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--q2cb6-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"5b7b310b-92a2-4d6e-b549-69ce5288993d", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 6, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96", Pod:"coredns-5dd5756b68-q2cb6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib27e72d11ee", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:07:44.904299 containerd[1545]: 2024-09-04 17:07:44.872 [INFO][5620] k8s.go 608: Cleaning up netns ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Sep 4 17:07:44.904299 containerd[1545]: 2024-09-04 17:07:44.872 [INFO][5620] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" iface="eth0" netns="" Sep 4 17:07:44.904299 containerd[1545]: 2024-09-04 17:07:44.872 [INFO][5620] k8s.go 615: Releasing IP address(es) ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Sep 4 17:07:44.904299 containerd[1545]: 2024-09-04 17:07:44.872 [INFO][5620] utils.go 188: Calico CNI releasing IP address ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Sep 4 17:07:44.904299 containerd[1545]: 2024-09-04 17:07:44.890 [INFO][5628] ipam_plugin.go 417: Releasing address using handleID ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" HandleID="k8s-pod-network.76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Workload="localhost-k8s-coredns--5dd5756b68--q2cb6-eth0" Sep 4 17:07:44.904299 containerd[1545]: 2024-09-04 17:07:44.891 [INFO][5628] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:07:44.904299 containerd[1545]: 2024-09-04 17:07:44.891 [INFO][5628] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:07:44.904299 containerd[1545]: 2024-09-04 17:07:44.899 [WARNING][5628] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" HandleID="k8s-pod-network.76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Workload="localhost-k8s-coredns--5dd5756b68--q2cb6-eth0" Sep 4 17:07:44.904299 containerd[1545]: 2024-09-04 17:07:44.899 [INFO][5628] ipam_plugin.go 445: Releasing address using workloadID ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" HandleID="k8s-pod-network.76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Workload="localhost-k8s-coredns--5dd5756b68--q2cb6-eth0" Sep 4 17:07:44.904299 containerd[1545]: 2024-09-04 17:07:44.901 [INFO][5628] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:07:44.904299 containerd[1545]: 2024-09-04 17:07:44.902 [INFO][5620] k8s.go 621: Teardown processing complete. ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Sep 4 17:07:44.904825 containerd[1545]: time="2024-09-04T17:07:44.904347454Z" level=info msg="TearDown network for sandbox \"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\" successfully" Sep 4 17:07:44.904825 containerd[1545]: time="2024-09-04T17:07:44.904374374Z" level=info msg="StopPodSandbox for \"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\" returns successfully" Sep 4 17:07:44.905412 containerd[1545]: time="2024-09-04T17:07:44.905037779Z" level=info msg="RemovePodSandbox for \"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\"" Sep 4 17:07:44.905412 containerd[1545]: time="2024-09-04T17:07:44.905079580Z" level=info msg="Forcibly stopping sandbox \"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\"" Sep 4 17:07:44.970706 containerd[1545]: 2024-09-04 17:07:44.940 [WARNING][5651] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--q2cb6-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"5b7b310b-92a2-4d6e-b549-69ce5288993d", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 6, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3449af29c191c84cfceeb85fcc1a373fcf0f2217ea2d7b766bae674870071f96", Pod:"coredns-5dd5756b68-q2cb6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib27e72d11ee", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:07:44.970706 containerd[1545]: 2024-09-04 17:07:44.940 [INFO][5651] k8s.go 608: 
Cleaning up netns ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Sep 4 17:07:44.970706 containerd[1545]: 2024-09-04 17:07:44.940 [INFO][5651] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" iface="eth0" netns="" Sep 4 17:07:44.970706 containerd[1545]: 2024-09-04 17:07:44.940 [INFO][5651] k8s.go 615: Releasing IP address(es) ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Sep 4 17:07:44.970706 containerd[1545]: 2024-09-04 17:07:44.940 [INFO][5651] utils.go 188: Calico CNI releasing IP address ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Sep 4 17:07:44.970706 containerd[1545]: 2024-09-04 17:07:44.958 [INFO][5659] ipam_plugin.go 417: Releasing address using handleID ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" HandleID="k8s-pod-network.76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Workload="localhost-k8s-coredns--5dd5756b68--q2cb6-eth0" Sep 4 17:07:44.970706 containerd[1545]: 2024-09-04 17:07:44.958 [INFO][5659] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:07:44.970706 containerd[1545]: 2024-09-04 17:07:44.958 [INFO][5659] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:07:44.970706 containerd[1545]: 2024-09-04 17:07:44.966 [WARNING][5659] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" HandleID="k8s-pod-network.76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Workload="localhost-k8s-coredns--5dd5756b68--q2cb6-eth0" Sep 4 17:07:44.970706 containerd[1545]: 2024-09-04 17:07:44.966 [INFO][5659] ipam_plugin.go 445: Releasing address using workloadID ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" HandleID="k8s-pod-network.76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Workload="localhost-k8s-coredns--5dd5756b68--q2cb6-eth0" Sep 4 17:07:44.970706 containerd[1545]: 2024-09-04 17:07:44.967 [INFO][5659] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:07:44.970706 containerd[1545]: 2024-09-04 17:07:44.969 [INFO][5651] k8s.go 621: Teardown processing complete. ContainerID="76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571" Sep 4 17:07:44.971180 containerd[1545]: time="2024-09-04T17:07:44.970749243Z" level=info msg="TearDown network for sandbox \"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\" successfully" Sep 4 17:07:44.973476 containerd[1545]: time="2024-09-04T17:07:44.973428505Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:07:44.973560 containerd[1545]: time="2024-09-04T17:07:44.973496586Z" level=info msg="RemovePodSandbox \"76ff50b0924ffc2e0974bfce961b22cdb1218c583c3940e60d0b6d1cf3903571\" returns successfully" Sep 4 17:07:44.974106 containerd[1545]: time="2024-09-04T17:07:44.973917869Z" level=info msg="StopPodSandbox for \"4e98105d7b9f50aba56772453bca0bc9e0ecc00f25d5f564b6e92a18d8a5fbeb\"" Sep 4 17:07:44.974106 containerd[1545]: time="2024-09-04T17:07:44.974000870Z" level=info msg="TearDown network for sandbox \"4e98105d7b9f50aba56772453bca0bc9e0ecc00f25d5f564b6e92a18d8a5fbeb\" successfully" Sep 4 17:07:44.974106 containerd[1545]: time="2024-09-04T17:07:44.974040230Z" level=info msg="StopPodSandbox for \"4e98105d7b9f50aba56772453bca0bc9e0ecc00f25d5f564b6e92a18d8a5fbeb\" returns successfully" Sep 4 17:07:44.974549 containerd[1545]: time="2024-09-04T17:07:44.974403913Z" level=info msg="RemovePodSandbox for \"4e98105d7b9f50aba56772453bca0bc9e0ecc00f25d5f564b6e92a18d8a5fbeb\"" Sep 4 17:07:44.974549 containerd[1545]: time="2024-09-04T17:07:44.974443273Z" level=info msg="Forcibly stopping sandbox \"4e98105d7b9f50aba56772453bca0bc9e0ecc00f25d5f564b6e92a18d8a5fbeb\"" Sep 4 17:07:44.974549 containerd[1545]: time="2024-09-04T17:07:44.974506234Z" level=info msg="TearDown network for sandbox \"4e98105d7b9f50aba56772453bca0bc9e0ecc00f25d5f564b6e92a18d8a5fbeb\" successfully" Sep 4 17:07:44.978826 containerd[1545]: time="2024-09-04T17:07:44.978687228Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e98105d7b9f50aba56772453bca0bc9e0ecc00f25d5f564b6e92a18d8a5fbeb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:07:44.978826 containerd[1545]: time="2024-09-04T17:07:44.978752829Z" level=info msg="RemovePodSandbox \"4e98105d7b9f50aba56772453bca0bc9e0ecc00f25d5f564b6e92a18d8a5fbeb\" returns successfully" Sep 4 17:07:44.979117 containerd[1545]: time="2024-09-04T17:07:44.979084112Z" level=info msg="StopPodSandbox for \"fbb839990072419f38afad83267321bda37f104414003dd9c75f3ea37e327408\"" Sep 4 17:07:44.979220 containerd[1545]: time="2024-09-04T17:07:44.979171352Z" level=info msg="TearDown network for sandbox \"fbb839990072419f38afad83267321bda37f104414003dd9c75f3ea37e327408\" successfully" Sep 4 17:07:44.979220 containerd[1545]: time="2024-09-04T17:07:44.979211913Z" level=info msg="StopPodSandbox for \"fbb839990072419f38afad83267321bda37f104414003dd9c75f3ea37e327408\" returns successfully" Sep 4 17:07:44.981039 containerd[1545]: time="2024-09-04T17:07:44.979537915Z" level=info msg="RemovePodSandbox for \"fbb839990072419f38afad83267321bda37f104414003dd9c75f3ea37e327408\"" Sep 4 17:07:44.981039 containerd[1545]: time="2024-09-04T17:07:44.979562756Z" level=info msg="Forcibly stopping sandbox \"fbb839990072419f38afad83267321bda37f104414003dd9c75f3ea37e327408\"" Sep 4 17:07:44.981039 containerd[1545]: time="2024-09-04T17:07:44.979644036Z" level=info msg="TearDown network for sandbox \"fbb839990072419f38afad83267321bda37f104414003dd9c75f3ea37e327408\" successfully" Sep 4 17:07:44.984943 containerd[1545]: time="2024-09-04T17:07:44.984886040Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fbb839990072419f38afad83267321bda37f104414003dd9c75f3ea37e327408\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:07:44.985027 containerd[1545]: time="2024-09-04T17:07:44.984953120Z" level=info msg="RemovePodSandbox \"fbb839990072419f38afad83267321bda37f104414003dd9c75f3ea37e327408\" returns successfully" Sep 4 17:07:49.053407 systemd[1]: Started sshd@18-10.0.0.15:22-10.0.0.1:51518.service - OpenSSH per-connection server daemon (10.0.0.1:51518). Sep 4 17:07:49.088939 sshd[5683]: Accepted publickey for core from 10.0.0.1 port 51518 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:07:49.090363 sshd[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:07:49.094002 systemd-logind[1525]: New session 19 of user core. Sep 4 17:07:49.103552 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:07:49.234728 sshd[5683]: pam_unix(sshd:session): session closed for user core Sep 4 17:07:49.237943 systemd[1]: sshd@18-10.0.0.15:22-10.0.0.1:51518.service: Deactivated successfully. Sep 4 17:07:49.240009 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:07:49.240837 systemd-logind[1525]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:07:49.241851 systemd-logind[1525]: Removed session 19. Sep 4 17:07:54.249484 systemd[1]: Started sshd@19-10.0.0.15:22-10.0.0.1:58696.service - OpenSSH per-connection server daemon (10.0.0.1:58696). Sep 4 17:07:54.284913 sshd[5699]: Accepted publickey for core from 10.0.0.1 port 58696 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:07:54.286377 sshd[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:07:54.290072 systemd-logind[1525]: New session 20 of user core. Sep 4 17:07:54.295509 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:07:54.405763 sshd[5699]: pam_unix(sshd:session): session closed for user core Sep 4 17:07:54.409418 systemd[1]: sshd@19-10.0.0.15:22-10.0.0.1:58696.service: Deactivated successfully. 
Sep 4 17:07:54.411698 systemd-logind[1525]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:07:54.411709 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:07:54.413293 systemd-logind[1525]: Removed session 20. Sep 4 17:07:56.903736 kubelet[2666]: E0904 17:07:56.903016 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:07:59.416418 systemd[1]: Started sshd@20-10.0.0.15:22-10.0.0.1:58702.service - OpenSSH per-connection server daemon (10.0.0.1:58702). Sep 4 17:07:59.454906 sshd[5742]: Accepted publickey for core from 10.0.0.1 port 58702 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:07:59.456544 sshd[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:07:59.460977 systemd-logind[1525]: New session 21 of user core. Sep 4 17:07:59.478435 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:07:59.595343 sshd[5742]: pam_unix(sshd:session): session closed for user core Sep 4 17:07:59.598313 systemd[1]: sshd@20-10.0.0.15:22-10.0.0.1:58702.service: Deactivated successfully. Sep 4 17:07:59.600822 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:07:59.607646 systemd-logind[1525]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:07:59.608884 systemd-logind[1525]: Removed session 21.