Sep 4 17:11:47.957822 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 4 17:11:47.957845 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Sep 4 15:58:01 -00 2024
Sep 4 17:11:47.957856 kernel: KASLR enabled
Sep 4 17:11:47.957862 kernel: efi: EFI v2.7 by EDK II
Sep 4 17:11:47.957868 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 4 17:11:47.957874 kernel: random: crng init done
Sep 4 17:11:47.957882 kernel: ACPI: Early table checksum verification disabled
Sep 4 17:11:47.957888 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 4 17:11:47.957895 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 4 17:11:47.957903 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:11:47.957910 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:11:47.957916 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:11:47.957922 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:11:47.957929 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:11:47.957936 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:11:47.957945 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:11:47.957952 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:11:47.957958 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 4 17:11:47.957965 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 4 17:11:47.957972 kernel: NUMA: Failed to initialise from firmware
Sep 4 17:11:47.957979 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 4 17:11:47.957985 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Sep 4 17:11:47.957992 kernel: Zone ranges:
Sep 4 17:11:47.957999 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 4 17:11:47.958005 kernel: DMA32 empty
Sep 4 17:11:47.958013 kernel: Normal empty
Sep 4 17:11:47.958020 kernel: Movable zone start for each node
Sep 4 17:11:47.958026 kernel: Early memory node ranges
Sep 4 17:11:47.958033 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 4 17:11:47.958040 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 4 17:11:47.958047 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 4 17:11:47.958053 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 4 17:11:47.958060 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 4 17:11:47.958067 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 4 17:11:47.958073 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 4 17:11:47.958080 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 4 17:11:47.958086 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 4 17:11:47.958094 kernel: psci: probing for conduit method from ACPI.
Sep 4 17:11:47.958101 kernel: psci: PSCIv1.1 detected in firmware.
Sep 4 17:11:47.958108 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 4 17:11:47.958117 kernel: psci: Trusted OS migration not required
Sep 4 17:11:47.958124 kernel: psci: SMC Calling Convention v1.1
Sep 4 17:11:47.958131 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 4 17:11:47.958140 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 4 17:11:47.958147 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 4 17:11:47.958154 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 4 17:11:47.958161 kernel: Detected PIPT I-cache on CPU0
Sep 4 17:11:47.958169 kernel: CPU features: detected: GIC system register CPU interface
Sep 4 17:11:47.958176 kernel: CPU features: detected: Hardware dirty bit management
Sep 4 17:11:47.958183 kernel: CPU features: detected: Spectre-v4
Sep 4 17:11:47.958190 kernel: CPU features: detected: Spectre-BHB
Sep 4 17:11:47.958197 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 4 17:11:47.958205 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 4 17:11:47.958213 kernel: CPU features: detected: ARM erratum 1418040
Sep 4 17:11:47.958221 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 4 17:11:47.958228 kernel: alternatives: applying boot alternatives
Sep 4 17:11:47.958236 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=28a986328b36e7de6a755f88bb335afbeb3e3932bc9a20c5f8e57b952c2d23a9
Sep 4 17:11:47.958244 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 17:11:47.958251 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 17:11:47.958258 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 17:11:47.958265 kernel: Fallback order for Node 0: 0
Sep 4 17:11:47.958272 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 4 17:11:47.958279 kernel: Policy zone: DMA
Sep 4 17:11:47.958286 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 17:11:47.958294 kernel: software IO TLB: area num 4.
Sep 4 17:11:47.958302 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 4 17:11:47.958309 kernel: Memory: 2386596K/2572288K available (10240K kernel code, 2184K rwdata, 8084K rodata, 39296K init, 897K bss, 185692K reserved, 0K cma-reserved)
Sep 4 17:11:47.958317 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 4 17:11:47.958324 kernel: trace event string verifier disabled
Sep 4 17:11:47.958331 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 17:11:47.958338 kernel: rcu: RCU event tracing is enabled.
Sep 4 17:11:47.958346 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 4 17:11:47.958353 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 17:11:47.958360 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 17:11:47.958367 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:11:47.958374 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 4 17:11:47.958383 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 4 17:11:47.958390 kernel: GICv3: 256 SPIs implemented
Sep 4 17:11:47.958397 kernel: GICv3: 0 Extended SPIs implemented
Sep 4 17:11:47.958404 kernel: Root IRQ handler: gic_handle_irq
Sep 4 17:11:47.958412 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 4 17:11:47.958419 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 4 17:11:47.958426 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 4 17:11:47.958433 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Sep 4 17:11:47.958441 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Sep 4 17:11:47.958448 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 4 17:11:47.958455 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 4 17:11:47.958472 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 17:11:47.958479 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 17:11:47.958487 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 4 17:11:47.958494 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 4 17:11:47.958501 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 4 17:11:47.958508 kernel: arm-pv: using stolen time PV
Sep 4 17:11:47.958516 kernel: Console: colour dummy device 80x25
Sep 4 17:11:47.958523 kernel: ACPI: Core revision 20230628
Sep 4 17:11:47.958530 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 4 17:11:47.958538 kernel: pid_max: default: 32768 minimum: 301
Sep 4 17:11:47.958547 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 17:11:47.958554 kernel: landlock: Up and running.
Sep 4 17:11:47.958561 kernel: SELinux: Initializing.
Sep 4 17:11:47.958568 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:11:47.958576 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:11:47.958583 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:11:47.958590 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:11:47.958598 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 17:11:47.958605 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 17:11:47.958614 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 4 17:11:47.958621 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 4 17:11:47.958628 kernel: Remapping and enabling EFI services.
Sep 4 17:11:47.958635 kernel: smp: Bringing up secondary CPUs ...
Sep 4 17:11:47.958643 kernel: Detected PIPT I-cache on CPU1
Sep 4 17:11:47.958650 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 4 17:11:47.958658 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 4 17:11:47.958665 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 17:11:47.958672 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 4 17:11:47.958699 kernel: Detected PIPT I-cache on CPU2
Sep 4 17:11:47.958710 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 4 17:11:47.958717 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 4 17:11:47.958730 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 17:11:47.958739 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 4 17:11:47.958747 kernel: Detected PIPT I-cache on CPU3
Sep 4 17:11:47.958754 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 4 17:11:47.958762 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 4 17:11:47.958770 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 17:11:47.958777 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 4 17:11:47.958786 kernel: smp: Brought up 1 node, 4 CPUs
Sep 4 17:11:47.958794 kernel: SMP: Total of 4 processors activated.
Sep 4 17:11:47.958806 kernel: CPU features: detected: 32-bit EL0 Support
Sep 4 17:11:47.958816 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 4 17:11:47.958824 kernel: CPU features: detected: Common not Private translations
Sep 4 17:11:47.958832 kernel: CPU features: detected: CRC32 instructions
Sep 4 17:11:47.958840 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 4 17:11:47.958848 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 4 17:11:47.958858 kernel: CPU features: detected: LSE atomic instructions
Sep 4 17:11:47.958866 kernel: CPU features: detected: Privileged Access Never
Sep 4 17:11:47.958873 kernel: CPU features: detected: RAS Extension Support
Sep 4 17:11:47.958881 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 4 17:11:47.958889 kernel: CPU: All CPU(s) started at EL1
Sep 4 17:11:47.958897 kernel: alternatives: applying system-wide alternatives
Sep 4 17:11:47.958904 kernel: devtmpfs: initialized
Sep 4 17:11:47.958912 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 17:11:47.958920 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 4 17:11:47.958929 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 17:11:47.958937 kernel: SMBIOS 3.0.0 present.
Sep 4 17:11:47.958944 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 4 17:11:47.958952 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 17:11:47.958960 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 4 17:11:47.958968 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 4 17:11:47.958976 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 4 17:11:47.958983 kernel: audit: initializing netlink subsys (disabled)
Sep 4 17:11:47.958991 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Sep 4 17:11:47.959001 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 17:11:47.959009 kernel: cpuidle: using governor menu
Sep 4 17:11:47.959018 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 4 17:11:47.959028 kernel: ASID allocator initialised with 32768 entries
Sep 4 17:11:47.959037 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 17:11:47.959047 kernel: Serial: AMBA PL011 UART driver
Sep 4 17:11:47.959057 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 4 17:11:47.959066 kernel: Modules: 0 pages in range for non-PLT usage
Sep 4 17:11:47.959075 kernel: Modules: 509056 pages in range for PLT usage
Sep 4 17:11:47.959086 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 17:11:47.959094 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 17:11:47.959101 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 4 17:11:47.959109 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 4 17:11:47.959117 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 17:11:47.959124 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 17:11:47.959132 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 4 17:11:47.959140 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 4 17:11:47.959147 kernel: ACPI: Added _OSI(Module Device)
Sep 4 17:11:47.959157 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 17:11:47.959164 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 17:11:47.959172 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 17:11:47.959180 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 17:11:47.959188 kernel: ACPI: Interpreter enabled
Sep 4 17:11:47.959195 kernel: ACPI: Using GIC for interrupt routing
Sep 4 17:11:47.959203 kernel: ACPI: MCFG table detected, 1 entries
Sep 4 17:11:47.959211 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 4 17:11:47.959218 kernel: printk: console [ttyAMA0] enabled
Sep 4 17:11:47.959227 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 4 17:11:47.959368 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 17:11:47.959448 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 4 17:11:47.959541 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 4 17:11:47.959613 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 4 17:11:47.959682 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 4 17:11:47.959692 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 4 17:11:47.959703 kernel: PCI host bridge to bus 0000:00
Sep 4 17:11:47.959783 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 4 17:11:47.959859 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 4 17:11:47.959925 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 4 17:11:47.959989 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 4 17:11:47.960074 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 4 17:11:47.960154 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 4 17:11:47.960231 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 4 17:11:47.960303 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 4 17:11:47.960374 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 4 17:11:47.960445 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 4 17:11:47.960528 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 4 17:11:47.960603 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 4 17:11:47.960668 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 4 17:11:47.960733 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 4 17:11:47.960796 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 4 17:11:47.960814 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 4 17:11:47.960828 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 4 17:11:47.960837 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 4 17:11:47.960845 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 4 17:11:47.960853 kernel: iommu: Default domain type: Translated
Sep 4 17:11:47.960861 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 4 17:11:47.960873 kernel: efivars: Registered efivars operations
Sep 4 17:11:47.960881 kernel: vgaarb: loaded
Sep 4 17:11:47.960888 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 4 17:11:47.960897 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 17:11:47.960905 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 17:11:47.960913 kernel: pnp: PnP ACPI init
Sep 4 17:11:47.961007 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 4 17:11:47.961019 kernel: pnp: PnP ACPI: found 1 devices
Sep 4 17:11:47.961029 kernel: NET: Registered PF_INET protocol family
Sep 4 17:11:47.961037 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 17:11:47.961045 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 17:11:47.961053 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 17:11:47.961062 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 17:11:47.961070 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 17:11:47.961080 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 17:11:47.961089 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:11:47.961099 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:11:47.961110 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 17:11:47.961118 kernel: PCI: CLS 0 bytes, default 64
Sep 4 17:11:47.961126 kernel: kvm [1]: HYP mode not available
Sep 4 17:11:47.961134 kernel: Initialise system trusted keyrings
Sep 4 17:11:47.961142 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 17:11:47.961150 kernel: Key type asymmetric registered
Sep 4 17:11:47.961158 kernel: Asymmetric key parser 'x509' registered
Sep 4 17:11:47.961166 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 17:11:47.961174 kernel: io scheduler mq-deadline registered
Sep 4 17:11:47.961183 kernel: io scheduler kyber registered
Sep 4 17:11:47.961191 kernel: io scheduler bfq registered
Sep 4 17:11:47.961199 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 4 17:11:47.961206 kernel: ACPI: button: Power Button [PWRB]
Sep 4 17:11:47.961215 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 4 17:11:47.961296 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 4 17:11:47.961307 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 17:11:47.961315 kernel: thunder_xcv, ver 1.0
Sep 4 17:11:47.961323 kernel: thunder_bgx, ver 1.0
Sep 4 17:11:47.961333 kernel: nicpf, ver 1.0
Sep 4 17:11:47.961340 kernel: nicvf, ver 1.0
Sep 4 17:11:47.961421 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 4 17:11:47.961511 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-09-04T17:11:47 UTC (1725469907)
Sep 4 17:11:47.961523 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 4 17:11:47.961531 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 4 17:11:47.961538 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 4 17:11:47.961546 kernel: watchdog: Hard watchdog permanently disabled
Sep 4 17:11:47.961557 kernel: NET: Registered PF_INET6 protocol family
Sep 4 17:11:47.961565 kernel: Segment Routing with IPv6
Sep 4 17:11:47.961573 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 17:11:47.961580 kernel: NET: Registered PF_PACKET protocol family
Sep 4 17:11:47.961588 kernel: Key type dns_resolver registered
Sep 4 17:11:47.961596 kernel: registered taskstats version 1
Sep 4 17:11:47.961603 kernel: Loading compiled-in X.509 certificates
Sep 4 17:11:47.961611 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 6782952639b29daf968f5d0c3e73fb25e5af1d5e'
Sep 4 17:11:47.961619 kernel: Key type .fscrypt registered
Sep 4 17:11:47.961628 kernel: Key type fscrypt-provisioning registered
Sep 4 17:11:47.961636 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 17:11:47.961644 kernel: ima: Allocated hash algorithm: sha1
Sep 4 17:11:47.961651 kernel: ima: No architecture policies found
Sep 4 17:11:47.961659 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 4 17:11:47.961667 kernel: clk: Disabling unused clocks
Sep 4 17:11:47.961674 kernel: Freeing unused kernel memory: 39296K
Sep 4 17:11:47.961682 kernel: Run /init as init process
Sep 4 17:11:47.961689 kernel: with arguments:
Sep 4 17:11:47.961699 kernel: /init
Sep 4 17:11:47.961706 kernel: with environment:
Sep 4 17:11:47.961714 kernel: HOME=/
Sep 4 17:11:47.961722 kernel: TERM=linux
Sep 4 17:11:47.961729 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 17:11:47.961739 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:11:47.961749 systemd[1]: Detected virtualization kvm.
Sep 4 17:11:47.961758 systemd[1]: Detected architecture arm64.
Sep 4 17:11:47.961767 systemd[1]: Running in initrd.
Sep 4 17:11:47.961775 systemd[1]: No hostname configured, using default hostname.
Sep 4 17:11:47.961783 systemd[1]: Hostname set to .
Sep 4 17:11:47.961792 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:11:47.961800 systemd[1]: Queued start job for default target initrd.target.
Sep 4 17:11:47.961817 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:11:47.961826 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:11:47.961835 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:11:47.961846 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:11:47.961855 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:11:47.961864 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:11:47.961874 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:11:47.961883 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:11:47.961891 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:11:47.961900 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:11:47.961910 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:11:47.961918 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:11:47.961926 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:11:47.961935 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:11:47.961943 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:11:47.961951 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:11:47.961960 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:11:47.961968 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:11:47.961978 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:11:47.961986 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:11:47.961995 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:11:47.962003 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:11:47.962011 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:11:47.962020 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:11:47.962028 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:11:47.962039 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:11:47.962047 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:11:47.962057 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:11:47.962066 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:11:47.962074 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:11:47.962083 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:11:47.962091 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:11:47.962120 systemd-journald[237]: Collecting audit messages is disabled.
Sep 4 17:11:47.962142 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:11:47.962151 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:11:47.962162 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:11:47.962171 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:11:47.962180 systemd-journald[237]: Journal started
Sep 4 17:11:47.962199 systemd-journald[237]: Runtime Journal (/run/log/journal/0f1a3bbf136844428a1f446c510b6d60) is 5.9M, max 47.3M, 41.4M free.
Sep 4 17:11:47.943059 systemd-modules-load[238]: Inserted module 'overlay'
Sep 4 17:11:47.965207 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:11:47.966674 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:11:47.970049 systemd-modules-load[238]: Inserted module 'br_netfilter'
Sep 4 17:11:47.971156 kernel: Bridge firewalling registered
Sep 4 17:11:47.977730 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:11:47.979690 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 17:11:47.981698 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:11:47.988364 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:11:47.989688 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:11:47.992665 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:11:47.995849 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:11:47.998854 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:11:48.005909 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:11:48.008852 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:11:48.012784 dracut-cmdline[274]: dracut-dracut-053
Sep 4 17:11:48.016404 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=28a986328b36e7de6a755f88bb335afbeb3e3932bc9a20c5f8e57b952c2d23a9
Sep 4 17:11:48.042966 systemd-resolved[282]: Positive Trust Anchors:
Sep 4 17:11:48.042989 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:11:48.043021 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 17:11:48.048294 systemd-resolved[282]: Defaulting to hostname 'linux'.
Sep 4 17:11:48.051996 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:11:48.054522 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:11:48.101384 kernel: SCSI subsystem initialized
Sep 4 17:11:48.108487 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:11:48.117505 kernel: iscsi: registered transport (tcp)
Sep 4 17:11:48.131493 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:11:48.131548 kernel: QLogic iSCSI HBA Driver
Sep 4 17:11:48.174859 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:11:48.187624 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:11:48.206696 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:11:48.206749 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:11:48.208374 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:11:48.254549 kernel: raid6: neonx8 gen() 15369 MB/s
Sep 4 17:11:48.271501 kernel: raid6: neonx4 gen() 15142 MB/s
Sep 4 17:11:48.288492 kernel: raid6: neonx2 gen() 12903 MB/s
Sep 4 17:11:48.305489 kernel: raid6: neonx1 gen() 10283 MB/s
Sep 4 17:11:48.322489 kernel: raid6: int64x8 gen() 6676 MB/s
Sep 4 17:11:48.339489 kernel: raid6: int64x4 gen() 7211 MB/s
Sep 4 17:11:48.356495 kernel: raid6: int64x2 gen() 5963 MB/s
Sep 4 17:11:48.373490 kernel: raid6: int64x1 gen() 4965 MB/s
Sep 4 17:11:48.373513 kernel: raid6: using algorithm neonx8 gen() 15369 MB/s
Sep 4 17:11:48.390530 kernel: raid6: .... xor() 11690 MB/s, rmw enabled
Sep 4 17:11:48.390552 kernel: raid6: using neon recovery algorithm
Sep 4 17:11:48.396577 kernel: xor: measuring software checksum speed
Sep 4 17:11:48.396599 kernel: 8regs : 19334 MB/sec
Sep 4 17:11:48.397700 kernel: 32regs : 18971 MB/sec
Sep 4 17:11:48.398541 kernel: arm64_neon : 27215 MB/sec
Sep 4 17:11:48.398555 kernel: xor: using function: arm64_neon (27215 MB/sec)
Sep 4 17:11:48.453510 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:11:48.464249 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:11:48.473710 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:11:48.487919 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Sep 4 17:11:48.491568 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:11:48.499627 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:11:48.517261 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Sep 4 17:11:48.548736 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:11:48.559656 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:11:48.602536 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:11:48.612626 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:11:48.631718 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:11:48.633246 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:11:48.635055 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:11:48.637455 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:11:48.645620 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:11:48.657535 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:11:48.659946 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 4 17:11:48.660180 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 4 17:11:48.670518 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:11:48.670572 kernel: GPT:9289727 != 19775487
Sep 4 17:11:48.670584 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:11:48.670594 kernel: GPT:9289727 != 19775487
Sep 4 17:11:48.670611 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:11:48.670623 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:11:48.669963 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:11:48.670084 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:11:48.675599 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:11:48.677217 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:11:48.677374 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:11:48.679710 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:11:48.690743 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:11:48.702476 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (517)
Sep 4 17:11:48.702526 kernel: BTRFS: device fsid 3e706a0f-a579-4862-bc52-e66e95e66d87 devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (507)
Sep 4 17:11:48.706087 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 17:11:48.709773 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:11:48.721791 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 17:11:48.726654 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 17:11:48.730667 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 17:11:48.731851 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 17:11:48.752652 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:11:48.754683 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:11:48.764323 disk-uuid[553]: Primary Header is updated.
Sep 4 17:11:48.764323 disk-uuid[553]: Secondary Entries is updated.
Sep 4 17:11:48.764323 disk-uuid[553]: Secondary Header is updated.
Sep 4 17:11:48.769508 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:11:48.786472 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:11:49.786488 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:11:49.786707 disk-uuid[554]: The operation has completed successfully.
Sep 4 17:11:49.812686 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:11:49.812840 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:11:49.829639 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:11:49.833566 sh[576]: Success
Sep 4 17:11:49.851517 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 4 17:11:49.881225 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:11:49.896935 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:11:49.899003 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:11:49.911004 kernel: BTRFS info (device dm-0): first mount of filesystem 3e706a0f-a579-4862-bc52-e66e95e66d87
Sep 4 17:11:49.911044 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:11:49.911055 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:11:49.912100 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:11:49.912759 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:11:49.917065 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:11:49.918486 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:11:49.919338 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:11:49.921939 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:11:49.933113 kernel: BTRFS info (device vda6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e
Sep 4 17:11:49.933181 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:11:49.933193 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:11:49.936666 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:11:49.948162 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:11:49.950579 kernel: BTRFS info (device vda6): last unmount of filesystem e85e5091-8620-4def-b250-7009f4048f6e
Sep 4 17:11:49.958510 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:11:49.969650 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:11:50.024510 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:11:50.034681 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:11:50.064314 systemd-networkd[767]: lo: Link UP
Sep 4 17:11:50.064327 systemd-networkd[767]: lo: Gained carrier
Sep 4 17:11:50.065177 systemd-networkd[767]: Enumeration completed
Sep 4 17:11:50.065751 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:11:50.065754 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:11:50.066437 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:11:50.066505 systemd-networkd[767]: eth0: Link UP
Sep 4 17:11:50.071400 ignition[677]: Ignition 2.19.0
Sep 4 17:11:50.066508 systemd-networkd[767]: eth0: Gained carrier
Sep 4 17:11:50.071406 ignition[677]: Stage: fetch-offline
Sep 4 17:11:50.066515 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:11:50.071438 ignition[677]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:11:50.070027 systemd[1]: Reached target network.target - Network.
Sep 4 17:11:50.071447 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:11:50.071609 ignition[677]: parsed url from cmdline: ""
Sep 4 17:11:50.071613 ignition[677]: no config URL provided
Sep 4 17:11:50.071618 ignition[677]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:11:50.071625 ignition[677]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:11:50.071650 ignition[677]: op(1): [started] loading QEMU firmware config module
Sep 4 17:11:50.071655 ignition[677]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 4 17:11:50.087430 ignition[677]: op(1): [finished] loading QEMU firmware config module
Sep 4 17:11:50.089507 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 17:11:50.127132 ignition[677]: parsing config with SHA512: 4bd8f072294745694e2efab1476fd171e8e4f06b14f667a02a93996802f105caf24ab72db742ad3ee308d8a12181adb5365046512d59f8b2c4ae56a58c1d5368
Sep 4 17:11:50.131497 unknown[677]: fetched base config from "system"
Sep 4 17:11:50.131507 unknown[677]: fetched user config from "qemu"
Sep 4 17:11:50.131892 ignition[677]: fetch-offline: fetch-offline passed
Sep 4 17:11:50.133998 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:11:50.131954 ignition[677]: Ignition finished successfully
Sep 4 17:11:50.135374 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 4 17:11:50.142608 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:11:50.153519 ignition[772]: Ignition 2.19.0
Sep 4 17:11:50.153530 ignition[772]: Stage: kargs
Sep 4 17:11:50.153709 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:11:50.153719 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:11:50.157044 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:11:50.154604 ignition[772]: kargs: kargs passed
Sep 4 17:11:50.154650 ignition[772]: Ignition finished successfully
Sep 4 17:11:50.163652 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:11:50.173519 ignition[780]: Ignition 2.19.0
Sep 4 17:11:50.173530 ignition[780]: Stage: disks
Sep 4 17:11:50.173695 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:11:50.173705 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:11:50.174581 ignition[780]: disks: disks passed
Sep 4 17:11:50.176887 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:11:50.174629 ignition[780]: Ignition finished successfully
Sep 4 17:11:50.178309 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:11:50.179920 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:11:50.181478 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:11:50.183342 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:11:50.185186 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:11:50.187532 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:11:50.201176 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 17:11:50.205614 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:11:50.213652 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:11:50.263487 kernel: EXT4-fs (vda9): mounted filesystem 901d46b0-2319-4536-8a6d-46889db73e8c r/w with ordered data mode. Quota mode: none.
Sep 4 17:11:50.264003 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:11:50.265323 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:11:50.275535 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:11:50.277177 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:11:50.278539 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:11:50.278580 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:11:50.285098 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799)
Sep 4 17:11:50.278601 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:11:50.288888 kernel: BTRFS info (device vda6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e
Sep 4 17:11:50.288907 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:11:50.288918 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:11:50.285550 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:11:50.290322 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:11:50.293487 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:11:50.294575 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:11:50.332607 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:11:50.336932 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:11:50.340800 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:11:50.344638 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:11:50.411552 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:11:50.421568 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:11:50.423098 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:11:50.428475 kernel: BTRFS info (device vda6): last unmount of filesystem e85e5091-8620-4def-b250-7009f4048f6e
Sep 4 17:11:50.443093 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:11:50.444847 ignition[911]: INFO : Ignition 2.19.0
Sep 4 17:11:50.444847 ignition[911]: INFO : Stage: mount
Sep 4 17:11:50.444847 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:11:50.444847 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:11:50.449248 ignition[911]: INFO : mount: mount passed
Sep 4 17:11:50.449248 ignition[911]: INFO : Ignition finished successfully
Sep 4 17:11:50.446801 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:11:50.457597 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:11:50.909236 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:11:50.927763 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:11:50.943186 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (924)
Sep 4 17:11:50.943235 kernel: BTRFS info (device vda6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e
Sep 4 17:11:50.943248 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:11:50.944779 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:11:50.948484 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:11:50.950072 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:11:50.980991 ignition[941]: INFO : Ignition 2.19.0
Sep 4 17:11:50.980991 ignition[941]: INFO : Stage: files
Sep 4 17:11:50.983366 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:11:50.983366 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:11:50.983366 ignition[941]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:11:50.983366 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:11:50.983366 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:11:50.989505 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:11:50.989505 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:11:50.989505 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:11:50.987932 unknown[941]: wrote ssh authorized keys file for user: core
Sep 4 17:11:51.000237 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:11:51.000237 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 4 17:11:51.037317 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 17:11:51.094042 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:11:51.096196 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:11:51.096196 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:11:51.096196 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:11:51.096196 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:11:51.096196 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:11:51.096196 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:11:51.096196 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:11:51.096196 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:11:51.096196 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:11:51.096196 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:11:51.096196 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Sep 4 17:11:51.096196 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Sep 4 17:11:51.096196 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Sep 4 17:11:51.096196 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Sep 4 17:11:51.174785 systemd-networkd[767]: eth0: Gained IPv6LL
Sep 4 17:11:51.468537 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 17:11:52.018660 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Sep 4 17:11:52.018660 ignition[941]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 17:11:52.023203 ignition[941]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:11:52.025176 ignition[941]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:11:52.025176 ignition[941]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 17:11:52.025176 ignition[941]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 4 17:11:52.025176 ignition[941]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 17:11:52.025176 ignition[941]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 17:11:52.025176 ignition[941]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 4 17:11:52.025176 ignition[941]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 4 17:11:52.066951 ignition[941]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 17:11:52.071189 ignition[941]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 17:11:52.073074 ignition[941]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 4 17:11:52.073074 ignition[941]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:11:52.073074 ignition[941]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:11:52.073074 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:11:52.073074 ignition[941]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:11:52.073074 ignition[941]: INFO : files: files passed
Sep 4 17:11:52.073074 ignition[941]: INFO : Ignition finished successfully
Sep 4 17:11:52.074132 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:11:52.086625 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:11:52.090240 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:11:52.097532 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:11:52.099435 initrd-setup-root-after-ignition[968]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 4 17:11:52.099502 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:11:52.102742 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:11:52.102742 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:11:52.105704 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:11:52.106495 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:11:52.108696 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:11:52.121655 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:11:52.144139 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:11:52.144250 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:11:52.146384 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:11:52.148818 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:11:52.150925 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:11:52.151753 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:11:52.168670 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:11:52.182700 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:11:52.190910 systemd[1]: Stopped target network.target - Network.
Sep 4 17:11:52.191898 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:11:52.193672 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:11:52.196005 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:11:52.197935 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:11:52.198062 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:11:52.200842 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:11:52.202031 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:11:52.204065 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 17:11:52.206065 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:11:52.208043 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 17:11:52.210133 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 17:11:52.212141 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:11:52.214384 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 17:11:52.216232 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 17:11:52.218963 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 17:11:52.220605 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 17:11:52.220739 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:11:52.223279 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:11:52.225360 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:11:52.227396 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 17:11:52.231533 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:11:52.232845 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 17:11:52.232979 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:11:52.237175 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 17:11:52.237299 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:11:52.239492 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 17:11:52.240881 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 17:11:52.246543 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:11:52.247917 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 17:11:52.249884 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 17:11:52.251439 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 17:11:52.251549 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:11:52.253102 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 17:11:52.253188 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:11:52.254828 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 17:11:52.254948 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:11:52.256838 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 17:11:52.256947 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 17:11:52.267670 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 17:11:52.268726 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 17:11:52.268898 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:11:52.274712 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 17:11:52.276184 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 17:11:52.278822 ignition[995]: INFO : Ignition 2.19.0
Sep 4 17:11:52.278822 ignition[995]: INFO : Stage: umount
Sep 4 17:11:52.280442 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:11:52.280442 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:11:52.280442 ignition[995]: INFO : umount: umount passed
Sep 4 17:11:52.280442 ignition[995]: INFO : Ignition finished successfully
Sep 4 17:11:52.281530 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 17:11:52.291495 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 17:11:52.291670 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:11:52.295193 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 17:11:52.295316 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:11:52.300934 systemd-networkd[767]: eth0: DHCPv6 lease lost
Sep 4 17:11:52.301738 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:11:52.302600 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 17:11:52.302696 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 17:11:52.305682 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 17:11:52.305795 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 17:11:52.309954 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 17:11:52.310080 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 17:11:52.313122 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 17:11:52.313208 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 17:11:52.316593 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 17:11:52.316626 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:11:52.318329 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 17:11:52.318380 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 17:11:52.320340 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 17:11:52.320387 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 17:11:52.322258 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 17:11:52.322309 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 17:11:52.324118 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 17:11:52.324163 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 17:11:52.341629 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 17:11:52.342534 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 17:11:52.342613 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:11:52.344608 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:11:52.344657 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:11:52.346388 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 17:11:52.346433 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:11:52.348404 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 17:11:52.348448 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:11:52.350568 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:11:52.361197 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 17:11:52.361324 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 17:11:52.364074 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 17:11:52.364207 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:11:52.366714 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 17:11:52.366772 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:11:52.369050 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 17:11:52.369087 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:11:52.372141 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 17:11:52.372191 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:11:52.375348 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 17:11:52.375400 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:11:52.378311 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:11:52.378359 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:11:52.388644 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 17:11:52.389694 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 17:11:52.389755 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:11:52.392055 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:11:52.392104 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:11:52.394316 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 17:11:52.395503 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 17:11:52.397300 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 17:11:52.397384 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 17:11:52.400110 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 17:11:52.401795 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:11:52.401866 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:11:52.404857 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:11:52.419585 systemd[1]: Switching root. Sep 4 17:11:52.452895 systemd-journald[237]: Journal stopped Sep 4 17:11:53.302081 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Sep 4 17:11:53.302140 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 17:11:53.302153 kernel: SELinux: policy capability open_perms=1 Sep 4 17:11:53.302164 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 17:11:53.302183 kernel: SELinux: policy capability always_check_network=0 Sep 4 17:11:53.302193 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 17:11:53.302204 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 17:11:53.302214 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 17:11:53.302224 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 17:11:53.302234 kernel: audit: type=1403 audit(1725469912.619:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 17:11:53.302245 systemd[1]: Successfully loaded SELinux policy in 40.300ms. Sep 4 17:11:53.302271 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.839ms. Sep 4 17:11:53.302285 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:11:53.302296 systemd[1]: Detected virtualization kvm. Sep 4 17:11:53.302307 systemd[1]: Detected architecture arm64. Sep 4 17:11:53.302317 systemd[1]: Detected first boot. 
Sep 4 17:11:53.302328 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:11:53.302340 zram_generator::config[1039]: No configuration found. Sep 4 17:11:53.302351 systemd[1]: Populated /etc with preset unit settings. Sep 4 17:11:53.302362 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 17:11:53.302446 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 17:11:53.302484 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 17:11:53.302497 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 17:11:53.302509 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 17:11:53.302519 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 17:11:53.302530 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 17:11:53.302544 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 17:11:53.302555 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 17:11:53.302566 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 17:11:53.302579 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 17:11:53.302612 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:11:53.302627 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:11:53.302654 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 17:11:53.302668 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 17:11:53.302678 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Sep 4 17:11:53.302689 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:11:53.302700 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 4 17:11:53.302711 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:11:53.302726 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 17:11:53.302737 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 17:11:53.302748 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 17:11:53.302759 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 17:11:53.302771 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:11:53.302788 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:11:53.302800 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:11:53.302812 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:11:53.302824 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 17:11:53.302837 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 17:11:53.302849 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:11:53.302860 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:11:53.302871 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:11:53.302881 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 17:11:53.302892 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 17:11:53.302903 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 17:11:53.302914 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 17:11:53.302926 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Sep 4 17:11:53.302937 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 17:11:53.302948 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 17:11:53.302962 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 17:11:53.302981 systemd[1]: Reached target machines.target - Containers. Sep 4 17:11:53.302996 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 17:11:53.303007 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:11:53.303018 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:11:53.303036 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 17:11:53.303047 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:11:53.303059 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:11:53.303070 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:11:53.303081 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 17:11:53.303093 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:11:53.303104 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 17:11:53.303115 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 17:11:53.303126 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 17:11:53.303138 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 17:11:53.303149 systemd[1]: Stopped systemd-fsck-usr.service. 
Sep 4 17:11:53.303160 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:11:53.303170 kernel: fuse: init (API version 7.39) Sep 4 17:11:53.303181 kernel: loop: module loaded Sep 4 17:11:53.303192 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:11:53.303203 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 17:11:53.303214 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 17:11:53.303224 kernel: ACPI: bus type drm_connector registered Sep 4 17:11:53.303238 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:11:53.303249 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 17:11:53.303260 systemd[1]: Stopped verity-setup.service. Sep 4 17:11:53.303270 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 17:11:53.303281 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 17:11:53.303317 systemd-journald[1105]: Collecting audit messages is disabled. Sep 4 17:11:53.303337 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 17:11:53.303350 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 17:11:53.303364 systemd-journald[1105]: Journal started Sep 4 17:11:53.303386 systemd-journald[1105]: Runtime Journal (/run/log/journal/0f1a3bbf136844428a1f446c510b6d60) is 5.9M, max 47.3M, 41.4M free. Sep 4 17:11:53.053756 systemd[1]: Queued start job for default target multi-user.target. Sep 4 17:11:53.068167 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 4 17:11:53.068612 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 17:11:53.306672 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:11:53.307498 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Sep 4 17:11:53.308876 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 17:11:53.311497 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 17:11:53.313103 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:11:53.315924 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 17:11:53.316084 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 17:11:53.317657 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:11:53.317815 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:11:53.319287 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:11:53.319508 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:11:53.320886 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:11:53.321023 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:11:53.322680 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 17:11:53.322850 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 17:11:53.324594 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:11:53.324733 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:11:53.326315 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:11:53.327867 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 17:11:53.329614 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 17:11:53.342947 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 17:11:53.357584 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Sep 4 17:11:53.359914 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 17:11:53.361172 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 17:11:53.361222 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:11:53.363318 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 4 17:11:53.365977 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 17:11:53.368620 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 17:11:53.369888 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:11:53.371768 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 17:11:53.374183 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 17:11:53.375524 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:11:53.379746 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 17:11:53.381112 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:11:53.382717 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:11:53.385749 systemd-journald[1105]: Time spent on flushing to /var/log/journal/0f1a3bbf136844428a1f446c510b6d60 is 27.327ms for 853 entries. Sep 4 17:11:53.385749 systemd-journald[1105]: System Journal (/var/log/journal/0f1a3bbf136844428a1f446c510b6d60) is 8.0M, max 195.6M, 187.6M free. Sep 4 17:11:53.422284 systemd-journald[1105]: Received client request to flush runtime journal. 
Sep 4 17:11:53.422326 kernel: loop0: detected capacity change from 0 to 194096 Sep 4 17:11:53.391029 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 17:11:53.397721 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 17:11:53.400579 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:11:53.402869 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 17:11:53.404057 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 17:11:53.405601 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 17:11:53.408812 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 17:11:53.413835 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 17:11:53.426348 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 4 17:11:53.431815 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 17:11:53.439373 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 17:11:53.433664 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 17:11:53.441540 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:11:53.451285 udevadm[1162]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 4 17:11:53.459504 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 17:11:53.466711 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:11:53.469382 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Sep 4 17:11:53.472022 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 4 17:11:53.479499 kernel: loop1: detected capacity change from 0 to 65520 Sep 4 17:11:53.503268 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Sep 4 17:11:53.503288 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Sep 4 17:11:53.508372 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:11:53.514489 kernel: loop2: detected capacity change from 0 to 114288 Sep 4 17:11:53.570514 kernel: loop3: detected capacity change from 0 to 194096 Sep 4 17:11:53.588491 kernel: loop4: detected capacity change from 0 to 65520 Sep 4 17:11:53.608985 kernel: loop5: detected capacity change from 0 to 114288 Sep 4 17:11:53.619233 (sd-merge)[1174]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 4 17:11:53.619666 (sd-merge)[1174]: Merged extensions into '/usr'. Sep 4 17:11:53.624914 systemd[1]: Reloading requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 17:11:53.624930 systemd[1]: Reloading... Sep 4 17:11:53.689706 zram_generator::config[1198]: No configuration found. Sep 4 17:11:53.727528 ldconfig[1144]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 17:11:53.787099 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:11:53.824551 systemd[1]: Reloading finished in 199 ms. Sep 4 17:11:53.856502 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 17:11:53.857924 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 17:11:53.874793 systemd[1]: Starting ensure-sysext.service... 
Sep 4 17:11:53.877215 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:11:53.908877 systemd[1]: Reloading requested from client PID 1232 ('systemctl') (unit ensure-sysext.service)... Sep 4 17:11:53.909042 systemd[1]: Reloading... Sep 4 17:11:53.911760 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 17:11:53.912041 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 17:11:53.912741 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 17:11:53.912975 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Sep 4 17:11:53.913029 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Sep 4 17:11:53.915563 systemd-tmpfiles[1234]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:11:53.915575 systemd-tmpfiles[1234]: Skipping /boot Sep 4 17:11:53.922841 systemd-tmpfiles[1234]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:11:53.922857 systemd-tmpfiles[1234]: Skipping /boot Sep 4 17:11:53.957614 zram_generator::config[1259]: No configuration found. Sep 4 17:11:54.049579 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:11:54.087035 systemd[1]: Reloading finished in 177 ms. Sep 4 17:11:54.101909 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 17:11:54.110144 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:11:54.117528 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:11:54.120207 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Sep 4 17:11:54.122820 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 17:11:54.129450 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:11:54.132679 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:11:54.137857 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 17:11:54.144162 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 17:11:54.146988 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:11:54.153535 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:11:54.159028 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:11:54.168292 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:11:54.169561 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:11:54.170583 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:11:54.175617 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:11:54.175825 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:11:54.177573 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:11:54.177734 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:11:54.179527 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:11:54.179698 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:11:54.187641 systemd-udevd[1301]: Using default interface naming scheme 'v255'. 
Sep 4 17:11:54.190428 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:11:54.218830 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:11:54.221783 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:11:54.229934 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:11:54.231012 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:11:54.237839 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:11:54.240918 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:11:54.244295 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:11:54.246235 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:11:54.249844 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:11:54.251547 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:11:54.251688 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:11:54.253995 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:11:54.254291 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:11:54.257098 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:11:54.257231 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:11:54.283663 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 17:11:54.288912 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 4 17:11:54.309492 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1339) Sep 4 17:11:54.312824 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:11:54.316107 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:11:54.329251 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:11:54.332679 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:11:54.334020 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:11:54.341869 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1335) Sep 4 17:11:54.341927 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1339) Sep 4 17:11:54.342846 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:11:54.347970 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 17:11:54.349097 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:11:54.352181 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:11:54.354139 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:11:54.354359 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:11:54.357514 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:11:54.359496 augenrules[1367]: No rules Sep 4 17:11:54.359630 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:11:54.361667 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Sep 4 17:11:54.364719 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:11:54.364910 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:11:54.367254 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 4 17:11:54.369518 systemd[1]: Finished ensure-sysext.service. Sep 4 17:11:54.385406 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:11:54.385519 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:11:54.391765 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 4 17:11:54.395385 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 4 17:11:54.410673 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:11:54.452422 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:11:54.453007 systemd-resolved[1299]: Positive Trust Anchors: Sep 4 17:11:54.453024 systemd-resolved[1299]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:11:54.453058 systemd-resolved[1299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:11:54.461203 systemd-resolved[1299]: Defaulting to hostname 'linux'. Sep 4 17:11:54.462745 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:11:54.464290 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:11:54.475247 systemd-networkd[1366]: lo: Link UP Sep 4 17:11:54.475416 systemd-networkd[1366]: lo: Gained carrier Sep 4 17:11:54.476181 systemd-networkd[1366]: Enumeration completed Sep 4 17:11:54.482840 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:11:54.484328 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 4 17:11:54.485746 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:11:54.485851 systemd-networkd[1366]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:11:54.486120 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:11:54.488188 systemd[1]: Reached target network.target - Network. 
Sep 4 17:11:54.488990 systemd-networkd[1366]: eth0: Link UP Sep 4 17:11:54.489071 systemd-networkd[1366]: eth0: Gained carrier Sep 4 17:11:54.489129 systemd-networkd[1366]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:11:54.489617 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:11:54.492167 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:11:54.505536 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:11:54.509742 systemd-networkd[1366]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 4 17:11:54.511075 systemd-timesyncd[1380]: Network configuration changed, trying to establish connection. Sep 4 17:11:54.978751 systemd-resolved[1299]: Clock change detected. Flushing caches. Sep 4 17:11:54.979210 systemd-timesyncd[1380]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 4 17:11:54.979402 systemd-timesyncd[1380]: Initial clock synchronization to Wed 2024-09-04 17:11:54.978561 UTC. Sep 4 17:11:54.985459 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:11:55.009456 lvm[1393]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:11:55.018630 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:11:55.049885 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:11:55.051373 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:11:55.052472 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:11:55.053607 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Sep 4 17:11:55.054801 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:11:55.056135 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:11:55.057274 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:11:55.058557 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:11:55.059727 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:11:55.059763 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:11:55.060596 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:11:55.062237 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:11:55.064588 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:11:55.073209 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:11:55.075390 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:11:55.076886 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:11:55.077996 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:11:55.078939 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:11:55.079663 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:11:55.079692 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:11:55.080678 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:11:55.082660 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:11:55.083536 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Sep 4 17:11:55.086152 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:11:55.092112 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:11:55.094539 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:11:55.098332 jq[1403]: false Sep 4 17:11:55.098409 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:11:55.102477 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:11:55.106497 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:11:55.108002 extend-filesystems[1404]: Found loop3 Sep 4 17:11:55.108002 extend-filesystems[1404]: Found loop4 Sep 4 17:11:55.108002 extend-filesystems[1404]: Found loop5 Sep 4 17:11:55.108002 extend-filesystems[1404]: Found vda Sep 4 17:11:55.108002 extend-filesystems[1404]: Found vda1 Sep 4 17:11:55.108002 extend-filesystems[1404]: Found vda2 Sep 4 17:11:55.108002 extend-filesystems[1404]: Found vda3 Sep 4 17:11:55.108002 extend-filesystems[1404]: Found usr Sep 4 17:11:55.108002 extend-filesystems[1404]: Found vda4 Sep 4 17:11:55.108002 extend-filesystems[1404]: Found vda6 Sep 4 17:11:55.108002 extend-filesystems[1404]: Found vda7 Sep 4 17:11:55.108002 extend-filesystems[1404]: Found vda9 Sep 4 17:11:55.108002 extend-filesystems[1404]: Checking size of /dev/vda9 Sep 4 17:11:55.129286 extend-filesystems[1404]: Resized partition /dev/vda9 Sep 4 17:11:55.117756 dbus-daemon[1402]: [system] SELinux support is enabled Sep 4 17:11:55.111926 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:11:55.118340 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 4 17:11:55.124387 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:11:55.124950 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:11:55.131728 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:11:55.139606 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:11:55.141945 extend-filesystems[1424]: resize2fs 1.47.1 (20-May-2024) Sep 4 17:11:55.142379 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:11:55.144227 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1335) Sep 4 17:11:55.145971 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:11:55.147390 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 4 17:11:55.148818 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:11:55.151270 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:11:55.151593 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:11:55.151685 jq[1425]: true Sep 4 17:11:55.151755 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:11:55.158292 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:11:55.158464 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 4 17:11:55.182873 (ntainerd)[1431]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:11:55.185264 jq[1429]: true Sep 4 17:11:55.190236 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 4 17:11:55.190294 tar[1428]: linux-arm64/helm Sep 4 17:11:55.203478 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:11:55.211413 update_engine[1423]: I0904 17:11:55.205947 1423 main.cc:92] Flatcar Update Engine starting Sep 4 17:11:55.211413 update_engine[1423]: I0904 17:11:55.208703 1423 update_check_scheduler.cc:74] Next update check in 4m45s Sep 4 17:11:55.203505 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:11:55.216638 extend-filesystems[1424]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 4 17:11:55.216638 extend-filesystems[1424]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 17:11:55.216638 extend-filesystems[1424]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 4 17:11:55.204957 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:11:55.231263 extend-filesystems[1404]: Resized filesystem in /dev/vda9 Sep 4 17:11:55.204974 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:11:55.208639 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:11:55.217360 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 4 17:11:55.220273 systemd-logind[1417]: Watching system buttons on /dev/input/event0 (Power Button) Sep 4 17:11:55.220648 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:11:55.220852 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:11:55.222481 systemd-logind[1417]: New seat seat0. Sep 4 17:11:55.227297 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:11:55.262361 bash[1458]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:11:55.263838 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:11:55.265899 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 17:11:55.322456 locksmithd[1443]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:11:55.353067 sshd_keygen[1422]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:11:55.373238 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:11:55.386536 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:11:55.394107 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:11:55.394487 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:11:55.407226 containerd[1431]: time="2024-09-04T17:11:55.404161067Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20 Sep 4 17:11:55.409521 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:11:55.420569 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:11:55.427516 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:11:55.430345 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 4 17:11:55.432057 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 4 17:11:55.434225 containerd[1431]: time="2024-09-04T17:11:55.434170827Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:55.435521 containerd[1431]: time="2024-09-04T17:11:55.435459267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:11:55.435521 containerd[1431]: time="2024-09-04T17:11:55.435493867Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:11:55.435521 containerd[1431]: time="2024-09-04T17:11:55.435516587Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:11:55.435699 containerd[1431]: time="2024-09-04T17:11:55.435679067Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:11:55.435724 containerd[1431]: time="2024-09-04T17:11:55.435703907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:55.435774 containerd[1431]: time="2024-09-04T17:11:55.435759867Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:11:55.435798 containerd[1431]: time="2024-09-04T17:11:55.435775187Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:55.435961 containerd[1431]: time="2024-09-04T17:11:55.435931587Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:11:55.435961 containerd[1431]: time="2024-09-04T17:11:55.435952427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:55.436009 containerd[1431]: time="2024-09-04T17:11:55.435966227Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:11:55.436009 containerd[1431]: time="2024-09-04T17:11:55.435976267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:55.436080 containerd[1431]: time="2024-09-04T17:11:55.436063827Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:55.436286 containerd[1431]: time="2024-09-04T17:11:55.436268387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:55.436393 containerd[1431]: time="2024-09-04T17:11:55.436376187Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:11:55.436418 containerd[1431]: time="2024-09-04T17:11:55.436393147Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:11:55.436493 containerd[1431]: time="2024-09-04T17:11:55.436477787Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 4 17:11:55.436552 containerd[1431]: time="2024-09-04T17:11:55.436538027Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:11:55.439692 containerd[1431]: time="2024-09-04T17:11:55.439659627Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:11:55.439845 containerd[1431]: time="2024-09-04T17:11:55.439708307Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:11:55.439845 containerd[1431]: time="2024-09-04T17:11:55.439725507Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:11:55.439845 containerd[1431]: time="2024-09-04T17:11:55.439745467Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:11:55.439845 containerd[1431]: time="2024-09-04T17:11:55.439766627Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:11:55.440011 containerd[1431]: time="2024-09-04T17:11:55.439921707Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:11:55.440404 containerd[1431]: time="2024-09-04T17:11:55.440372827Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:11:55.440756 containerd[1431]: time="2024-09-04T17:11:55.440737227Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:11:55.440897 containerd[1431]: time="2024-09-04T17:11:55.440833427Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:11:55.440897 containerd[1431]: time="2024-09-04T17:11:55.440853467Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Sep 4 17:11:55.440897 containerd[1431]: time="2024-09-04T17:11:55.440870387Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:11:55.441120 containerd[1431]: time="2024-09-04T17:11:55.440883507Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:11:55.441120 containerd[1431]: time="2024-09-04T17:11:55.441057347Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:11:55.441120 containerd[1431]: time="2024-09-04T17:11:55.441074627Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:11:55.441120 containerd[1431]: time="2024-09-04T17:11:55.441089747Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:11:55.441364 containerd[1431]: time="2024-09-04T17:11:55.441103587Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:11:55.441364 containerd[1431]: time="2024-09-04T17:11:55.441273427Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:11:55.441364 containerd[1431]: time="2024-09-04T17:11:55.441287627Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:11:55.441364 containerd[1431]: time="2024-09-04T17:11:55.441308387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:11:55.441756 containerd[1431]: time="2024-09-04T17:11:55.441524427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Sep 4 17:11:55.441756 containerd[1431]: time="2024-09-04T17:11:55.441555307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:11:55.441756 containerd[1431]: time="2024-09-04T17:11:55.441568707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:11:55.442018 containerd[1431]: time="2024-09-04T17:11:55.441872827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:11:55.442018 containerd[1431]: time="2024-09-04T17:11:55.441911027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:11:55.442018 containerd[1431]: time="2024-09-04T17:11:55.441925067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:11:55.442018 containerd[1431]: time="2024-09-04T17:11:55.441950467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:11:55.442018 containerd[1431]: time="2024-09-04T17:11:55.441965147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:11:55.442018 containerd[1431]: time="2024-09-04T17:11:55.441986067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:11:55.442018 containerd[1431]: time="2024-09-04T17:11:55.441998187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 17:11:55.442259 containerd[1431]: time="2024-09-04T17:11:55.442099107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:11:55.442259 containerd[1431]: time="2024-09-04T17:11:55.442119827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Sep 4 17:11:55.442259 containerd[1431]: time="2024-09-04T17:11:55.442137587Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:11:55.442564 containerd[1431]: time="2024-09-04T17:11:55.442162067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:11:55.442564 containerd[1431]: time="2024-09-04T17:11:55.442350107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:11:55.442564 containerd[1431]: time="2024-09-04T17:11:55.442364547Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:11:55.444240 containerd[1431]: time="2024-09-04T17:11:55.443334707Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:11:55.444240 containerd[1431]: time="2024-09-04T17:11:55.443371067Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:11:55.444240 containerd[1431]: time="2024-09-04T17:11:55.443383347Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:11:55.444240 containerd[1431]: time="2024-09-04T17:11:55.443395147Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:11:55.444240 containerd[1431]: time="2024-09-04T17:11:55.443404427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:11:55.444240 containerd[1431]: time="2024-09-04T17:11:55.443416947Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Sep 4 17:11:55.444240 containerd[1431]: time="2024-09-04T17:11:55.443426907Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:11:55.444240 containerd[1431]: time="2024-09-04T17:11:55.443437067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 17:11:55.444429 containerd[1431]: time="2024-09-04T17:11:55.443729027Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:11:55.444429 containerd[1431]: time="2024-09-04T17:11:55.443787387Z" level=info msg="Connect containerd service" Sep 4 17:11:55.444429 containerd[1431]: time="2024-09-04T17:11:55.443888867Z" level=info msg="using legacy CRI server" Sep 4 17:11:55.444429 containerd[1431]: time="2024-09-04T17:11:55.443895787Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:11:55.444429 containerd[1431]: time="2024-09-04T17:11:55.443977147Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:11:55.445682 containerd[1431]: time="2024-09-04T17:11:55.445600747Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:11:55.446063 containerd[1431]: time="2024-09-04T17:11:55.446028867Z" level=info msg="Start subscribing containerd event" Sep 4 
17:11:55.446229 containerd[1431]: time="2024-09-04T17:11:55.446212667Z" level=info msg="Start recovering state" Sep 4 17:11:55.446404 containerd[1431]: time="2024-09-04T17:11:55.446388027Z" level=info msg="Start event monitor" Sep 4 17:11:55.446542 containerd[1431]: time="2024-09-04T17:11:55.446525787Z" level=info msg="Start snapshots syncer" Sep 4 17:11:55.446602 containerd[1431]: time="2024-09-04T17:11:55.446589787Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:11:55.446704 containerd[1431]: time="2024-09-04T17:11:55.446688707Z" level=info msg="Start streaming server" Sep 4 17:11:55.447148 containerd[1431]: time="2024-09-04T17:11:55.447118747Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:11:55.447197 containerd[1431]: time="2024-09-04T17:11:55.447180867Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:11:55.447690 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:11:55.450556 containerd[1431]: time="2024-09-04T17:11:55.450497747Z" level=info msg="containerd successfully booted in 0.047748s" Sep 4 17:11:55.565813 tar[1428]: linux-arm64/LICENSE Sep 4 17:11:55.566043 tar[1428]: linux-arm64/README.md Sep 4 17:11:55.577989 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:11:56.185364 systemd-networkd[1366]: eth0: Gained IPv6LL Sep 4 17:11:56.187114 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:11:56.189561 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 17:11:56.199486 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 4 17:11:56.202076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:11:56.204285 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:11:56.218816 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Sep 4 17:11:56.218988 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 4 17:11:56.221011 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:11:56.225397 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:11:56.724619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:11:56.726421 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 17:11:56.728707 (kubelet)[1516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:11:56.732287 systemd[1]: Startup finished in 570ms (kernel) + 4.890s (initrd) + 3.693s (userspace) = 9.155s. Sep 4 17:11:57.224738 kubelet[1516]: E0904 17:11:57.224624 1516 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:11:57.227066 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:11:57.227242 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:12:00.925797 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:12:00.926944 systemd[1]: Started sshd@0-10.0.0.7:22-10.0.0.1:56902.service - OpenSSH per-connection server daemon (10.0.0.1:56902). Sep 4 17:12:01.021699 sshd[1530]: Accepted publickey for core from 10.0.0.1 port 56902 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:12:01.023414 sshd[1530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:12:01.030897 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Sep 4 17:12:01.045441 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:12:01.046926 systemd-logind[1417]: New session 1 of user core. Sep 4 17:12:01.054356 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:12:01.056468 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 17:12:01.062924 (systemd)[1534]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:01.134914 systemd[1534]: Queued start job for default target default.target. Sep 4 17:12:01.145120 systemd[1534]: Created slice app.slice - User Application Slice. Sep 4 17:12:01.145316 systemd[1534]: Reached target paths.target - Paths. Sep 4 17:12:01.145432 systemd[1534]: Reached target timers.target - Timers. Sep 4 17:12:01.146728 systemd[1534]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:12:01.156556 systemd[1534]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:12:01.156616 systemd[1534]: Reached target sockets.target - Sockets. Sep 4 17:12:01.156628 systemd[1534]: Reached target basic.target - Basic System. Sep 4 17:12:01.156663 systemd[1534]: Reached target default.target - Main User Target. Sep 4 17:12:01.156689 systemd[1534]: Startup finished in 88ms. Sep 4 17:12:01.156958 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:12:01.158226 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 17:12:01.218883 systemd[1]: Started sshd@1-10.0.0.7:22-10.0.0.1:56914.service - OpenSSH per-connection server daemon (10.0.0.1:56914). Sep 4 17:12:01.254157 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 56914 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:12:01.255433 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:12:01.259267 systemd-logind[1417]: New session 2 of user core. 
Sep 4 17:12:01.269405 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:12:01.320127 sshd[1545]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:01.334431 systemd[1]: sshd@1-10.0.0.7:22-10.0.0.1:56914.service: Deactivated successfully. Sep 4 17:12:01.336127 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 17:12:01.338353 systemd-logind[1417]: Session 2 logged out. Waiting for processes to exit. Sep 4 17:12:01.348968 systemd[1]: Started sshd@2-10.0.0.7:22-10.0.0.1:56922.service - OpenSSH per-connection server daemon (10.0.0.1:56922). Sep 4 17:12:01.350315 systemd-logind[1417]: Removed session 2. Sep 4 17:12:01.382437 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 56922 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:12:01.383752 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:12:01.388819 systemd-logind[1417]: New session 3 of user core. Sep 4 17:12:01.399404 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:12:01.450046 sshd[1552]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:01.469714 systemd[1]: sshd@2-10.0.0.7:22-10.0.0.1:56922.service: Deactivated successfully. Sep 4 17:12:01.473300 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 17:12:01.475044 systemd-logind[1417]: Session 3 logged out. Waiting for processes to exit. Sep 4 17:12:01.476284 systemd[1]: Started sshd@3-10.0.0.7:22-10.0.0.1:56938.service - OpenSSH per-connection server daemon (10.0.0.1:56938). Sep 4 17:12:01.477083 systemd-logind[1417]: Removed session 3. Sep 4 17:12:01.511813 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 56938 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:12:01.513022 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:12:01.516937 systemd-logind[1417]: New session 4 of user core. 
Sep 4 17:12:01.527371 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:12:01.580707 sshd[1560]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:01.593632 systemd[1]: sshd@3-10.0.0.7:22-10.0.0.1:56938.service: Deactivated successfully. Sep 4 17:12:01.595027 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:12:01.596277 systemd-logind[1417]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:12:01.597360 systemd[1]: Started sshd@4-10.0.0.7:22-10.0.0.1:56946.service - OpenSSH per-connection server daemon (10.0.0.1:56946). Sep 4 17:12:01.598429 systemd-logind[1417]: Removed session 4. Sep 4 17:12:01.632856 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 56946 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:12:01.634164 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:12:01.637710 systemd-logind[1417]: New session 5 of user core. Sep 4 17:12:01.647426 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:12:01.715994 sudo[1570]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:12:01.716309 sudo[1570]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:12:01.733953 sudo[1570]: pam_unix(sudo:session): session closed for user root Sep 4 17:12:01.736052 sshd[1567]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:01.745645 systemd[1]: sshd@4-10.0.0.7:22-10.0.0.1:56946.service: Deactivated successfully. Sep 4 17:12:01.747281 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:12:01.751287 systemd-logind[1417]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:12:01.756445 systemd[1]: Started sshd@5-10.0.0.7:22-10.0.0.1:56950.service - OpenSSH per-connection server daemon (10.0.0.1:56950). Sep 4 17:12:01.757306 systemd-logind[1417]: Removed session 5. 
Sep 4 17:12:01.790325 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 56950 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk
Sep 4 17:12:01.790904 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:12:01.795915 systemd-logind[1417]: New session 6 of user core.
Sep 4 17:12:01.805365 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 17:12:01.858831 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 17:12:01.859142 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 17:12:01.862417 sudo[1579]: pam_unix(sudo:session): session closed for user root
Sep 4 17:12:01.867562 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 4 17:12:01.867878 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 17:12:01.890622 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 4 17:12:01.892032 auditctl[1582]: No rules
Sep 4 17:12:01.892894 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 17:12:01.893099 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 4 17:12:01.894999 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:12:01.919833 augenrules[1600]: No rules
Sep 4 17:12:01.920608 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:12:01.921930 sudo[1578]: pam_unix(sudo:session): session closed for user root
Sep 4 17:12:01.923603 sshd[1575]: pam_unix(sshd:session): session closed for user core
Sep 4 17:12:01.938680 systemd[1]: sshd@5-10.0.0.7:22-10.0.0.1:56950.service: Deactivated successfully.
Sep 4 17:12:01.940417 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 17:12:01.942037 systemd-logind[1417]: Session 6 logged out. Waiting for processes to exit.
Sep 4 17:12:01.955574 systemd[1]: Started sshd@6-10.0.0.7:22-10.0.0.1:56964.service - OpenSSH per-connection server daemon (10.0.0.1:56964).
Sep 4 17:12:01.956409 systemd-logind[1417]: Removed session 6.
Sep 4 17:12:01.987042 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 56964 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk
Sep 4 17:12:01.988181 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:12:01.991846 systemd-logind[1417]: New session 7 of user core.
Sep 4 17:12:02.002347 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 17:12:02.053534 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 17:12:02.053821 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 4 17:12:02.224499 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 17:12:02.224591 (dockerd)[1622]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 17:12:02.524535 dockerd[1622]: time="2024-09-04T17:12:02.524447867Z" level=info msg="Starting up"
Sep 4 17:12:02.712410 dockerd[1622]: time="2024-09-04T17:12:02.712367427Z" level=info msg="Loading containers: start."
Sep 4 17:12:02.825256 kernel: Initializing XFRM netlink socket
Sep 4 17:12:02.888180 systemd-networkd[1366]: docker0: Link UP
Sep 4 17:12:02.909756 dockerd[1622]: time="2024-09-04T17:12:02.909715747Z" level=info msg="Loading containers: done."
Sep 4 17:12:02.923289 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2519239239-merged.mount: Deactivated successfully.
Sep 4 17:12:02.923682 dockerd[1622]: time="2024-09-04T17:12:02.923625947Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 17:12:02.923753 dockerd[1622]: time="2024-09-04T17:12:02.923732947Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 4 17:12:02.924195 dockerd[1622]: time="2024-09-04T17:12:02.923835027Z" level=info msg="Daemon has completed initialization"
Sep 4 17:12:02.958758 dockerd[1622]: time="2024-09-04T17:12:02.958621027Z" level=info msg="API listen on /run/docker.sock"
Sep 4 17:12:02.958863 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 17:12:03.580336 containerd[1431]: time="2024-09-04T17:12:03.580221987Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\""
Sep 4 17:12:04.434269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3461038918.mount: Deactivated successfully.
Sep 4 17:12:06.892249 containerd[1431]: time="2024-09-04T17:12:06.892149987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:06.894970 containerd[1431]: time="2024-09-04T17:12:06.894895267Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.4: active requests=0, bytes read=29943742"
Sep 4 17:12:06.896019 containerd[1431]: time="2024-09-04T17:12:06.895684867Z" level=info msg="ImageCreate event name:\"sha256:4fb024d2ca524db9b4b792ebc761ca44654c17ab90984a968b5276a64dbcc1ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:06.899122 containerd[1431]: time="2024-09-04T17:12:06.899081347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:06.900379 containerd[1431]: time="2024-09-04T17:12:06.900334747Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.4\" with image id \"sha256:4fb024d2ca524db9b4b792ebc761ca44654c17ab90984a968b5276a64dbcc1ff\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\", size \"29940540\" in 3.3200606s"
Sep 4 17:12:06.900638 containerd[1431]: time="2024-09-04T17:12:06.900483427Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\" returns image reference \"sha256:4fb024d2ca524db9b4b792ebc761ca44654c17ab90984a968b5276a64dbcc1ff\""
Sep 4 17:12:06.923332 containerd[1431]: time="2024-09-04T17:12:06.923293747Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\""
Sep 4 17:12:07.477574 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:12:07.491442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:12:07.588164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:12:07.592752 (kubelet)[1844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:12:07.636810 kubelet[1844]: E0904 17:12:07.636753 1844 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:12:07.639930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:12:07.640082 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:12:09.144543 containerd[1431]: time="2024-09-04T17:12:09.144481667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:09.145388 containerd[1431]: time="2024-09-04T17:12:09.145315387Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.4: active requests=0, bytes read=26881134"
Sep 4 17:12:09.146387 containerd[1431]: time="2024-09-04T17:12:09.146356187Z" level=info msg="ImageCreate event name:\"sha256:4316ad972d94918481885d608f381e51d1e8d84458354f6240668016b5e9d6f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:09.157730 containerd[1431]: time="2024-09-04T17:12:09.157663587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:09.161227 containerd[1431]: time="2024-09-04T17:12:09.159759227Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.4\" with image id \"sha256:4316ad972d94918481885d608f381e51d1e8d84458354f6240668016b5e9d6f5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\", size \"28368399\" in 2.23641516s"
Sep 4 17:12:09.161227 containerd[1431]: time="2024-09-04T17:12:09.159807267Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\" returns image reference \"sha256:4316ad972d94918481885d608f381e51d1e8d84458354f6240668016b5e9d6f5\""
Sep 4 17:12:09.182686 containerd[1431]: time="2024-09-04T17:12:09.182644427Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\""
Sep 4 17:12:10.570515 containerd[1431]: time="2024-09-04T17:12:10.570461707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:10.571627 containerd[1431]: time="2024-09-04T17:12:10.571235707Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.4: active requests=0, bytes read=16154065"
Sep 4 17:12:10.572315 containerd[1431]: time="2024-09-04T17:12:10.572283867Z" level=info msg="ImageCreate event name:\"sha256:b0931aa794b8d14cc252b442a71c1d3e87f4781c2bbae23ebb37d18c9ee9acfe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:10.575743 containerd[1431]: time="2024-09-04T17:12:10.575697707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:10.577112 containerd[1431]: time="2024-09-04T17:12:10.576901427Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.4\" with image id \"sha256:b0931aa794b8d14cc252b442a71c1d3e87f4781c2bbae23ebb37d18c9ee9acfe\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\", size \"17641348\" in 1.3942122s"
Sep 4 17:12:10.577112 containerd[1431]: time="2024-09-04T17:12:10.576935747Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\" returns image reference \"sha256:b0931aa794b8d14cc252b442a71c1d3e87f4781c2bbae23ebb37d18c9ee9acfe\""
Sep 4 17:12:10.597008 containerd[1431]: time="2024-09-04T17:12:10.596971107Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\""
Sep 4 17:12:11.621719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3074572959.mount: Deactivated successfully.
Sep 4 17:12:11.982406 containerd[1431]: time="2024-09-04T17:12:11.982174147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:11.983946 containerd[1431]: time="2024-09-04T17:12:11.983899827Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.4: active requests=0, bytes read=25646049"
Sep 4 17:12:11.985168 containerd[1431]: time="2024-09-04T17:12:11.985132147Z" level=info msg="ImageCreate event name:\"sha256:7fdda55d346bc23daec633f684e5ec2c91bd1469a5e006bdf45d15fbeb8dacdc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:11.987584 containerd[1431]: time="2024-09-04T17:12:11.987401707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:11.988254 containerd[1431]: time="2024-09-04T17:12:11.988023227Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.4\" with image id \"sha256:7fdda55d346bc23daec633f684e5ec2c91bd1469a5e006bdf45d15fbeb8dacdc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\", size \"25645066\" in 1.39100856s"
Sep 4 17:12:11.988254 containerd[1431]: time="2024-09-04T17:12:11.988060267Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\" returns image reference \"sha256:7fdda55d346bc23daec633f684e5ec2c91bd1469a5e006bdf45d15fbeb8dacdc\""
Sep 4 17:12:12.007396 containerd[1431]: time="2024-09-04T17:12:12.007319867Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Sep 4 17:12:12.596670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1218817734.mount: Deactivated successfully.
Sep 4 17:12:13.582251 containerd[1431]: time="2024-09-04T17:12:13.582191787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:13.583097 containerd[1431]: time="2024-09-04T17:12:13.583048987Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Sep 4 17:12:13.585246 containerd[1431]: time="2024-09-04T17:12:13.584793707Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:13.589316 containerd[1431]: time="2024-09-04T17:12:13.589271667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:13.591131 containerd[1431]: time="2024-09-04T17:12:13.591087627Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.58369596s"
Sep 4 17:12:13.591171 containerd[1431]: time="2024-09-04T17:12:13.591129907Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Sep 4 17:12:13.613687 containerd[1431]: time="2024-09-04T17:12:13.613647027Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Sep 4 17:12:14.251553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3322419036.mount: Deactivated successfully.
Sep 4 17:12:14.259884 containerd[1431]: time="2024-09-04T17:12:14.259831267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:14.260426 containerd[1431]: time="2024-09-04T17:12:14.260380987Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Sep 4 17:12:14.261324 containerd[1431]: time="2024-09-04T17:12:14.261286947Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:14.263554 containerd[1431]: time="2024-09-04T17:12:14.263517267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:14.264583 containerd[1431]: time="2024-09-04T17:12:14.264545347Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 650.86328ms"
Sep 4 17:12:14.264626 containerd[1431]: time="2024-09-04T17:12:14.264579987Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Sep 4 17:12:14.283673 containerd[1431]: time="2024-09-04T17:12:14.283632467Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Sep 4 17:12:14.964080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2440088408.mount: Deactivated successfully.
Sep 4 17:12:17.694245 containerd[1431]: time="2024-09-04T17:12:17.694153587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:17.695379 containerd[1431]: time="2024-09-04T17:12:17.695341227Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
Sep 4 17:12:17.696103 containerd[1431]: time="2024-09-04T17:12:17.695906107Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:17.699561 containerd[1431]: time="2024-09-04T17:12:17.699508667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:17.700884 containerd[1431]: time="2024-09-04T17:12:17.700755347Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.41707912s"
Sep 4 17:12:17.700884 containerd[1431]: time="2024-09-04T17:12:17.700793147Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Sep 4 17:12:17.732532 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 17:12:17.743488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:12:17.844119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:12:17.848548 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:12:17.885988 kubelet[2011]: E0904 17:12:17.885933 2011 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:12:17.888397 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:12:17.888558 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:12:22.691461 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:12:22.700564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:12:22.728877 systemd[1]: Reloading requested from client PID 2090 ('systemctl') (unit session-7.scope)...
Sep 4 17:12:22.729655 systemd[1]: Reloading...
Sep 4 17:12:22.816236 zram_generator::config[2127]: No configuration found.
Sep 4 17:12:22.933057 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:12:22.988281 systemd[1]: Reloading finished in 257 ms.
Sep 4 17:12:23.037479 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 4 17:12:23.037546 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 4 17:12:23.037773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:12:23.042931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:12:23.144672 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:12:23.151136 (kubelet)[2173]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 17:12:23.193936 kubelet[2173]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:12:23.193936 kubelet[2173]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 4 17:12:23.193936 kubelet[2173]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:12:23.196228 kubelet[2173]: I0904 17:12:23.194744 2173 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 17:12:23.986802 kubelet[2173]: I0904 17:12:23.986744 2173 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Sep 4 17:12:23.986802 kubelet[2173]: I0904 17:12:23.986781 2173 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 17:12:23.987016 kubelet[2173]: I0904 17:12:23.986988 2173 server.go:927] "Client rotation is on, will bootstrap in background"
Sep 4 17:12:24.045381 kubelet[2173]: I0904 17:12:24.045304 2173 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 17:12:24.045511 kubelet[2173]: E0904 17:12:24.045478 2173 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.7:6443: connect: connection refused
Sep 4 17:12:24.056174 kubelet[2173]: I0904 17:12:24.054521 2173 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 17:12:24.056174 kubelet[2173]: I0904 17:12:24.055188 2173 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 17:12:24.056174 kubelet[2173]: I0904 17:12:24.055238 2173 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep 4 17:12:24.056174 kubelet[2173]: I0904 17:12:24.055537 2173 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 17:12:24.056448 kubelet[2173]: I0904 17:12:24.055547 2173 container_manager_linux.go:301] "Creating device plugin manager"
Sep 4 17:12:24.056448 kubelet[2173]: I0904 17:12:24.055804 2173 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:12:24.057267 kubelet[2173]: I0904 17:12:24.057247 2173 kubelet.go:400] "Attempting to sync node with API server"
Sep 4 17:12:24.057343 kubelet[2173]: I0904 17:12:24.057334 2173 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 17:12:24.057803 kubelet[2173]: W0904 17:12:24.057752 2173 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused
Sep 4 17:12:24.057844 kubelet[2173]: E0904 17:12:24.057826 2173 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused
Sep 4 17:12:24.057896 kubelet[2173]: I0904 17:12:24.057793 2173 kubelet.go:312] "Adding apiserver pod source"
Sep 4 17:12:24.058059 kubelet[2173]: I0904 17:12:24.058048 2173 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 17:12:24.065704 kubelet[2173]: W0904 17:12:24.063345 2173 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused
Sep 4 17:12:24.065704 kubelet[2173]: E0904 17:12:24.063409 2173 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused
Sep 4 17:12:24.067761 kubelet[2173]: I0904 17:12:24.067721 2173 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1"
Sep 4 17:12:24.068761 kubelet[2173]: I0904 17:12:24.068734 2173 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 17:12:24.069070 kubelet[2173]: W0904 17:12:24.069047 2173 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 17:12:24.070723 kubelet[2173]: I0904 17:12:24.070669 2173 server.go:1264] "Started kubelet"
Sep 4 17:12:24.074463 kubelet[2173]: I0904 17:12:24.073350 2173 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 17:12:24.074463 kubelet[2173]: I0904 17:12:24.073685 2173 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 17:12:24.074463 kubelet[2173]: I0904 17:12:24.073722 2173 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 17:12:24.074463 kubelet[2173]: I0904 17:12:24.073969 2173 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 17:12:24.075036 kubelet[2173]: I0904 17:12:24.075011 2173 server.go:455] "Adding debug handlers to kubelet server"
Sep 4 17:12:24.080502 kubelet[2173]: E0904 17:12:24.077947 2173 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 4 17:12:24.080502 kubelet[2173]: W0904 17:12:24.078357 2173 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused
Sep 4 17:12:24.080502 kubelet[2173]: E0904 17:12:24.078413 2173 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused
Sep 4 17:12:24.080502 kubelet[2173]: I0904 17:12:24.078413 2173 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep 4 17:12:24.080502 kubelet[2173]: I0904 17:12:24.078540 2173 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Sep 4 17:12:24.080502 kubelet[2173]: E0904 17:12:24.078873 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="200ms"
Sep 4 17:12:24.080502 kubelet[2173]: I0904 17:12:24.080002 2173 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 17:12:24.081067 kubelet[2173]: I0904 17:12:24.080937 2173 factory.go:221] Registration of the systemd container factory successfully
Sep 4 17:12:24.081122 kubelet[2173]: I0904 17:12:24.081079 2173 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 17:12:24.081328 kubelet[2173]: E0904 17:12:24.081304 2173 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 17:12:24.082424 kubelet[2173]: E0904 17:12:24.081995 2173 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17f219c5aef17c5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-09-04 17:12:24.070634587 +0000 UTC m=+0.916067241,LastTimestamp:2024-09-04 17:12:24.070634587 +0000 UTC m=+0.916067241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 4 17:12:24.082696 kubelet[2173]: I0904 17:12:24.082603 2173 factory.go:221] Registration of the containerd container factory successfully
Sep 4 17:12:24.094334 kubelet[2173]: I0904 17:12:24.094277 2173 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 17:12:24.095766 kubelet[2173]: I0904 17:12:24.095707 2173 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 17:12:24.096042 kubelet[2173]: I0904 17:12:24.096018 2173 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 4 17:12:24.096042 kubelet[2173]: I0904 17:12:24.096041 2173 kubelet.go:2337] "Starting kubelet main sync loop"
Sep 4 17:12:24.096127 kubelet[2173]: E0904 17:12:24.096105 2173 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 17:12:24.097325 kubelet[2173]: W0904 17:12:24.096938 2173 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused
Sep 4 17:12:24.097325 kubelet[2173]: E0904 17:12:24.096999 2173 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused
Sep 4 17:12:24.105267 kubelet[2173]: I0904 17:12:24.105234 2173 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 4 17:12:24.105267 kubelet[2173]: I0904 17:12:24.105259 2173 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 4 17:12:24.105430 kubelet[2173]: I0904 17:12:24.105281 2173 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:12:24.179684 kubelet[2173]: I0904 17:12:24.179657 2173 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Sep 4 17:12:24.180102 kubelet[2173]: E0904 17:12:24.180072 2173 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost"
Sep 4 17:12:24.181477 kubelet[2173]: I0904 17:12:24.181447 2173 policy_none.go:49] "None policy: Start"
Sep 4 17:12:24.182139 kubelet[2173]: I0904 17:12:24.182113 2173 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 4 17:12:24.182237 kubelet[2173]: I0904 17:12:24.182147 2173 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 17:12:24.189074 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 4 17:12:24.197051 kubelet[2173]: E0904 17:12:24.197002 2173 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 4 17:12:24.206877 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 4 17:12:24.214325 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 4 17:12:24.226526 kubelet[2173]: I0904 17:12:24.226333 2173 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 4 17:12:24.226651 kubelet[2173]: I0904 17:12:24.226543 2173 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 4 17:12:24.226677 kubelet[2173]: I0904 17:12:24.226670 2173 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 4 17:12:24.229281 kubelet[2173]: E0904 17:12:24.229251 2173 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 4 17:12:24.279492 kubelet[2173]: E0904 17:12:24.279324 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="400ms"
Sep 4 17:12:24.382138 kubelet[2173]: I0904 17:12:24.382091 2173 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Sep 4 17:12:24.382508 kubelet[2173]: E0904 17:12:24.382483 2173 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost"
Sep 4 17:12:24.397612 kubelet[2173]: I0904 17:12:24.397534 2173 topology_manager.go:215] "Topology Admit Handler" podUID="c8d6abea3628ea6ab6a98ab1bf766934" podNamespace="kube-system" podName="kube-apiserver-localhost"
Sep 4 17:12:24.399794 kubelet[2173]: I0904 17:12:24.398808 2173 topology_manager.go:215] "Topology Admit Handler" podUID="a75cc901e91bc66fd9615154dc537be7" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Sep 4 17:12:24.400407 kubelet[2173]: I0904 17:12:24.400359 2173 topology_manager.go:215] "Topology Admit Handler" podUID="ab09c4a38f15561465451a45cd787c5b" podNamespace="kube-system" podName="kube-scheduler-localhost"
Sep 4 17:12:24.406952 systemd[1]: Created slice kubepods-burstable-podc8d6abea3628ea6ab6a98ab1bf766934.slice - libcontainer container kubepods-burstable-podc8d6abea3628ea6ab6a98ab1bf766934.slice.
Sep 4 17:12:24.423034 systemd[1]: Created slice kubepods-burstable-poda75cc901e91bc66fd9615154dc537be7.slice - libcontainer container kubepods-burstable-poda75cc901e91bc66fd9615154dc537be7.slice.
Sep 4 17:12:24.430773 systemd[1]: Created slice kubepods-burstable-podab09c4a38f15561465451a45cd787c5b.slice - libcontainer container kubepods-burstable-podab09c4a38f15561465451a45cd787c5b.slice.
Sep 4 17:12:24.481642 kubelet[2173]: I0904 17:12:24.481587 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:12:24.481642 kubelet[2173]: I0904 17:12:24.481634 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab09c4a38f15561465451a45cd787c5b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ab09c4a38f15561465451a45cd787c5b\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:12:24.481809 kubelet[2173]: I0904 17:12:24.481657 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c8d6abea3628ea6ab6a98ab1bf766934-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c8d6abea3628ea6ab6a98ab1bf766934\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:12:24.481809 kubelet[2173]: I0904 17:12:24.481673 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8d6abea3628ea6ab6a98ab1bf766934-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c8d6abea3628ea6ab6a98ab1bf766934\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:12:24.481809 kubelet[2173]: I0904 17:12:24.481719 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c8d6abea3628ea6ab6a98ab1bf766934-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c8d6abea3628ea6ab6a98ab1bf766934\") " pod="kube-system/kube-apiserver-localhost" Sep 4 
17:12:24.481809 kubelet[2173]: I0904 17:12:24.481747 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:12:24.481809 kubelet[2173]: I0904 17:12:24.481762 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:12:24.481911 kubelet[2173]: I0904 17:12:24.481777 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:12:24.481911 kubelet[2173]: I0904 17:12:24.481793 2173 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:12:24.680011 kubelet[2173]: E0904 17:12:24.679889 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="800ms" Sep 4 17:12:24.721388 kubelet[2173]: E0904 17:12:24.721342 2173 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:24.722024 containerd[1431]: time="2024-09-04T17:12:24.721983307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c8d6abea3628ea6ab6a98ab1bf766934,Namespace:kube-system,Attempt:0,}" Sep 4 17:12:24.728654 kubelet[2173]: E0904 17:12:24.728384 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:24.728858 containerd[1431]: time="2024-09-04T17:12:24.728808507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a75cc901e91bc66fd9615154dc537be7,Namespace:kube-system,Attempt:0,}" Sep 4 17:12:24.734194 kubelet[2173]: E0904 17:12:24.734171 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:24.734767 containerd[1431]: time="2024-09-04T17:12:24.734731827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ab09c4a38f15561465451a45cd787c5b,Namespace:kube-system,Attempt:0,}" Sep 4 17:12:24.785298 kubelet[2173]: I0904 17:12:24.785259 2173 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:12:24.785613 kubelet[2173]: E0904 17:12:24.785572 2173 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Sep 4 17:12:25.184325 kubelet[2173]: W0904 17:12:25.184241 2173 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.0.0.7:6443: connect: connection refused Sep 4 17:12:25.184325 kubelet[2173]: E0904 17:12:25.184322 2173 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Sep 4 17:12:25.345716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1320368074.mount: Deactivated successfully. Sep 4 17:12:25.356354 containerd[1431]: time="2024-09-04T17:12:25.356292987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:12:25.357516 containerd[1431]: time="2024-09-04T17:12:25.357477187Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:12:25.359130 containerd[1431]: time="2024-09-04T17:12:25.359089267Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:12:25.360175 containerd[1431]: time="2024-09-04T17:12:25.360115507Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:12:25.361136 containerd[1431]: time="2024-09-04T17:12:25.361077427Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:12:25.361363 containerd[1431]: time="2024-09-04T17:12:25.361341147Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 4 17:12:25.362580 containerd[1431]: 
time="2024-09-04T17:12:25.361990587Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:12:25.363856 containerd[1431]: time="2024-09-04T17:12:25.363814147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:12:25.366254 containerd[1431]: time="2024-09-04T17:12:25.366219027Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 637.33688ms" Sep 4 17:12:25.370441 containerd[1431]: time="2024-09-04T17:12:25.370384027Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 635.46988ms" Sep 4 17:12:25.371232 containerd[1431]: time="2024-09-04T17:12:25.371168787Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 649.1038ms" Sep 4 17:12:25.423351 kubelet[2173]: W0904 17:12:25.423285 2173 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Sep 4 17:12:25.423788 
kubelet[2173]: E0904 17:12:25.423759 2173 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Sep 4 17:12:25.481282 kubelet[2173]: E0904 17:12:25.480691 2173 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="1.6s" Sep 4 17:12:25.515435 kubelet[2173]: W0904 17:12:25.511754 2173 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Sep 4 17:12:25.515435 kubelet[2173]: E0904 17:12:25.511842 2173 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Sep 4 17:12:25.527993 containerd[1431]: time="2024-09-04T17:12:25.527547307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:25.527993 containerd[1431]: time="2024-09-04T17:12:25.527621907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:25.527993 containerd[1431]: time="2024-09-04T17:12:25.527633627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:25.528244 containerd[1431]: time="2024-09-04T17:12:25.527875867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:25.528244 containerd[1431]: time="2024-09-04T17:12:25.527931867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:25.528344 containerd[1431]: time="2024-09-04T17:12:25.527801787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:25.529480 containerd[1431]: time="2024-09-04T17:12:25.527976547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:25.529746 containerd[1431]: time="2024-09-04T17:12:25.529705667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:25.531998 containerd[1431]: time="2024-09-04T17:12:25.531909587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:25.532794 containerd[1431]: time="2024-09-04T17:12:25.532574307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:25.532794 containerd[1431]: time="2024-09-04T17:12:25.532599427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:25.532794 containerd[1431]: time="2024-09-04T17:12:25.532705267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:25.552484 systemd[1]: Started cri-containerd-055f7731751885796bd6a0d6e81c1934ea921cbb34ef2b420b17541d3623caf2.scope - libcontainer container 055f7731751885796bd6a0d6e81c1934ea921cbb34ef2b420b17541d3623caf2. Sep 4 17:12:25.554400 systemd[1]: Started cri-containerd-9b68642d6034f0cdac6a38cbfcacf4b8dc1607cccf31162339b7b292fbd10c2e.scope - libcontainer container 9b68642d6034f0cdac6a38cbfcacf4b8dc1607cccf31162339b7b292fbd10c2e. Sep 4 17:12:25.558946 systemd[1]: Started cri-containerd-78e493b14c582ba2138bffbd351f7edfc81372a1e08bd9816065267592ae9a4e.scope - libcontainer container 78e493b14c582ba2138bffbd351f7edfc81372a1e08bd9816065267592ae9a4e. Sep 4 17:12:25.588575 kubelet[2173]: I0904 17:12:25.587840 2173 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:12:25.588900 kubelet[2173]: E0904 17:12:25.588847 2173 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Sep 4 17:12:25.619389 containerd[1431]: time="2024-09-04T17:12:25.619326787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a75cc901e91bc66fd9615154dc537be7,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b68642d6034f0cdac6a38cbfcacf4b8dc1607cccf31162339b7b292fbd10c2e\"" Sep 4 17:12:25.620440 containerd[1431]: time="2024-09-04T17:12:25.620168107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c8d6abea3628ea6ab6a98ab1bf766934,Namespace:kube-system,Attempt:0,} returns sandbox id \"78e493b14c582ba2138bffbd351f7edfc81372a1e08bd9816065267592ae9a4e\"" Sep 4 17:12:25.621391 kubelet[2173]: E0904 17:12:25.620952 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 
4 17:12:25.621391 kubelet[2173]: E0904 17:12:25.620952 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:25.624476 containerd[1431]: time="2024-09-04T17:12:25.624431547Z" level=info msg="CreateContainer within sandbox \"78e493b14c582ba2138bffbd351f7edfc81372a1e08bd9816065267592ae9a4e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:12:25.624914 containerd[1431]: time="2024-09-04T17:12:25.624882427Z" level=info msg="CreateContainer within sandbox \"9b68642d6034f0cdac6a38cbfcacf4b8dc1607cccf31162339b7b292fbd10c2e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:12:25.629768 containerd[1431]: time="2024-09-04T17:12:25.629726027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ab09c4a38f15561465451a45cd787c5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"055f7731751885796bd6a0d6e81c1934ea921cbb34ef2b420b17541d3623caf2\"" Sep 4 17:12:25.630900 kubelet[2173]: E0904 17:12:25.630643 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:25.632507 kubelet[2173]: W0904 17:12:25.632467 2173 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Sep 4 17:12:25.632599 kubelet[2173]: E0904 17:12:25.632525 2173 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Sep 4 17:12:25.635243 containerd[1431]: 
time="2024-09-04T17:12:25.635154587Z" level=info msg="CreateContainer within sandbox \"055f7731751885796bd6a0d6e81c1934ea921cbb34ef2b420b17541d3623caf2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:12:25.650084 kubelet[2173]: E0904 17:12:25.649928 2173 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17f219c5aef17c5b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-09-04 17:12:24.070634587 +0000 UTC m=+0.916067241,LastTimestamp:2024-09-04 17:12:24.070634587 +0000 UTC m=+0.916067241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 17:12:25.654014 containerd[1431]: time="2024-09-04T17:12:25.653777987Z" level=info msg="CreateContainer within sandbox \"9b68642d6034f0cdac6a38cbfcacf4b8dc1607cccf31162339b7b292fbd10c2e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5089414f4a4c365ceeb34b44190de65fa6ca70d8acd0b16771cf729bd7b78d8c\"" Sep 4 17:12:25.654597 containerd[1431]: time="2024-09-04T17:12:25.654566067Z" level=info msg="StartContainer for \"5089414f4a4c365ceeb34b44190de65fa6ca70d8acd0b16771cf729bd7b78d8c\"" Sep 4 17:12:25.661271 containerd[1431]: time="2024-09-04T17:12:25.661194827Z" level=info msg="CreateContainer within sandbox \"78e493b14c582ba2138bffbd351f7edfc81372a1e08bd9816065267592ae9a4e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1e2590848ed02ff433d91f3fd853a468e67f1a4c6c0c93c4ba76c356fb0d2946\"" Sep 4 17:12:25.662675 
containerd[1431]: time="2024-09-04T17:12:25.662612427Z" level=info msg="StartContainer for \"1e2590848ed02ff433d91f3fd853a468e67f1a4c6c0c93c4ba76c356fb0d2946\"" Sep 4 17:12:25.684435 systemd[1]: Started cri-containerd-5089414f4a4c365ceeb34b44190de65fa6ca70d8acd0b16771cf729bd7b78d8c.scope - libcontainer container 5089414f4a4c365ceeb34b44190de65fa6ca70d8acd0b16771cf729bd7b78d8c. Sep 4 17:12:25.687352 systemd[1]: Started cri-containerd-1e2590848ed02ff433d91f3fd853a468e67f1a4c6c0c93c4ba76c356fb0d2946.scope - libcontainer container 1e2590848ed02ff433d91f3fd853a468e67f1a4c6c0c93c4ba76c356fb0d2946. Sep 4 17:12:25.714157 containerd[1431]: time="2024-09-04T17:12:25.714112507Z" level=info msg="CreateContainer within sandbox \"055f7731751885796bd6a0d6e81c1934ea921cbb34ef2b420b17541d3623caf2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6f9c11417d250da8cb6fc4a0832db02b7d2e85e8897b1d0b6a5e6eb3e8fde2f4\"" Sep 4 17:12:25.714884 containerd[1431]: time="2024-09-04T17:12:25.714835347Z" level=info msg="StartContainer for \"6f9c11417d250da8cb6fc4a0832db02b7d2e85e8897b1d0b6a5e6eb3e8fde2f4\"" Sep 4 17:12:25.750194 containerd[1431]: time="2024-09-04T17:12:25.748918067Z" level=info msg="StartContainer for \"5089414f4a4c365ceeb34b44190de65fa6ca70d8acd0b16771cf729bd7b78d8c\" returns successfully" Sep 4 17:12:25.750194 containerd[1431]: time="2024-09-04T17:12:25.749054187Z" level=info msg="StartContainer for \"1e2590848ed02ff433d91f3fd853a468e67f1a4c6c0c93c4ba76c356fb0d2946\" returns successfully" Sep 4 17:12:25.767465 systemd[1]: Started cri-containerd-6f9c11417d250da8cb6fc4a0832db02b7d2e85e8897b1d0b6a5e6eb3e8fde2f4.scope - libcontainer container 6f9c11417d250da8cb6fc4a0832db02b7d2e85e8897b1d0b6a5e6eb3e8fde2f4. 
Sep 4 17:12:25.833163 containerd[1431]: time="2024-09-04T17:12:25.832930867Z" level=info msg="StartContainer for \"6f9c11417d250da8cb6fc4a0832db02b7d2e85e8897b1d0b6a5e6eb3e8fde2f4\" returns successfully" Sep 4 17:12:26.109241 kubelet[2173]: E0904 17:12:26.108801 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:26.111361 kubelet[2173]: E0904 17:12:26.111075 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:26.113780 kubelet[2173]: E0904 17:12:26.113712 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:27.127839 kubelet[2173]: E0904 17:12:27.127785 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:27.190993 kubelet[2173]: I0904 17:12:27.190917 2173 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:12:27.303588 kubelet[2173]: E0904 17:12:27.303529 2173 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 4 17:12:27.393666 kubelet[2173]: I0904 17:12:27.392422 2173 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Sep 4 17:12:28.061143 kubelet[2173]: I0904 17:12:28.060866 2173 apiserver.go:52] "Watching apiserver" Sep 4 17:12:28.079734 kubelet[2173]: I0904 17:12:28.079695 2173 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 17:12:28.134310 kubelet[2173]: E0904 17:12:28.134270 2173 kubelet.go:1928] "Failed creating a mirror pod 
for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 4 17:12:28.134870 kubelet[2173]: E0904 17:12:28.134768 2173 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:29.625926 systemd[1]: Reloading requested from client PID 2455 ('systemctl') (unit session-7.scope)... Sep 4 17:12:29.625943 systemd[1]: Reloading... Sep 4 17:12:29.702241 zram_generator::config[2492]: No configuration found. Sep 4 17:12:29.846740 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:12:29.915312 systemd[1]: Reloading finished in 289 ms. Sep 4 17:12:29.955241 kubelet[2173]: I0904 17:12:29.955085 2173 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:12:29.955464 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:12:29.971850 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:12:29.972614 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:12:29.972681 systemd[1]: kubelet.service: Consumed 1.368s CPU time, 118.9M memory peak, 0B memory swap peak. Sep 4 17:12:29.982686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:12:30.103007 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:12:30.115654 (kubelet)[2534]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:12:30.169670 kubelet[2534]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:12:30.169670 kubelet[2534]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:12:30.169670 kubelet[2534]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:12:30.169670 kubelet[2534]: I0904 17:12:30.169614 2534 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:12:30.174654 kubelet[2534]: I0904 17:12:30.174607 2534 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Sep 4 17:12:30.174654 kubelet[2534]: I0904 17:12:30.174643 2534 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:12:30.174905 kubelet[2534]: I0904 17:12:30.174874 2534 server.go:927] "Client rotation is on, will bootstrap in background" Sep 4 17:12:30.176415 kubelet[2534]: I0904 17:12:30.176393 2534 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:12:30.177855 kubelet[2534]: I0904 17:12:30.177814 2534 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:12:30.183176 kubelet[2534]: I0904 17:12:30.183146 2534 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:12:30.183473 kubelet[2534]: I0904 17:12:30.183436 2534 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:12:30.183723 kubelet[2534]: I0904 17:12:30.183476 2534 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:12:30.183792 kubelet[2534]: I0904 17:12:30.183735 2534 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:12:30.183792 
kubelet[2534]: I0904 17:12:30.183746 2534 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:12:30.183792 kubelet[2534]: I0904 17:12:30.183782 2534 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:12:30.183931 kubelet[2534]: I0904 17:12:30.183921 2534 kubelet.go:400] "Attempting to sync node with API server" Sep 4 17:12:30.183956 kubelet[2534]: I0904 17:12:30.183934 2534 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:12:30.183978 kubelet[2534]: I0904 17:12:30.183962 2534 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:12:30.183999 kubelet[2534]: I0904 17:12:30.183979 2534 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:12:30.185958 kubelet[2534]: I0904 17:12:30.185914 2534 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:12:30.186187 kubelet[2534]: I0904 17:12:30.186155 2534 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:12:30.186890 kubelet[2534]: I0904 17:12:30.186784 2534 server.go:1264] "Started kubelet" Sep 4 17:12:30.191667 kubelet[2534]: I0904 17:12:30.189252 2534 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:12:30.191667 kubelet[2534]: I0904 17:12:30.189597 2534 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:12:30.191667 kubelet[2534]: I0904 17:12:30.189639 2534 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:12:30.191667 kubelet[2534]: I0904 17:12:30.189899 2534 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:12:30.193628 kubelet[2534]: I0904 17:12:30.193593 2534 server.go:455] "Adding debug handlers to kubelet server" Sep 4 17:12:30.196930 kubelet[2534]: I0904 17:12:30.196884 2534 volume_manager.go:291] "Starting 
Kubelet Volume Manager" Sep 4 17:12:30.197571 kubelet[2534]: I0904 17:12:30.197472 2534 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Sep 4 17:12:30.197655 kubelet[2534]: I0904 17:12:30.197637 2534 reconciler.go:26] "Reconciler: start to sync state" Sep 4 17:12:30.204533 kubelet[2534]: I0904 17:12:30.204468 2534 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:12:30.204793 kubelet[2534]: I0904 17:12:30.204769 2534 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:12:30.206765 kubelet[2534]: E0904 17:12:30.206729 2534 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:12:30.207018 kubelet[2534]: I0904 17:12:30.207002 2534 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:12:30.211767 kubelet[2534]: I0904 17:12:30.211096 2534 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:12:30.212514 kubelet[2534]: I0904 17:12:30.212476 2534 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:12:30.212568 kubelet[2534]: I0904 17:12:30.212527 2534 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:12:30.212568 kubelet[2534]: I0904 17:12:30.212548 2534 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 17:12:30.212631 kubelet[2534]: E0904 17:12:30.212594 2534 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:12:30.251878 kubelet[2534]: I0904 17:12:30.251839 2534 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:12:30.251878 kubelet[2534]: I0904 17:12:30.251864 2534 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:12:30.251878 kubelet[2534]: I0904 17:12:30.251888 2534 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:12:30.252258 kubelet[2534]: I0904 17:12:30.252052 2534 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:12:30.252258 kubelet[2534]: I0904 17:12:30.252065 2534 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:12:30.252258 kubelet[2534]: I0904 17:12:30.252083 2534 policy_none.go:49] "None policy: Start" Sep 4 17:12:30.253018 kubelet[2534]: I0904 17:12:30.252995 2534 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:12:30.253018 kubelet[2534]: I0904 17:12:30.253023 2534 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:12:30.253233 kubelet[2534]: I0904 17:12:30.253214 2534 state_mem.go:75] "Updated machine memory state" Sep 4 17:12:30.257907 kubelet[2534]: I0904 17:12:30.257869 2534 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:12:30.258114 kubelet[2534]: I0904 17:12:30.258062 2534 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 17:12:30.258249 kubelet[2534]: I0904 17:12:30.258222 2534 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:12:30.301288 kubelet[2534]: I0904 17:12:30.301235 2534 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:12:30.310402 kubelet[2534]: I0904 17:12:30.310289 2534 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Sep 4 17:12:30.310402 kubelet[2534]: I0904 17:12:30.310405 2534 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Sep 4 17:12:30.313404 kubelet[2534]: I0904 17:12:30.312763 2534 topology_manager.go:215] "Topology Admit Handler" podUID="a75cc901e91bc66fd9615154dc537be7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:12:30.313404 kubelet[2534]: I0904 17:12:30.312901 2534 topology_manager.go:215] "Topology Admit Handler" podUID="ab09c4a38f15561465451a45cd787c5b" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:12:30.313404 kubelet[2534]: I0904 17:12:30.312938 2534 topology_manager.go:215] "Topology Admit Handler" podUID="c8d6abea3628ea6ab6a98ab1bf766934" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:12:30.398683 kubelet[2534]: I0904 17:12:30.398647 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c8d6abea3628ea6ab6a98ab1bf766934-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c8d6abea3628ea6ab6a98ab1bf766934\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:12:30.398942 kubelet[2534]: I0904 17:12:30.398835 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c8d6abea3628ea6ab6a98ab1bf766934-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c8d6abea3628ea6ab6a98ab1bf766934\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:12:30.398942 kubelet[2534]: I0904 17:12:30.398864 2534 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:12:30.398942 kubelet[2534]: I0904 17:12:30.398886 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:12:30.398942 kubelet[2534]: I0904 17:12:30.398900 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:12:30.398942 kubelet[2534]: I0904 17:12:30.398917 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:12:30.399310 kubelet[2534]: I0904 17:12:30.398959 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:12:30.399310 
kubelet[2534]: I0904 17:12:30.399005 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab09c4a38f15561465451a45cd787c5b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ab09c4a38f15561465451a45cd787c5b\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:12:30.399310 kubelet[2534]: I0904 17:12:30.399036 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8d6abea3628ea6ab6a98ab1bf766934-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c8d6abea3628ea6ab6a98ab1bf766934\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:12:30.622224 kubelet[2534]: E0904 17:12:30.621766 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:30.622981 kubelet[2534]: E0904 17:12:30.622795 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:30.622981 kubelet[2534]: E0904 17:12:30.622892 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:31.184934 kubelet[2534]: I0904 17:12:31.184597 2534 apiserver.go:52] "Watching apiserver" Sep 4 17:12:31.197837 kubelet[2534]: I0904 17:12:31.197756 2534 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 17:12:31.237832 kubelet[2534]: E0904 17:12:31.236323 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:31.237832 kubelet[2534]: E0904 
17:12:31.237137 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:31.237832 kubelet[2534]: E0904 17:12:31.237619 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:31.264443 kubelet[2534]: I0904 17:12:31.264324 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.263896513 podStartE2EDuration="1.263896513s" podCreationTimestamp="2024-09-04 17:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:12:31.263486911 +0000 UTC m=+1.144149936" watchObservedRunningTime="2024-09-04 17:12:31.263896513 +0000 UTC m=+1.144559498" Sep 4 17:12:31.280939 kubelet[2534]: I0904 17:12:31.280444 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.2804257749999999 podStartE2EDuration="1.280425775s" podCreationTimestamp="2024-09-04 17:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:12:31.273262131 +0000 UTC m=+1.153925156" watchObservedRunningTime="2024-09-04 17:12:31.280425775 +0000 UTC m=+1.161088800" Sep 4 17:12:32.237926 kubelet[2534]: E0904 17:12:32.237517 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:33.805220 kubelet[2534]: E0904 17:12:33.805169 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:34.797909 kubelet[2534]: E0904 17:12:34.797566 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:34.959618 sudo[1611]: pam_unix(sudo:session): session closed for user root Sep 4 17:12:34.961217 sshd[1608]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:34.965020 systemd[1]: sshd@6-10.0.0.7:22-10.0.0.1:56964.service: Deactivated successfully. Sep 4 17:12:34.966824 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:12:34.966996 systemd[1]: session-7.scope: Consumed 7.038s CPU time, 141.9M memory peak, 0B memory swap peak. Sep 4 17:12:34.967461 systemd-logind[1417]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:12:34.968304 systemd-logind[1417]: Removed session 7. Sep 4 17:12:38.368952 kubelet[2534]: E0904 17:12:38.368916 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:38.386455 kubelet[2534]: I0904 17:12:38.386401 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=8.386384533 podStartE2EDuration="8.386384533s" podCreationTimestamp="2024-09-04 17:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:12:31.280772777 +0000 UTC m=+1.161435842" watchObservedRunningTime="2024-09-04 17:12:38.386384533 +0000 UTC m=+8.267047558" Sep 4 17:12:39.248681 kubelet[2534]: E0904 17:12:39.248650 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:40.249704 kubelet[2534]: E0904 17:12:40.249674 2534 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:40.577838 update_engine[1423]: I0904 17:12:40.577787 1423 update_attempter.cc:509] Updating boot flags... Sep 4 17:12:40.602581 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2633) Sep 4 17:12:40.635260 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2632) Sep 4 17:12:40.677253 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2632) Sep 4 17:12:43.812195 kubelet[2534]: E0904 17:12:43.812161 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:44.804958 kubelet[2534]: E0904 17:12:44.804919 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:45.195473 kubelet[2534]: I0904 17:12:45.195310 2534 topology_manager.go:215] "Topology Admit Handler" podUID="fa8bca63-b116-4b17-8088-5431545e4e21" podNamespace="kube-system" podName="kube-proxy-nr2pz" Sep 4 17:12:45.204769 systemd[1]: Created slice kubepods-besteffort-podfa8bca63_b116_4b17_8088_5431545e4e21.slice - libcontainer container kubepods-besteffort-podfa8bca63_b116_4b17_8088_5431545e4e21.slice. Sep 4 17:12:45.209147 kubelet[2534]: I0904 17:12:45.209115 2534 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:12:45.213707 containerd[1431]: time="2024-09-04T17:12:45.213228648Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 4 17:12:45.214641 kubelet[2534]: I0904 17:12:45.214403 2534 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:12:45.299147 kubelet[2534]: I0904 17:12:45.299107 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fa8bca63-b116-4b17-8088-5431545e4e21-kube-proxy\") pod \"kube-proxy-nr2pz\" (UID: \"fa8bca63-b116-4b17-8088-5431545e4e21\") " pod="kube-system/kube-proxy-nr2pz" Sep 4 17:12:45.299147 kubelet[2534]: I0904 17:12:45.299146 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa8bca63-b116-4b17-8088-5431545e4e21-lib-modules\") pod \"kube-proxy-nr2pz\" (UID: \"fa8bca63-b116-4b17-8088-5431545e4e21\") " pod="kube-system/kube-proxy-nr2pz" Sep 4 17:12:45.299466 kubelet[2534]: I0904 17:12:45.299169 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa8bca63-b116-4b17-8088-5431545e4e21-xtables-lock\") pod \"kube-proxy-nr2pz\" (UID: \"fa8bca63-b116-4b17-8088-5431545e4e21\") " pod="kube-system/kube-proxy-nr2pz" Sep 4 17:12:45.299466 kubelet[2534]: I0904 17:12:45.299189 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4dhf\" (UniqueName: \"kubernetes.io/projected/fa8bca63-b116-4b17-8088-5431545e4e21-kube-api-access-r4dhf\") pod \"kube-proxy-nr2pz\" (UID: \"fa8bca63-b116-4b17-8088-5431545e4e21\") " pod="kube-system/kube-proxy-nr2pz" Sep 4 17:12:45.418187 kubelet[2534]: E0904 17:12:45.418146 2534 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 4 17:12:45.418187 kubelet[2534]: E0904 17:12:45.418180 2534 projected.go:200] Error preparing data for projected volume kube-api-access-r4dhf 
for pod kube-system/kube-proxy-nr2pz: configmap "kube-root-ca.crt" not found Sep 4 17:12:45.418347 kubelet[2534]: E0904 17:12:45.418264 2534 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fa8bca63-b116-4b17-8088-5431545e4e21-kube-api-access-r4dhf podName:fa8bca63-b116-4b17-8088-5431545e4e21 nodeName:}" failed. No retries permitted until 2024-09-04 17:12:45.91824236 +0000 UTC m=+15.798905385 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r4dhf" (UniqueName: "kubernetes.io/projected/fa8bca63-b116-4b17-8088-5431545e4e21-kube-api-access-r4dhf") pod "kube-proxy-nr2pz" (UID: "fa8bca63-b116-4b17-8088-5431545e4e21") : configmap "kube-root-ca.crt" not found Sep 4 17:12:45.784777 kubelet[2534]: I0904 17:12:45.784585 2534 topology_manager.go:215] "Topology Admit Handler" podUID="486c58cf-0374-47aa-966d-b76bbc7fc9e9" podNamespace="tigera-operator" podName="tigera-operator-77f994b5bb-xmj6m" Sep 4 17:12:45.793379 systemd[1]: Created slice kubepods-besteffort-pod486c58cf_0374_47aa_966d_b76bbc7fc9e9.slice - libcontainer container kubepods-besteffort-pod486c58cf_0374_47aa_966d_b76bbc7fc9e9.slice. 
Sep 4 17:12:45.802708 kubelet[2534]: I0904 17:12:45.802666 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rg7c\" (UniqueName: \"kubernetes.io/projected/486c58cf-0374-47aa-966d-b76bbc7fc9e9-kube-api-access-5rg7c\") pod \"tigera-operator-77f994b5bb-xmj6m\" (UID: \"486c58cf-0374-47aa-966d-b76bbc7fc9e9\") " pod="tigera-operator/tigera-operator-77f994b5bb-xmj6m" Sep 4 17:12:45.802708 kubelet[2534]: I0904 17:12:45.802709 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/486c58cf-0374-47aa-966d-b76bbc7fc9e9-var-lib-calico\") pod \"tigera-operator-77f994b5bb-xmj6m\" (UID: \"486c58cf-0374-47aa-966d-b76bbc7fc9e9\") " pod="tigera-operator/tigera-operator-77f994b5bb-xmj6m" Sep 4 17:12:46.098610 containerd[1431]: time="2024-09-04T17:12:46.098458043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-xmj6m,Uid:486c58cf-0374-47aa-966d-b76bbc7fc9e9,Namespace:tigera-operator,Attempt:0,}" Sep 4 17:12:46.116669 kubelet[2534]: E0904 17:12:46.115764 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:46.116793 containerd[1431]: time="2024-09-04T17:12:46.116609486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nr2pz,Uid:fa8bca63-b116-4b17-8088-5431545e4e21,Namespace:kube-system,Attempt:0,}" Sep 4 17:12:46.134504 containerd[1431]: time="2024-09-04T17:12:46.134086287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:46.134504 containerd[1431]: time="2024-09-04T17:12:46.134185127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:46.134504 containerd[1431]: time="2024-09-04T17:12:46.134196207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:46.134504 containerd[1431]: time="2024-09-04T17:12:46.134286047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:46.140277 containerd[1431]: time="2024-09-04T17:12:46.139977500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:46.140277 containerd[1431]: time="2024-09-04T17:12:46.140027860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:46.140277 containerd[1431]: time="2024-09-04T17:12:46.140043300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:46.140277 containerd[1431]: time="2024-09-04T17:12:46.140127581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:46.164430 systemd[1]: Started cri-containerd-2c5e91b81929003109fcbd8e984a91cab23702314aa9cc8bc8b05137dbbad8ca.scope - libcontainer container 2c5e91b81929003109fcbd8e984a91cab23702314aa9cc8bc8b05137dbbad8ca. Sep 4 17:12:46.165698 systemd[1]: Started cri-containerd-9fc1f0ce287c4bc99ef0a5f48632ab2e270532ed3b9f6194cc710d530d980427.scope - libcontainer container 9fc1f0ce287c4bc99ef0a5f48632ab2e270532ed3b9f6194cc710d530d980427. 
Sep 4 17:12:46.190739 containerd[1431]: time="2024-09-04T17:12:46.190607339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nr2pz,Uid:fa8bca63-b116-4b17-8088-5431545e4e21,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fc1f0ce287c4bc99ef0a5f48632ab2e270532ed3b9f6194cc710d530d980427\"" Sep 4 17:12:46.193968 kubelet[2534]: E0904 17:12:46.193842 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:46.198517 containerd[1431]: time="2024-09-04T17:12:46.198378357Z" level=info msg="CreateContainer within sandbox \"9fc1f0ce287c4bc99ef0a5f48632ab2e270532ed3b9f6194cc710d530d980427\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:12:46.204279 containerd[1431]: time="2024-09-04T17:12:46.203980970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-xmj6m,Uid:486c58cf-0374-47aa-966d-b76bbc7fc9e9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2c5e91b81929003109fcbd8e984a91cab23702314aa9cc8bc8b05137dbbad8ca\"" Sep 4 17:12:46.207838 containerd[1431]: time="2024-09-04T17:12:46.207788579Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 17:12:46.238831 containerd[1431]: time="2024-09-04T17:12:46.238765492Z" level=info msg="CreateContainer within sandbox \"9fc1f0ce287c4bc99ef0a5f48632ab2e270532ed3b9f6194cc710d530d980427\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4a1eeeb79f6dfee81148cc6bf624fe517439042d65c61716222219cb5f8cc098\"" Sep 4 17:12:46.241318 containerd[1431]: time="2024-09-04T17:12:46.241285417Z" level=info msg="StartContainer for \"4a1eeeb79f6dfee81148cc6bf624fe517439042d65c61716222219cb5f8cc098\"" Sep 4 17:12:46.272447 systemd[1]: Started cri-containerd-4a1eeeb79f6dfee81148cc6bf624fe517439042d65c61716222219cb5f8cc098.scope - libcontainer container 
4a1eeeb79f6dfee81148cc6bf624fe517439042d65c61716222219cb5f8cc098. Sep 4 17:12:46.304023 containerd[1431]: time="2024-09-04T17:12:46.303978524Z" level=info msg="StartContainer for \"4a1eeeb79f6dfee81148cc6bf624fe517439042d65c61716222219cb5f8cc098\" returns successfully" Sep 4 17:12:47.106664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount531721583.mount: Deactivated successfully. Sep 4 17:12:47.275363 kubelet[2534]: E0904 17:12:47.275114 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:47.294621 kubelet[2534]: I0904 17:12:47.294563 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nr2pz" podStartSLOduration=2.293786239 podStartE2EDuration="2.293786239s" podCreationTimestamp="2024-09-04 17:12:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:12:47.293492558 +0000 UTC m=+17.174155583" watchObservedRunningTime="2024-09-04 17:12:47.293786239 +0000 UTC m=+17.174449304" Sep 4 17:12:47.435882 containerd[1431]: time="2024-09-04T17:12:47.435493110Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:47.436323 containerd[1431]: time="2024-09-04T17:12:47.436149391Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485875" Sep 4 17:12:47.437816 containerd[1431]: time="2024-09-04T17:12:47.437681154Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:47.441818 containerd[1431]: time="2024-09-04T17:12:47.441781323Z" level=info msg="ImageCreate event 
name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:47.443043 containerd[1431]: time="2024-09-04T17:12:47.442469805Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 1.234637666s" Sep 4 17:12:47.443043 containerd[1431]: time="2024-09-04T17:12:47.442504445Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\"" Sep 4 17:12:47.450455 containerd[1431]: time="2024-09-04T17:12:47.450425662Z" level=info msg="CreateContainer within sandbox \"2c5e91b81929003109fcbd8e984a91cab23702314aa9cc8bc8b05137dbbad8ca\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 17:12:47.460468 containerd[1431]: time="2024-09-04T17:12:47.460340924Z" level=info msg="CreateContainer within sandbox \"2c5e91b81929003109fcbd8e984a91cab23702314aa9cc8bc8b05137dbbad8ca\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2d3fd765489171cb5c1f7e5a4b7d81828007003a4d218000e54af3b24ff612a1\"" Sep 4 17:12:47.460876 containerd[1431]: time="2024-09-04T17:12:47.460816005Z" level=info msg="StartContainer for \"2d3fd765489171cb5c1f7e5a4b7d81828007003a4d218000e54af3b24ff612a1\"" Sep 4 17:12:47.490372 systemd[1]: Started cri-containerd-2d3fd765489171cb5c1f7e5a4b7d81828007003a4d218000e54af3b24ff612a1.scope - libcontainer container 2d3fd765489171cb5c1f7e5a4b7d81828007003a4d218000e54af3b24ff612a1. 
Sep 4 17:12:47.537276 containerd[1431]: time="2024-09-04T17:12:47.537116293Z" level=info msg="StartContainer for \"2d3fd765489171cb5c1f7e5a4b7d81828007003a4d218000e54af3b24ff612a1\" returns successfully" Sep 4 17:12:48.282797 kubelet[2534]: E0904 17:12:48.282672 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:48.303926 kubelet[2534]: I0904 17:12:48.303757 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-77f994b5bb-xmj6m" podStartSLOduration=2.061839412 podStartE2EDuration="3.303739494s" podCreationTimestamp="2024-09-04 17:12:45 +0000 UTC" firstStartedPulling="2024-09-04 17:12:46.205690334 +0000 UTC m=+16.086353359" lastFinishedPulling="2024-09-04 17:12:47.447590416 +0000 UTC m=+17.328253441" observedRunningTime="2024-09-04 17:12:48.303693894 +0000 UTC m=+18.184356919" watchObservedRunningTime="2024-09-04 17:12:48.303739494 +0000 UTC m=+18.184402519" Sep 4 17:12:50.910779 kubelet[2534]: I0904 17:12:50.910297 2534 topology_manager.go:215] "Topology Admit Handler" podUID="fefab1b6-d8ef-48bc-9201-4d9034f1dc10" podNamespace="calico-system" podName="calico-typha-8668b445cc-9tgfd" Sep 4 17:12:50.934544 systemd[1]: Created slice kubepods-besteffort-podfefab1b6_d8ef_48bc_9201_4d9034f1dc10.slice - libcontainer container kubepods-besteffort-podfefab1b6_d8ef_48bc_9201_4d9034f1dc10.slice. Sep 4 17:12:50.971801 kubelet[2534]: I0904 17:12:50.969361 2534 topology_manager.go:215] "Topology Admit Handler" podUID="00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9" podNamespace="calico-system" podName="calico-node-s8m2c" Sep 4 17:12:50.983044 systemd[1]: Created slice kubepods-besteffort-pod00c5ccc5_a0a9_40c3_ad00_4cfb45e5e9a9.slice - libcontainer container kubepods-besteffort-pod00c5ccc5_a0a9_40c3_ad00_4cfb45e5e9a9.slice. 
Sep 4 17:12:51.039424 kubelet[2534]: I0904 17:12:51.039371 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8dw4\" (UniqueName: \"kubernetes.io/projected/fefab1b6-d8ef-48bc-9201-4d9034f1dc10-kube-api-access-z8dw4\") pod \"calico-typha-8668b445cc-9tgfd\" (UID: \"fefab1b6-d8ef-48bc-9201-4d9034f1dc10\") " pod="calico-system/calico-typha-8668b445cc-9tgfd" Sep 4 17:12:51.039424 kubelet[2534]: I0904 17:12:51.039437 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9-policysync\") pod \"calico-node-s8m2c\" (UID: \"00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9\") " pod="calico-system/calico-node-s8m2c" Sep 4 17:12:51.039596 kubelet[2534]: I0904 17:12:51.039464 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fefab1b6-d8ef-48bc-9201-4d9034f1dc10-tigera-ca-bundle\") pod \"calico-typha-8668b445cc-9tgfd\" (UID: \"fefab1b6-d8ef-48bc-9201-4d9034f1dc10\") " pod="calico-system/calico-typha-8668b445cc-9tgfd" Sep 4 17:12:51.039596 kubelet[2534]: I0904 17:12:51.039480 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9-tigera-ca-bundle\") pod \"calico-node-s8m2c\" (UID: \"00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9\") " pod="calico-system/calico-node-s8m2c" Sep 4 17:12:51.039596 kubelet[2534]: I0904 17:12:51.039502 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9-node-certs\") pod \"calico-node-s8m2c\" (UID: \"00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9\") " pod="calico-system/calico-node-s8m2c" Sep 4 
17:12:51.039596 kubelet[2534]: I0904 17:12:51.039522 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fefab1b6-d8ef-48bc-9201-4d9034f1dc10-typha-certs\") pod \"calico-typha-8668b445cc-9tgfd\" (UID: \"fefab1b6-d8ef-48bc-9201-4d9034f1dc10\") " pod="calico-system/calico-typha-8668b445cc-9tgfd" Sep 4 17:12:51.039596 kubelet[2534]: I0904 17:12:51.039547 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9-lib-modules\") pod \"calico-node-s8m2c\" (UID: \"00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9\") " pod="calico-system/calico-node-s8m2c" Sep 4 17:12:51.039729 kubelet[2534]: I0904 17:12:51.039563 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9-xtables-lock\") pod \"calico-node-s8m2c\" (UID: \"00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9\") " pod="calico-system/calico-node-s8m2c" Sep 4 17:12:51.086257 kubelet[2534]: I0904 17:12:51.085064 2534 topology_manager.go:215] "Topology Admit Handler" podUID="dbe20c7a-4d25-4a1c-ab36-3d1bda88df08" podNamespace="calico-system" podName="csi-node-driver-sjnkk" Sep 4 17:12:51.086257 kubelet[2534]: E0904 17:12:51.085362 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sjnkk" podUID="dbe20c7a-4d25-4a1c-ab36-3d1bda88df08" Sep 4 17:12:51.141846 kubelet[2534]: I0904 17:12:51.140888 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9-var-lib-calico\") pod \"calico-node-s8m2c\" (UID: \"00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9\") " pod="calico-system/calico-node-s8m2c" Sep 4 17:12:51.143422 kubelet[2534]: I0904 17:12:51.142081 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9-cni-log-dir\") pod \"calico-node-s8m2c\" (UID: \"00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9\") " pod="calico-system/calico-node-s8m2c" Sep 4 17:12:51.143422 kubelet[2534]: I0904 17:12:51.142125 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wtbz\" (UniqueName: \"kubernetes.io/projected/00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9-kube-api-access-9wtbz\") pod \"calico-node-s8m2c\" (UID: \"00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9\") " pod="calico-system/calico-node-s8m2c" Sep 4 17:12:51.143422 kubelet[2534]: I0904 17:12:51.142148 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9-var-run-calico\") pod \"calico-node-s8m2c\" (UID: \"00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9\") " pod="calico-system/calico-node-s8m2c" Sep 4 17:12:51.143422 kubelet[2534]: I0904 17:12:51.142164 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dbe20c7a-4d25-4a1c-ab36-3d1bda88df08-varrun\") pod \"csi-node-driver-sjnkk\" (UID: \"dbe20c7a-4d25-4a1c-ab36-3d1bda88df08\") " pod="calico-system/csi-node-driver-sjnkk" Sep 4 17:12:51.143422 kubelet[2534]: I0904 17:12:51.142185 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/dbe20c7a-4d25-4a1c-ab36-3d1bda88df08-socket-dir\") pod \"csi-node-driver-sjnkk\" (UID: \"dbe20c7a-4d25-4a1c-ab36-3d1bda88df08\") " pod="calico-system/csi-node-driver-sjnkk" Sep 4 17:12:51.143613 kubelet[2534]: I0904 17:12:51.142221 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gkhv\" (UniqueName: \"kubernetes.io/projected/dbe20c7a-4d25-4a1c-ab36-3d1bda88df08-kube-api-access-2gkhv\") pod \"csi-node-driver-sjnkk\" (UID: \"dbe20c7a-4d25-4a1c-ab36-3d1bda88df08\") " pod="calico-system/csi-node-driver-sjnkk" Sep 4 17:12:51.143613 kubelet[2534]: I0904 17:12:51.142274 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9-cni-bin-dir\") pod \"calico-node-s8m2c\" (UID: \"00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9\") " pod="calico-system/calico-node-s8m2c" Sep 4 17:12:51.143613 kubelet[2534]: I0904 17:12:51.142289 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dbe20c7a-4d25-4a1c-ab36-3d1bda88df08-registration-dir\") pod \"csi-node-driver-sjnkk\" (UID: \"dbe20c7a-4d25-4a1c-ab36-3d1bda88df08\") " pod="calico-system/csi-node-driver-sjnkk" Sep 4 17:12:51.143613 kubelet[2534]: I0904 17:12:51.142323 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9-cni-net-dir\") pod \"calico-node-s8m2c\" (UID: \"00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9\") " pod="calico-system/calico-node-s8m2c" Sep 4 17:12:51.143613 kubelet[2534]: I0904 17:12:51.142365 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9-flexvol-driver-host\") pod \"calico-node-s8m2c\" (UID: \"00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9\") " pod="calico-system/calico-node-s8m2c" Sep 4 17:12:51.143716 kubelet[2534]: I0904 17:12:51.142383 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dbe20c7a-4d25-4a1c-ab36-3d1bda88df08-kubelet-dir\") pod \"csi-node-driver-sjnkk\" (UID: \"dbe20c7a-4d25-4a1c-ab36-3d1bda88df08\") " pod="calico-system/csi-node-driver-sjnkk" Sep 4 17:12:51.240884 kubelet[2534]: E0904 17:12:51.240770 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:51.241577 containerd[1431]: time="2024-09-04T17:12:51.241431033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8668b445cc-9tgfd,Uid:fefab1b6-d8ef-48bc-9201-4d9034f1dc10,Namespace:calico-system,Attempt:0,}" Sep 4 17:12:51.247816 kubelet[2534]: E0904 17:12:51.247301 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.247816 kubelet[2534]: W0904 17:12:51.247326 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.247816 kubelet[2534]: E0904 17:12:51.247385 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:51.250277 kubelet[2534]: E0904 17:12:51.250156 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.250277 kubelet[2534]: W0904 17:12:51.250189 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.250277 kubelet[2534]: E0904 17:12:51.250228 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:51.251196 kubelet[2534]: E0904 17:12:51.250739 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.251196 kubelet[2534]: W0904 17:12:51.250761 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.251196 kubelet[2534]: E0904 17:12:51.250780 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:51.251196 kubelet[2534]: E0904 17:12:51.251035 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.251196 kubelet[2534]: W0904 17:12:51.251045 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.251196 kubelet[2534]: E0904 17:12:51.251133 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:51.251403 kubelet[2534]: E0904 17:12:51.251289 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.251403 kubelet[2534]: W0904 17:12:51.251299 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.251403 kubelet[2534]: E0904 17:12:51.251355 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:51.252447 kubelet[2534]: E0904 17:12:51.251614 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.252447 kubelet[2534]: W0904 17:12:51.251631 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.252447 kubelet[2534]: E0904 17:12:51.251647 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:51.252447 kubelet[2534]: E0904 17:12:51.252124 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.252447 kubelet[2534]: W0904 17:12:51.252144 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.252447 kubelet[2534]: E0904 17:12:51.252169 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:51.252447 kubelet[2534]: E0904 17:12:51.252388 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.252447 kubelet[2534]: W0904 17:12:51.252398 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.252447 kubelet[2534]: E0904 17:12:51.252412 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:51.256248 kubelet[2534]: E0904 17:12:51.255092 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.256248 kubelet[2534]: W0904 17:12:51.255114 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.256248 kubelet[2534]: E0904 17:12:51.255169 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:51.256248 kubelet[2534]: E0904 17:12:51.255858 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.256248 kubelet[2534]: W0904 17:12:51.255955 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.256248 kubelet[2534]: E0904 17:12:51.256132 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:51.258083 kubelet[2534]: E0904 17:12:51.256605 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.258083 kubelet[2534]: W0904 17:12:51.256673 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.258083 kubelet[2534]: E0904 17:12:51.256758 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:51.258083 kubelet[2534]: E0904 17:12:51.257007 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.258083 kubelet[2534]: W0904 17:12:51.257019 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.258083 kubelet[2534]: E0904 17:12:51.257077 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:51.258083 kubelet[2534]: E0904 17:12:51.257321 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.258083 kubelet[2534]: W0904 17:12:51.257334 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.258083 kubelet[2534]: E0904 17:12:51.257432 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:51.258083 kubelet[2534]: E0904 17:12:51.257608 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.258411 kubelet[2534]: W0904 17:12:51.257618 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.258411 kubelet[2534]: E0904 17:12:51.257674 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:51.258411 kubelet[2534]: E0904 17:12:51.257756 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.258411 kubelet[2534]: W0904 17:12:51.257764 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.258411 kubelet[2534]: E0904 17:12:51.257782 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:51.258411 kubelet[2534]: E0904 17:12:51.257980 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.258411 kubelet[2534]: W0904 17:12:51.257990 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.258411 kubelet[2534]: E0904 17:12:51.258006 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:51.258411 kubelet[2534]: E0904 17:12:51.258401 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.258592 kubelet[2534]: W0904 17:12:51.258416 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.258592 kubelet[2534]: E0904 17:12:51.258434 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:51.259142 kubelet[2534]: E0904 17:12:51.258632 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.259142 kubelet[2534]: W0904 17:12:51.258646 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.259142 kubelet[2534]: E0904 17:12:51.258670 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:51.259142 kubelet[2534]: E0904 17:12:51.258878 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.259142 kubelet[2534]: W0904 17:12:51.258888 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.259142 kubelet[2534]: E0904 17:12:51.258908 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:51.259142 kubelet[2534]: E0904 17:12:51.259085 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.259142 kubelet[2534]: W0904 17:12:51.259094 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.259142 kubelet[2534]: E0904 17:12:51.259103 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:51.260226 kubelet[2534]: E0904 17:12:51.259506 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.260226 kubelet[2534]: W0904 17:12:51.259522 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.260226 kubelet[2534]: E0904 17:12:51.259536 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:51.263053 kubelet[2534]: E0904 17:12:51.263018 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.263053 kubelet[2534]: W0904 17:12:51.263046 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.263194 kubelet[2534]: E0904 17:12:51.263068 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:51.276647 kubelet[2534]: E0904 17:12:51.276510 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:51.276816 kubelet[2534]: W0904 17:12:51.276535 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:51.276816 kubelet[2534]: E0904 17:12:51.276723 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:51.287486 kubelet[2534]: E0904 17:12:51.287122 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:51.287715 containerd[1431]: time="2024-09-04T17:12:51.284736147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:51.287715 containerd[1431]: time="2024-09-04T17:12:51.285331108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:51.287715 containerd[1431]: time="2024-09-04T17:12:51.285352068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:51.287715 containerd[1431]: time="2024-09-04T17:12:51.285495348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:51.287832 containerd[1431]: time="2024-09-04T17:12:51.287709112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s8m2c,Uid:00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9,Namespace:calico-system,Attempt:0,}" Sep 4 17:12:51.308504 systemd[1]: Started cri-containerd-81bb45ba7fa7587ab55b59b9be5a97e759c77d00199abd8fab167a9fc09135d2.scope - libcontainer container 81bb45ba7fa7587ab55b59b9be5a97e759c77d00199abd8fab167a9fc09135d2. Sep 4 17:12:51.343753 containerd[1431]: time="2024-09-04T17:12:51.343640567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:51.343753 containerd[1431]: time="2024-09-04T17:12:51.343702407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:51.343753 containerd[1431]: time="2024-09-04T17:12:51.343718887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:51.344044 containerd[1431]: time="2024-09-04T17:12:51.343999887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:51.368876 systemd[1]: Started cri-containerd-339980c07d3b3eb0c0aaa8eb815cefd818f521ebf2c8b13d65b77aa00deab4e6.scope - libcontainer container 339980c07d3b3eb0c0aaa8eb815cefd818f521ebf2c8b13d65b77aa00deab4e6. 
Sep 4 17:12:51.370350 containerd[1431]: time="2024-09-04T17:12:51.370100652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8668b445cc-9tgfd,Uid:fefab1b6-d8ef-48bc-9201-4d9034f1dc10,Namespace:calico-system,Attempt:0,} returns sandbox id \"81bb45ba7fa7587ab55b59b9be5a97e759c77d00199abd8fab167a9fc09135d2\"" Sep 4 17:12:51.371903 kubelet[2534]: E0904 17:12:51.371311 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:51.372779 containerd[1431]: time="2024-09-04T17:12:51.372735176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:12:51.407580 containerd[1431]: time="2024-09-04T17:12:51.407453315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s8m2c,Uid:00c5ccc5-a0a9-40c3-ad00-4cfb45e5e9a9,Namespace:calico-system,Attempt:0,} returns sandbox id \"339980c07d3b3eb0c0aaa8eb815cefd818f521ebf2c8b13d65b77aa00deab4e6\"" Sep 4 17:12:51.408363 kubelet[2534]: E0904 17:12:51.408308 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:52.796774 containerd[1431]: time="2024-09-04T17:12:52.796716586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:52.797256 containerd[1431]: time="2024-09-04T17:12:52.797072186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Sep 4 17:12:52.798443 containerd[1431]: time="2024-09-04T17:12:52.798400469Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:52.800392 containerd[1431]: 
time="2024-09-04T17:12:52.800349512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:52.801837 containerd[1431]: time="2024-09-04T17:12:52.801666594Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 1.428882418s" Sep 4 17:12:52.801837 containerd[1431]: time="2024-09-04T17:12:52.801708234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Sep 4 17:12:52.803080 containerd[1431]: time="2024-09-04T17:12:52.803041076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:12:52.828580 containerd[1431]: time="2024-09-04T17:12:52.828532236Z" level=info msg="CreateContainer within sandbox \"81bb45ba7fa7587ab55b59b9be5a97e759c77d00199abd8fab167a9fc09135d2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:12:52.841006 containerd[1431]: time="2024-09-04T17:12:52.840947776Z" level=info msg="CreateContainer within sandbox \"81bb45ba7fa7587ab55b59b9be5a97e759c77d00199abd8fab167a9fc09135d2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"08faa2f4311e76b236bd0cc8c93d16de2f96d3847618b4f9aaf24efa49b6c68e\"" Sep 4 17:12:52.841655 containerd[1431]: time="2024-09-04T17:12:52.841527777Z" level=info msg="StartContainer for \"08faa2f4311e76b236bd0cc8c93d16de2f96d3847618b4f9aaf24efa49b6c68e\"" Sep 4 17:12:52.879477 systemd[1]: Started cri-containerd-08faa2f4311e76b236bd0cc8c93d16de2f96d3847618b4f9aaf24efa49b6c68e.scope - 
libcontainer container 08faa2f4311e76b236bd0cc8c93d16de2f96d3847618b4f9aaf24efa49b6c68e. Sep 4 17:12:52.929979 containerd[1431]: time="2024-09-04T17:12:52.929931598Z" level=info msg="StartContainer for \"08faa2f4311e76b236bd0cc8c93d16de2f96d3847618b4f9aaf24efa49b6c68e\" returns successfully" Sep 4 17:12:53.214252 kubelet[2534]: E0904 17:12:53.213510 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sjnkk" podUID="dbe20c7a-4d25-4a1c-ab36-3d1bda88df08" Sep 4 17:12:53.301093 kubelet[2534]: E0904 17:12:53.301057 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:53.354837 kubelet[2534]: E0904 17:12:53.354791 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.354837 kubelet[2534]: W0904 17:12:53.354819 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.354837 kubelet[2534]: E0904 17:12:53.354844 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:53.355112 kubelet[2534]: E0904 17:12:53.355086 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.355112 kubelet[2534]: W0904 17:12:53.355101 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.355112 kubelet[2534]: E0904 17:12:53.355112 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:53.355329 kubelet[2534]: E0904 17:12:53.355315 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.355329 kubelet[2534]: W0904 17:12:53.355327 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.355397 kubelet[2534]: E0904 17:12:53.355344 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:53.355519 kubelet[2534]: E0904 17:12:53.355495 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.355519 kubelet[2534]: W0904 17:12:53.355509 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.355519 kubelet[2534]: E0904 17:12:53.355518 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:53.355685 kubelet[2534]: E0904 17:12:53.355674 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.355685 kubelet[2534]: W0904 17:12:53.355684 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.355729 kubelet[2534]: E0904 17:12:53.355695 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:53.355826 kubelet[2534]: E0904 17:12:53.355817 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.355852 kubelet[2534]: W0904 17:12:53.355826 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.355852 kubelet[2534]: E0904 17:12:53.355834 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:53.355967 kubelet[2534]: E0904 17:12:53.355957 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.355991 kubelet[2534]: W0904 17:12:53.355968 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.355991 kubelet[2534]: E0904 17:12:53.355975 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:53.356111 kubelet[2534]: E0904 17:12:53.356102 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.356135 kubelet[2534]: W0904 17:12:53.356111 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.356135 kubelet[2534]: E0904 17:12:53.356118 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:53.356279 kubelet[2534]: E0904 17:12:53.356268 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.356312 kubelet[2534]: W0904 17:12:53.356281 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.356312 kubelet[2534]: E0904 17:12:53.356290 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:53.356448 kubelet[2534]: E0904 17:12:53.356436 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.356448 kubelet[2534]: W0904 17:12:53.356446 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.356499 kubelet[2534]: E0904 17:12:53.356454 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:53.356587 kubelet[2534]: E0904 17:12:53.356577 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.356611 kubelet[2534]: W0904 17:12:53.356587 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.356611 kubelet[2534]: E0904 17:12:53.356595 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:53.356733 kubelet[2534]: E0904 17:12:53.356723 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.356758 kubelet[2534]: W0904 17:12:53.356733 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.356758 kubelet[2534]: E0904 17:12:53.356741 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:53.356886 kubelet[2534]: E0904 17:12:53.356876 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.356912 kubelet[2534]: W0904 17:12:53.356886 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.356912 kubelet[2534]: E0904 17:12:53.356895 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:53.357096 kubelet[2534]: E0904 17:12:53.357062 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.357096 kubelet[2534]: W0904 17:12:53.357073 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.357096 kubelet[2534]: E0904 17:12:53.357081 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:53.357233 kubelet[2534]: E0904 17:12:53.357223 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.357233 kubelet[2534]: W0904 17:12:53.357232 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.357287 kubelet[2534]: E0904 17:12:53.357241 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:53.366769 kubelet[2534]: E0904 17:12:53.366730 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.366769 kubelet[2534]: W0904 17:12:53.366753 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.366769 kubelet[2534]: E0904 17:12:53.366772 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:53.367002 kubelet[2534]: E0904 17:12:53.366980 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.367002 kubelet[2534]: W0904 17:12:53.366991 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.367002 kubelet[2534]: E0904 17:12:53.367001 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:53.367192 kubelet[2534]: E0904 17:12:53.367171 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.367192 kubelet[2534]: W0904 17:12:53.367182 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.367192 kubelet[2534]: E0904 17:12:53.367191 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:53.367543 kubelet[2534]: E0904 17:12:53.367523 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.367665 kubelet[2534]: W0904 17:12:53.367603 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.367665 kubelet[2534]: E0904 17:12:53.367632 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:53.367896 kubelet[2534]: E0904 17:12:53.367878 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.367896 kubelet[2534]: W0904 17:12:53.367896 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.367956 kubelet[2534]: E0904 17:12:53.367915 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:53.368112 kubelet[2534]: E0904 17:12:53.368099 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.368112 kubelet[2534]: W0904 17:12:53.368109 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.368175 kubelet[2534]: E0904 17:12:53.368125 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:53.368296 kubelet[2534]: E0904 17:12:53.368285 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.368296 kubelet[2534]: W0904 17:12:53.368295 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.368458 kubelet[2534]: E0904 17:12:53.368308 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:53.368827 kubelet[2534]: E0904 17:12:53.368491 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.368827 kubelet[2534]: W0904 17:12:53.368503 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.368827 kubelet[2534]: E0904 17:12:53.368529 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:53.368827 kubelet[2534]: E0904 17:12:53.368646 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.368827 kubelet[2534]: W0904 17:12:53.368653 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.368827 kubelet[2534]: E0904 17:12:53.368681 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:53.368827 kubelet[2534]: E0904 17:12:53.368818 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.368827 kubelet[2534]: W0904 17:12:53.368829 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.368827 kubelet[2534]: E0904 17:12:53.368844 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:53.369243 kubelet[2534]: E0904 17:12:53.369008 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.369243 kubelet[2534]: W0904 17:12:53.369015 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.369243 kubelet[2534]: E0904 17:12:53.369037 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:53.369243 kubelet[2534]: E0904 17:12:53.369222 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.369243 kubelet[2534]: W0904 17:12:53.369230 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.369243 kubelet[2534]: E0904 17:12:53.369239 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:53.369861 kubelet[2534]: E0904 17:12:53.369715 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.369861 kubelet[2534]: W0904 17:12:53.369734 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.369861 kubelet[2534]: E0904 17:12:53.369755 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:53.370472 kubelet[2534]: E0904 17:12:53.370007 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.370472 kubelet[2534]: W0904 17:12:53.370020 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.370472 kubelet[2534]: E0904 17:12:53.370054 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:53.370472 kubelet[2534]: E0904 17:12:53.370317 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.370472 kubelet[2534]: W0904 17:12:53.370330 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.370472 kubelet[2534]: E0904 17:12:53.370375 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:53.370744 kubelet[2534]: E0904 17:12:53.370682 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.370875 kubelet[2534]: W0904 17:12:53.370858 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.371095 kubelet[2534]: E0904 17:12:53.370939 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:53.371243 kubelet[2534]: E0904 17:12:53.371226 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.371957 kubelet[2534]: W0904 17:12:53.371288 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.371957 kubelet[2534]: E0904 17:12:53.371308 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:53.372179 kubelet[2534]: E0904 17:12:53.372158 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:53.372322 kubelet[2534]: W0904 17:12:53.372303 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:53.372407 kubelet[2534]: E0904 17:12:53.372393 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:53.817118 containerd[1431]: time="2024-09-04T17:12:53.817065367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:53.817573 containerd[1431]: time="2024-09-04T17:12:53.817542927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Sep 4 17:12:53.819042 containerd[1431]: time="2024-09-04T17:12:53.818983929Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:53.821331 containerd[1431]: time="2024-09-04T17:12:53.821288813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:53.822795 containerd[1431]: time="2024-09-04T17:12:53.822758655Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.019679179s" Sep 4 17:12:53.822830 containerd[1431]: time="2024-09-04T17:12:53.822796535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Sep 4 17:12:53.824838 containerd[1431]: time="2024-09-04T17:12:53.824798338Z" level=info msg="CreateContainer within sandbox \"339980c07d3b3eb0c0aaa8eb815cefd818f521ebf2c8b13d65b77aa00deab4e6\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:12:53.838636 containerd[1431]: time="2024-09-04T17:12:53.838586559Z" level=info msg="CreateContainer within sandbox \"339980c07d3b3eb0c0aaa8eb815cefd818f521ebf2c8b13d65b77aa00deab4e6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ba1e6f0b25d9418a8222cde3028c31dabf593290897377f04c9649a6810ce766\"" Sep 4 17:12:53.839309 containerd[1431]: time="2024-09-04T17:12:53.839285240Z" level=info msg="StartContainer for \"ba1e6f0b25d9418a8222cde3028c31dabf593290897377f04c9649a6810ce766\"" Sep 4 17:12:53.873418 systemd[1]: Started cri-containerd-ba1e6f0b25d9418a8222cde3028c31dabf593290897377f04c9649a6810ce766.scope - libcontainer container ba1e6f0b25d9418a8222cde3028c31dabf593290897377f04c9649a6810ce766. Sep 4 17:12:53.908613 containerd[1431]: time="2024-09-04T17:12:53.908547463Z" level=info msg="StartContainer for \"ba1e6f0b25d9418a8222cde3028c31dabf593290897377f04c9649a6810ce766\" returns successfully" Sep 4 17:12:53.928626 systemd[1]: cri-containerd-ba1e6f0b25d9418a8222cde3028c31dabf593290897377f04c9649a6810ce766.scope: Deactivated successfully. Sep 4 17:12:54.006632 containerd[1431]: time="2024-09-04T17:12:54.001533481Z" level=info msg="shim disconnected" id=ba1e6f0b25d9418a8222cde3028c31dabf593290897377f04c9649a6810ce766 namespace=k8s.io Sep 4 17:12:54.006900 containerd[1431]: time="2024-09-04T17:12:54.006634569Z" level=warning msg="cleaning up after shim disconnected" id=ba1e6f0b25d9418a8222cde3028c31dabf593290897377f04c9649a6810ce766 namespace=k8s.io Sep 4 17:12:54.006900 containerd[1431]: time="2024-09-04T17:12:54.006653769Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:12:54.165713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba1e6f0b25d9418a8222cde3028c31dabf593290897377f04c9649a6810ce766-rootfs.mount: Deactivated successfully. 
Sep 4 17:12:54.304012 kubelet[2534]: E0904 17:12:54.303742 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:54.305934 containerd[1431]: time="2024-09-04T17:12:54.305195466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:12:54.311177 kubelet[2534]: I0904 17:12:54.311134 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:12:54.312230 kubelet[2534]: E0904 17:12:54.312185 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:12:54.318754 kubelet[2534]: I0904 17:12:54.318694 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8668b445cc-9tgfd" podStartSLOduration=2.888411944 podStartE2EDuration="4.318676164s" podCreationTimestamp="2024-09-04 17:12:50 +0000 UTC" firstStartedPulling="2024-09-04 17:12:51.372280015 +0000 UTC m=+21.252943040" lastFinishedPulling="2024-09-04 17:12:52.802544235 +0000 UTC m=+22.683207260" observedRunningTime="2024-09-04 17:12:53.310913852 +0000 UTC m=+23.191576877" watchObservedRunningTime="2024-09-04 17:12:54.318676164 +0000 UTC m=+24.199339189" Sep 4 17:12:55.213429 kubelet[2534]: E0904 17:12:55.213375 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sjnkk" podUID="dbe20c7a-4d25-4a1c-ab36-3d1bda88df08" Sep 4 17:12:57.213781 kubelet[2534]: E0904 17:12:57.213721 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sjnkk" podUID="dbe20c7a-4d25-4a1c-ab36-3d1bda88df08" Sep 4 17:12:57.290290 containerd[1431]: time="2024-09-04T17:12:57.290236348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:57.291366 containerd[1431]: time="2024-09-04T17:12:57.291108229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Sep 4 17:12:57.291970 containerd[1431]: time="2024-09-04T17:12:57.291902910Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:57.294973 containerd[1431]: time="2024-09-04T17:12:57.294712193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:57.296134 containerd[1431]: time="2024-09-04T17:12:57.296097915Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 2.990849969s" Sep 4 17:12:57.296343 containerd[1431]: time="2024-09-04T17:12:57.296254475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Sep 4 17:12:57.298557 containerd[1431]: time="2024-09-04T17:12:57.298517838Z" level=info msg="CreateContainer within sandbox \"339980c07d3b3eb0c0aaa8eb815cefd818f521ebf2c8b13d65b77aa00deab4e6\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:12:57.313419 containerd[1431]: time="2024-09-04T17:12:57.313358015Z" level=info msg="CreateContainer within sandbox \"339980c07d3b3eb0c0aaa8eb815cefd818f521ebf2c8b13d65b77aa00deab4e6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"82833e3544a2c0df407e20d41f815fee578269b9cd9f6f6fd9ef48753c73d7f7\"" Sep 4 17:12:57.314128 containerd[1431]: time="2024-09-04T17:12:57.313880775Z" level=info msg="StartContainer for \"82833e3544a2c0df407e20d41f815fee578269b9cd9f6f6fd9ef48753c73d7f7\"" Sep 4 17:12:57.338866 systemd[1]: run-containerd-runc-k8s.io-82833e3544a2c0df407e20d41f815fee578269b9cd9f6f6fd9ef48753c73d7f7-runc.Nt3LgC.mount: Deactivated successfully. Sep 4 17:12:57.348397 systemd[1]: Started cri-containerd-82833e3544a2c0df407e20d41f815fee578269b9cd9f6f6fd9ef48753c73d7f7.scope - libcontainer container 82833e3544a2c0df407e20d41f815fee578269b9cd9f6f6fd9ef48753c73d7f7. Sep 4 17:12:57.380858 containerd[1431]: time="2024-09-04T17:12:57.380813692Z" level=info msg="StartContainer for \"82833e3544a2c0df407e20d41f815fee578269b9cd9f6f6fd9ef48753c73d7f7\" returns successfully" Sep 4 17:12:57.883001 systemd[1]: cri-containerd-82833e3544a2c0df407e20d41f815fee578269b9cd9f6f6fd9ef48753c73d7f7.scope: Deactivated successfully. 
Sep 4 17:12:57.916136 containerd[1431]: time="2024-09-04T17:12:57.916030988Z" level=info msg="shim disconnected" id=82833e3544a2c0df407e20d41f815fee578269b9cd9f6f6fd9ef48753c73d7f7 namespace=k8s.io Sep 4 17:12:57.916136 containerd[1431]: time="2024-09-04T17:12:57.916086588Z" level=warning msg="cleaning up after shim disconnected" id=82833e3544a2c0df407e20d41f815fee578269b9cd9f6f6fd9ef48753c73d7f7 namespace=k8s.io Sep 4 17:12:57.916136 containerd[1431]: time="2024-09-04T17:12:57.916096228Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:12:57.976527 kubelet[2534]: I0904 17:12:57.976487 2534 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 17:12:57.998145 kubelet[2534]: I0904 17:12:57.998096 2534 topology_manager.go:215] "Topology Admit Handler" podUID="ce4d0287-3bcd-46a6-ab21-8867f52fec21" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mhfkm" Sep 4 17:12:58.001884 kubelet[2534]: I0904 17:12:58.001611 2534 topology_manager.go:215] "Topology Admit Handler" podUID="c07d999d-3f10-4208-b55b-998487be89b5" podNamespace="calico-system" podName="calico-kube-controllers-58c64dfb6f-8tpfw" Sep 4 17:12:58.004302 kubelet[2534]: I0904 17:12:58.004261 2534 topology_manager.go:215] "Topology Admit Handler" podUID="f667e6cc-a435-4be3-9a6f-98b7f2fbb1a8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vzc8x" Sep 4 17:12:58.016619 systemd[1]: Created slice kubepods-burstable-podce4d0287_3bcd_46a6_ab21_8867f52fec21.slice - libcontainer container kubepods-burstable-podce4d0287_3bcd_46a6_ab21_8867f52fec21.slice. Sep 4 17:12:58.021522 systemd[1]: Created slice kubepods-besteffort-podc07d999d_3f10_4208_b55b_998487be89b5.slice - libcontainer container kubepods-besteffort-podc07d999d_3f10_4208_b55b_998487be89b5.slice. Sep 4 17:12:58.028060 systemd[1]: Created slice kubepods-burstable-podf667e6cc_a435_4be3_9a6f_98b7f2fbb1a8.slice - libcontainer container kubepods-burstable-podf667e6cc_a435_4be3_9a6f_98b7f2fbb1a8.slice. 
Sep 4 17:12:58.102736 kubelet[2534]: I0904 17:12:58.102691 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce4d0287-3bcd-46a6-ab21-8867f52fec21-config-volume\") pod \"coredns-7db6d8ff4d-mhfkm\" (UID: \"ce4d0287-3bcd-46a6-ab21-8867f52fec21\") " pod="kube-system/coredns-7db6d8ff4d-mhfkm"
Sep 4 17:12:58.102736 kubelet[2534]: I0904 17:12:58.102733 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk2v7\" (UniqueName: \"kubernetes.io/projected/ce4d0287-3bcd-46a6-ab21-8867f52fec21-kube-api-access-nk2v7\") pod \"coredns-7db6d8ff4d-mhfkm\" (UID: \"ce4d0287-3bcd-46a6-ab21-8867f52fec21\") " pod="kube-system/coredns-7db6d8ff4d-mhfkm"
Sep 4 17:12:58.102907 kubelet[2534]: I0904 17:12:58.102762 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c07d999d-3f10-4208-b55b-998487be89b5-tigera-ca-bundle\") pod \"calico-kube-controllers-58c64dfb6f-8tpfw\" (UID: \"c07d999d-3f10-4208-b55b-998487be89b5\") " pod="calico-system/calico-kube-controllers-58c64dfb6f-8tpfw"
Sep 4 17:12:58.102907 kubelet[2534]: I0904 17:12:58.102873 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbkp4\" (UniqueName: \"kubernetes.io/projected/f667e6cc-a435-4be3-9a6f-98b7f2fbb1a8-kube-api-access-qbkp4\") pod \"coredns-7db6d8ff4d-vzc8x\" (UID: \"f667e6cc-a435-4be3-9a6f-98b7f2fbb1a8\") " pod="kube-system/coredns-7db6d8ff4d-vzc8x"
Sep 4 17:12:58.102956 kubelet[2534]: I0904 17:12:58.102944 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf7t4\" (UniqueName: \"kubernetes.io/projected/c07d999d-3f10-4208-b55b-998487be89b5-kube-api-access-mf7t4\") pod \"calico-kube-controllers-58c64dfb6f-8tpfw\" (UID: \"c07d999d-3f10-4208-b55b-998487be89b5\") " pod="calico-system/calico-kube-controllers-58c64dfb6f-8tpfw"
Sep 4 17:12:58.102981 kubelet[2534]: I0904 17:12:58.102970 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f667e6cc-a435-4be3-9a6f-98b7f2fbb1a8-config-volume\") pod \"coredns-7db6d8ff4d-vzc8x\" (UID: \"f667e6cc-a435-4be3-9a6f-98b7f2fbb1a8\") " pod="kube-system/coredns-7db6d8ff4d-vzc8x"
Sep 4 17:12:58.312638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82833e3544a2c0df407e20d41f815fee578269b9cd9f6f6fd9ef48753c73d7f7-rootfs.mount: Deactivated successfully.
Sep 4 17:12:58.313683 kubelet[2534]: E0904 17:12:58.313433 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:12:58.321559 containerd[1431]: time="2024-09-04T17:12:58.319529870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\""
Sep 4 17:12:58.321559 containerd[1431]: time="2024-09-04T17:12:58.320622431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mhfkm,Uid:ce4d0287-3bcd-46a6-ab21-8867f52fec21,Namespace:kube-system,Attempt:0,}"
Sep 4 17:12:58.321893 kubelet[2534]: E0904 17:12:58.319809 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:12:58.326807 containerd[1431]: time="2024-09-04T17:12:58.326768878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58c64dfb6f-8tpfw,Uid:c07d999d-3f10-4208-b55b-998487be89b5,Namespace:calico-system,Attempt:0,}"
Sep 4 17:12:58.332905 kubelet[2534]: E0904 17:12:58.332023 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:12:58.333741 containerd[1431]: time="2024-09-04T17:12:58.333685645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vzc8x,Uid:f667e6cc-a435-4be3-9a6f-98b7f2fbb1a8,Namespace:kube-system,Attempt:0,}"
Sep 4 17:12:58.752747 containerd[1431]: time="2024-09-04T17:12:58.752539537Z" level=error msg="Failed to destroy network for sandbox \"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:58.753591 containerd[1431]: time="2024-09-04T17:12:58.752979978Z" level=error msg="Failed to destroy network for sandbox \"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:58.753591 containerd[1431]: time="2024-09-04T17:12:58.753385458Z" level=error msg="Failed to destroy network for sandbox \"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:58.754405 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790-shm.mount: Deactivated successfully.
Sep 4 17:12:58.754509 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5-shm.mount: Deactivated successfully.
Sep 4 17:12:58.755421 containerd[1431]: time="2024-09-04T17:12:58.755057940Z" level=error msg="encountered an error cleaning up failed sandbox \"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:58.755421 containerd[1431]: time="2024-09-04T17:12:58.755153180Z" level=error msg="encountered an error cleaning up failed sandbox \"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:58.755421 containerd[1431]: time="2024-09-04T17:12:58.755409180Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vzc8x,Uid:f667e6cc-a435-4be3-9a6f-98b7f2fbb1a8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:58.755421 containerd[1431]: time="2024-09-04T17:12:58.755409260Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58c64dfb6f-8tpfw,Uid:c07d999d-3f10-4208-b55b-998487be89b5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:58.755421 containerd[1431]: time="2024-09-04T17:12:58.755567180Z" level=error msg="encountered an error cleaning up failed sandbox \"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:58.755421 containerd[1431]: time="2024-09-04T17:12:58.755610100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mhfkm,Uid:ce4d0287-3bcd-46a6-ab21-8867f52fec21,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:58.757238 kubelet[2534]: E0904 17:12:58.756391 2534 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:58.757238 kubelet[2534]: E0904 17:12:58.756476 2534 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58c64dfb6f-8tpfw"
Sep 4 17:12:58.757238 kubelet[2534]: E0904 17:12:58.756496 2534 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-58c64dfb6f-8tpfw"
Sep 4 17:12:58.757238 kubelet[2534]: E0904 17:12:58.756388 2534 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:58.757426 kubelet[2534]: E0904 17:12:58.756551 2534 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vzc8x"
Sep 4 17:12:58.757426 kubelet[2534]: E0904 17:12:58.756569 2534 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-vzc8x"
Sep 4 17:12:58.757426 kubelet[2534]: E0904 17:12:58.756576 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-58c64dfb6f-8tpfw_calico-system(c07d999d-3f10-4208-b55b-998487be89b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-58c64dfb6f-8tpfw_calico-system(c07d999d-3f10-4208-b55b-998487be89b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58c64dfb6f-8tpfw" podUID="c07d999d-3f10-4208-b55b-998487be89b5"
Sep 4 17:12:58.757522 kubelet[2534]: E0904 17:12:58.756601 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-vzc8x_kube-system(f667e6cc-a435-4be3-9a6f-98b7f2fbb1a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-vzc8x_kube-system(f667e6cc-a435-4be3-9a6f-98b7f2fbb1a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vzc8x" podUID="f667e6cc-a435-4be3-9a6f-98b7f2fbb1a8"
Sep 4 17:12:58.758676 kubelet[2534]: E0904 17:12:58.758625 2534 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:58.758733 kubelet[2534]: E0904 17:12:58.758690 2534 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-mhfkm"
Sep 4 17:12:58.758733 kubelet[2534]: E0904 17:12:58.758710 2534 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-mhfkm"
Sep 4 17:12:58.758858 kubelet[2534]: E0904 17:12:58.758811 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-mhfkm_kube-system(ce4d0287-3bcd-46a6-ab21-8867f52fec21)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-mhfkm_kube-system(ce4d0287-3bcd-46a6-ab21-8867f52fec21)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-mhfkm" podUID="ce4d0287-3bcd-46a6-ab21-8867f52fec21"
Sep 4 17:12:59.224422 systemd[1]: Created slice kubepods-besteffort-poddbe20c7a_4d25_4a1c_ab36_3d1bda88df08.slice - libcontainer container kubepods-besteffort-poddbe20c7a_4d25_4a1c_ab36_3d1bda88df08.slice.
Sep 4 17:12:59.229932 containerd[1431]: time="2024-09-04T17:12:59.229570756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sjnkk,Uid:dbe20c7a-4d25-4a1c-ab36-3d1bda88df08,Namespace:calico-system,Attempt:0,}"
Sep 4 17:12:59.307181 containerd[1431]: time="2024-09-04T17:12:59.307102915Z" level=error msg="Failed to destroy network for sandbox \"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:59.308172 containerd[1431]: time="2024-09-04T17:12:59.308064916Z" level=error msg="encountered an error cleaning up failed sandbox \"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:59.308172 containerd[1431]: time="2024-09-04T17:12:59.308123116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sjnkk,Uid:dbe20c7a-4d25-4a1c-ab36-3d1bda88df08,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:59.308995 kubelet[2534]: E0904 17:12:59.308647 2534 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:59.308995 kubelet[2534]: E0904 17:12:59.308702 2534 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sjnkk"
Sep 4 17:12:59.308995 kubelet[2534]: E0904 17:12:59.308722 2534 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sjnkk"
Sep 4 17:12:59.309139 kubelet[2534]: E0904 17:12:59.308775 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sjnkk_calico-system(dbe20c7a-4d25-4a1c-ab36-3d1bda88df08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sjnkk_calico-system(dbe20c7a-4d25-4a1c-ab36-3d1bda88df08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sjnkk" podUID="dbe20c7a-4d25-4a1c-ab36-3d1bda88df08"
Sep 4 17:12:59.310258 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3-shm.mount: Deactivated successfully.
Sep 4 17:12:59.321505 kubelet[2534]: I0904 17:12:59.321473 2534 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5"
Sep 4 17:12:59.322255 containerd[1431]: time="2024-09-04T17:12:59.322183370Z" level=info msg="StopPodSandbox for \"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\""
Sep 4 17:12:59.322522 containerd[1431]: time="2024-09-04T17:12:59.322501570Z" level=info msg="Ensure that sandbox 66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5 in task-service has been cleanup successfully"
Sep 4 17:12:59.328813 kubelet[2534]: I0904 17:12:59.328741 2534 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e"
Sep 4 17:12:59.329366 containerd[1431]: time="2024-09-04T17:12:59.329295897Z" level=info msg="StopPodSandbox for \"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\""
Sep 4 17:12:59.329727 containerd[1431]: time="2024-09-04T17:12:59.329478338Z" level=info msg="Ensure that sandbox 91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e in task-service has been cleanup successfully"
Sep 4 17:12:59.330168 kubelet[2534]: I0904 17:12:59.329652 2534 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3"
Sep 4 17:12:59.330255 containerd[1431]: time="2024-09-04T17:12:59.330082218Z" level=info msg="StopPodSandbox for \"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\""
Sep 4 17:12:59.330255 containerd[1431]: time="2024-09-04T17:12:59.330236858Z" level=info msg="Ensure that sandbox f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3 in task-service has been cleanup successfully"
Sep 4 17:12:59.338882 kubelet[2534]: I0904 17:12:59.338779 2534 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790"
Sep 4 17:12:59.340005 containerd[1431]: time="2024-09-04T17:12:59.339521148Z" level=info msg="StopPodSandbox for \"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\""
Sep 4 17:12:59.340005 containerd[1431]: time="2024-09-04T17:12:59.339705948Z" level=info msg="Ensure that sandbox 2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790 in task-service has been cleanup successfully"
Sep 4 17:12:59.384595 containerd[1431]: time="2024-09-04T17:12:59.384543793Z" level=error msg="StopPodSandbox for \"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\" failed" error="failed to destroy network for sandbox \"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:59.389820 kubelet[2534]: E0904 17:12:59.389413 2534 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790"
Sep 4 17:12:59.389820 kubelet[2534]: E0904 17:12:59.389498 2534 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790"}
Sep 4 17:12:59.389820 kubelet[2534]: E0904 17:12:59.389589 2534 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce4d0287-3bcd-46a6-ab21-8867f52fec21\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:12:59.389820 kubelet[2534]: E0904 17:12:59.389615 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce4d0287-3bcd-46a6-ab21-8867f52fec21\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-mhfkm" podUID="ce4d0287-3bcd-46a6-ab21-8867f52fec21"
Sep 4 17:12:59.393176 containerd[1431]: time="2024-09-04T17:12:59.393075842Z" level=error msg="StopPodSandbox for \"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\" failed" error="failed to destroy network for sandbox \"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:59.393480 kubelet[2534]: E0904 17:12:59.393305 2534 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3"
Sep 4 17:12:59.393480 kubelet[2534]: E0904 17:12:59.393365 2534 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3"}
Sep 4 17:12:59.393480 kubelet[2534]: E0904 17:12:59.393399 2534 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f667e6cc-a435-4be3-9a6f-98b7f2fbb1a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:12:59.393480 kubelet[2534]: E0904 17:12:59.393420 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f667e6cc-a435-4be3-9a6f-98b7f2fbb1a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-vzc8x" podUID="f667e6cc-a435-4be3-9a6f-98b7f2fbb1a8"
Sep 4 17:12:59.412011 containerd[1431]: time="2024-09-04T17:12:59.411950381Z" level=error msg="StopPodSandbox for \"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\" failed" error="failed to destroy network for sandbox \"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:59.412444 kubelet[2534]: E0904 17:12:59.412180 2534 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e"
Sep 4 17:12:59.412444 kubelet[2534]: E0904 17:12:59.412237 2534 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e"}
Sep 4 17:12:59.412444 kubelet[2534]: E0904 17:12:59.412273 2534 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dbe20c7a-4d25-4a1c-ab36-3d1bda88df08\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:12:59.412444 kubelet[2534]: E0904 17:12:59.412294 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dbe20c7a-4d25-4a1c-ab36-3d1bda88df08\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sjnkk" podUID="dbe20c7a-4d25-4a1c-ab36-3d1bda88df08"
Sep 4 17:12:59.421571 containerd[1431]: time="2024-09-04T17:12:59.421516551Z" level=error msg="StopPodSandbox for \"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\" failed" error="failed to destroy network for sandbox \"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:59.422029 kubelet[2534]: E0904 17:12:59.421884 2534 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5"
Sep 4 17:12:59.422029 kubelet[2534]: E0904 17:12:59.421932 2534 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5"}
Sep 4 17:12:59.422029 kubelet[2534]: E0904 17:12:59.421965 2534 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c07d999d-3f10-4208-b55b-998487be89b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:12:59.422029 kubelet[2534]: E0904 17:12:59.421987 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c07d999d-3f10-4208-b55b-998487be89b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-58c64dfb6f-8tpfw" podUID="c07d999d-3f10-4208-b55b-998487be89b5"
Sep 4 17:13:01.372620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1689093128.mount: Deactivated successfully.
Sep 4 17:13:01.592357 containerd[1431]: time="2024-09-04T17:13:01.592027091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:13:01.594895 containerd[1431]: time="2024-09-04T17:13:01.594840893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300"
Sep 4 17:13:01.604447 containerd[1431]: time="2024-09-04T17:13:01.604386342Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:13:01.605352 containerd[1431]: time="2024-09-04T17:13:01.605301063Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 3.285725313s"
Sep 4 17:13:01.605400 containerd[1431]: time="2024-09-04T17:13:01.605352983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\""
Sep 4 17:13:01.606007 containerd[1431]: time="2024-09-04T17:13:01.605969023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:13:01.620037 containerd[1431]: time="2024-09-04T17:13:01.619987116Z" level=info msg="CreateContainer within sandbox \"339980c07d3b3eb0c0aaa8eb815cefd818f521ebf2c8b13d65b77aa00deab4e6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Sep 4 17:13:01.649185 containerd[1431]: time="2024-09-04T17:13:01.649047622Z" level=info msg="CreateContainer within sandbox \"339980c07d3b3eb0c0aaa8eb815cefd818f521ebf2c8b13d65b77aa00deab4e6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ad9ea4be092f336e6a55bc7fe9663149cb1c6011ab93979fb94b3ed487bf378c\""
Sep 4 17:13:01.649712 containerd[1431]: time="2024-09-04T17:13:01.649642222Z" level=info msg="StartContainer for \"ad9ea4be092f336e6a55bc7fe9663149cb1c6011ab93979fb94b3ed487bf378c\""
Sep 4 17:13:01.705442 systemd[1]: Started cri-containerd-ad9ea4be092f336e6a55bc7fe9663149cb1c6011ab93979fb94b3ed487bf378c.scope - libcontainer container ad9ea4be092f336e6a55bc7fe9663149cb1c6011ab93979fb94b3ed487bf378c.
Sep 4 17:13:01.791511 containerd[1431]: time="2024-09-04T17:13:01.790988188Z" level=info msg="StartContainer for \"ad9ea4be092f336e6a55bc7fe9663149cb1c6011ab93979fb94b3ed487bf378c\" returns successfully"
Sep 4 17:13:01.978833 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Sep 4 17:13:01.979011 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Sep 4 17:13:02.347278 kubelet[2534]: E0904 17:13:02.347165 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:13:02.359771 kubelet[2534]: I0904 17:13:02.359713 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-s8m2c" podStartSLOduration=2.168783978 podStartE2EDuration="12.359696873s" podCreationTimestamp="2024-09-04 17:12:50 +0000 UTC" firstStartedPulling="2024-09-04 17:12:51.415919929 +0000 UTC m=+21.296582954" lastFinishedPulling="2024-09-04 17:13:01.606832824 +0000 UTC m=+31.487495849" observedRunningTime="2024-09-04 17:13:02.359617193 +0000 UTC m=+32.240280218" watchObservedRunningTime="2024-09-04 17:13:02.359696873 +0000 UTC m=+32.240359858"
Sep 4 17:13:03.350798 kubelet[2534]: E0904 17:13:03.350684 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:13:04.537486 systemd[1]: Started sshd@7-10.0.0.7:22-10.0.0.1:34614.service - OpenSSH per-connection server daemon (10.0.0.1:34614).
Sep 4 17:13:04.582673 sshd[3759]: Accepted publickey for core from 10.0.0.1 port 34614 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk
Sep 4 17:13:04.584162 sshd[3759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:13:04.588819 systemd-logind[1417]: New session 8 of user core.
Sep 4 17:13:04.598417 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 17:13:04.874895 sshd[3759]: pam_unix(sshd:session): session closed for user core
Sep 4 17:13:04.879255 systemd[1]: sshd@7-10.0.0.7:22-10.0.0.1:34614.service: Deactivated successfully.
Sep 4 17:13:04.881006 systemd[1]: session-8.scope: Deactivated successfully.
Sep 4 17:13:04.882824 systemd-logind[1417]: Session 8 logged out. Waiting for processes to exit.
Sep 4 17:13:04.883688 systemd-logind[1417]: Removed session 8.
Sep 4 17:13:09.902587 systemd[1]: Started sshd@8-10.0.0.7:22-10.0.0.1:34624.service - OpenSSH per-connection server daemon (10.0.0.1:34624).
Sep 4 17:13:09.939211 sshd[3911]: Accepted publickey for core from 10.0.0.1 port 34624 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk
Sep 4 17:13:09.940879 sshd[3911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:13:09.946655 systemd-logind[1417]: New session 9 of user core.
Sep 4 17:13:09.955460 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 4 17:13:10.086401 sshd[3911]: pam_unix(sshd:session): session closed for user core
Sep 4 17:13:10.090840 systemd[1]: sshd@8-10.0.0.7:22-10.0.0.1:34624.service: Deactivated successfully.
Sep 4 17:13:10.094791 systemd[1]: session-9.scope: Deactivated successfully.
Sep 4 17:13:10.095666 systemd-logind[1417]: Session 9 logged out. Waiting for processes to exit.
Sep 4 17:13:10.097711 systemd-logind[1417]: Removed session 9.
Sep 4 17:13:10.214649 containerd[1431]: time="2024-09-04T17:13:10.214446459Z" level=info msg="StopPodSandbox for \"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\""
Sep 4 17:13:10.440721 containerd[1431]: 2024-09-04 17:13:10.298 [INFO][3941] k8s.go 608: Cleaning up netns ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3"
Sep 4 17:13:10.440721 containerd[1431]: 2024-09-04 17:13:10.299 [INFO][3941] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" iface="eth0" netns="/var/run/netns/cni-0e7597ab-3fb3-509b-677c-39123c6052fe"
Sep 4 17:13:10.440721 containerd[1431]: 2024-09-04 17:13:10.299 [INFO][3941] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" iface="eth0" netns="/var/run/netns/cni-0e7597ab-3fb3-509b-677c-39123c6052fe"
Sep 4 17:13:10.440721 containerd[1431]: 2024-09-04 17:13:10.300 [INFO][3941] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" iface="eth0" netns="/var/run/netns/cni-0e7597ab-3fb3-509b-677c-39123c6052fe"
Sep 4 17:13:10.440721 containerd[1431]: 2024-09-04 17:13:10.300 [INFO][3941] k8s.go 615: Releasing IP address(es) ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3"
Sep 4 17:13:10.440721 containerd[1431]: 2024-09-04 17:13:10.300 [INFO][3941] utils.go 188: Calico CNI releasing IP address ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3"
Sep 4 17:13:10.440721 containerd[1431]: 2024-09-04 17:13:10.425 [INFO][3950] ipam_plugin.go 417: Releasing address using handleID ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" HandleID="k8s-pod-network.f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" Workload="localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0"
Sep 4 17:13:10.440721 containerd[1431]: 2024-09-04 17:13:10.425 [INFO][3950] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:13:10.440721 containerd[1431]: 2024-09-04 17:13:10.425 [INFO][3950] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:13:10.440721 containerd[1431]: 2024-09-04 17:13:10.435 [WARNING][3950] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" HandleID="k8s-pod-network.f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" Workload="localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0"
Sep 4 17:13:10.440721 containerd[1431]: 2024-09-04 17:13:10.435 [INFO][3950] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" HandleID="k8s-pod-network.f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" Workload="localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0"
Sep 4 17:13:10.440721 containerd[1431]: 2024-09-04 17:13:10.437 [INFO][3950] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:13:10.440721 containerd[1431]: 2024-09-04 17:13:10.439 [INFO][3941] k8s.go 621: Teardown processing complete. ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3"
Sep 4 17:13:10.443721 containerd[1431]: time="2024-09-04T17:13:10.443301733Z" level=info msg="TearDown network for sandbox \"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\" successfully"
Sep 4 17:13:10.443721 containerd[1431]: time="2024-09-04T17:13:10.443332773Z" level=info msg="StopPodSandbox for \"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\" returns successfully"
Sep 4 17:13:10.443088 systemd[1]: run-netns-cni\x2d0e7597ab\x2d3fb3\x2d509b\x2d677c\x2d39123c6052fe.mount: Deactivated successfully.
Sep 4 17:13:10.444115 kubelet[2534]: E0904 17:13:10.443993 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:13:10.445177 containerd[1431]: time="2024-09-04T17:13:10.444349254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vzc8x,Uid:f667e6cc-a435-4be3-9a6f-98b7f2fbb1a8,Namespace:kube-system,Attempt:1,}"
Sep 4 17:13:10.585163 systemd-networkd[1366]: cali81ad30d86ad: Link UP
Sep 4 17:13:10.585782 systemd-networkd[1366]: cali81ad30d86ad: Gained carrier
Sep 4 17:13:10.607523 containerd[1431]: 2024-09-04 17:13:10.477 [INFO][3960] utils.go 100: File /var/lib/calico/mtu does not exist
Sep 4 17:13:10.607523 containerd[1431]: 2024-09-04 17:13:10.491 [INFO][3960] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0 coredns-7db6d8ff4d- kube-system f667e6cc-a435-4be3-9a6f-98b7f2fbb1a8 761 0 2024-09-04 17:12:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-vzc8x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali81ad30d86ad [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vzc8x" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vzc8x-"
Sep 4 17:13:10.607523 containerd[1431]: 2024-09-04 17:13:10.491 [INFO][3960] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vzc8x" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0"
Sep 4 17:13:10.607523 containerd[1431]: 2024-09-04 17:13:10.530 [INFO][3974] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878" HandleID="k8s-pod-network.8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878" Workload="localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0"
Sep 4 17:13:10.607523 containerd[1431]: 2024-09-04 17:13:10.544 [INFO][3974] ipam_plugin.go 270: Auto assigning IP ContainerID="8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878" HandleID="k8s-pod-network.8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878" Workload="localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000613ae0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-vzc8x", "timestamp":"2024-09-04 17:13:10.530392817 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 4 17:13:10.607523 containerd[1431]: 2024-09-04 17:13:10.544 [INFO][3974] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:13:10.607523 containerd[1431]: 2024-09-04 17:13:10.544 [INFO][3974] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:13:10.607523 containerd[1431]: 2024-09-04 17:13:10.544 [INFO][3974] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 4 17:13:10.607523 containerd[1431]: 2024-09-04 17:13:10.547 [INFO][3974] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878" host="localhost"
Sep 4 17:13:10.607523 containerd[1431]: 2024-09-04 17:13:10.553 [INFO][3974] ipam.go 372: Looking up existing affinities for host host="localhost"
Sep 4 17:13:10.607523 containerd[1431]: 2024-09-04 17:13:10.558 [INFO][3974] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 4 17:13:10.607523 containerd[1431]: 2024-09-04 17:13:10.560 [INFO][3974] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 4 17:13:10.607523 containerd[1431]: 2024-09-04 17:13:10.562 [INFO][3974] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 4 17:13:10.607523 containerd[1431]: 2024-09-04 17:13:10.563 [INFO][3974] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878" host="localhost"
Sep 4 17:13:10.607523 containerd[1431]: 2024-09-04 17:13:10.566 [INFO][3974] ipam.go 1685: Creating new handle: k8s-pod-network.8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878
Sep 4 17:13:10.607523 containerd[1431]: 2024-09-04 17:13:10.570 [INFO][3974] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878" host="localhost"
Sep 4 17:13:10.607523 containerd[1431]: 2024-09-04 17:13:10.577 [INFO][3974] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878" host="localhost"
Sep 4 17:13:10.607523 containerd[1431]: 2024-09-04 17:13:10.577 [INFO][3974] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878" host="localhost"
Sep 4 17:13:10.607523 containerd[1431]: 2024-09-04 17:13:10.577 [INFO][3974] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:13:10.607523 containerd[1431]: 2024-09-04 17:13:10.577 [INFO][3974] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878" HandleID="k8s-pod-network.8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878" Workload="localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0"
Sep 4 17:13:10.608088 containerd[1431]: 2024-09-04 17:13:10.579 [INFO][3960] k8s.go 386: Populated endpoint ContainerID="8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vzc8x" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f667e6cc-a435-4be3-9a6f-98b7f2fbb1a8", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-vzc8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81ad30d86ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:13:10.608088 containerd[1431]: 2024-09-04 17:13:10.579 [INFO][3960] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vzc8x" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0"
Sep 4 17:13:10.608088 containerd[1431]: 2024-09-04 17:13:10.579 [INFO][3960] dataplane_linux.go 68: Setting the host side veth name to cali81ad30d86ad ContainerID="8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vzc8x" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0"
Sep 4 17:13:10.608088 containerd[1431]: 2024-09-04 17:13:10.586 [INFO][3960] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vzc8x" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0"
Sep 4 17:13:10.608088 containerd[1431]: 2024-09-04 17:13:10.586 [INFO][3960] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vzc8x" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f667e6cc-a435-4be3-9a6f-98b7f2fbb1a8", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878", Pod:"coredns-7db6d8ff4d-vzc8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81ad30d86ad", MAC:"ee:85:2c:5e:13:bf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:13:10.608088 containerd[1431]: 2024-09-04 17:13:10.605 [INFO][3960] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878" Namespace="kube-system" Pod="coredns-7db6d8ff4d-vzc8x" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0"
Sep 4 17:13:10.628112 containerd[1431]: time="2024-09-04T17:13:10.626868225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:13:10.628112 containerd[1431]: time="2024-09-04T17:13:10.626929345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:13:10.628112 containerd[1431]: time="2024-09-04T17:13:10.626963265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:13:10.628112 containerd[1431]: time="2024-09-04T17:13:10.627069265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:13:10.653478 systemd[1]: Started cri-containerd-8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878.scope - libcontainer container 8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878.
Sep 4 17:13:10.664641 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 4 17:13:10.689393 containerd[1431]: time="2024-09-04T17:13:10.689338656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-vzc8x,Uid:f667e6cc-a435-4be3-9a6f-98b7f2fbb1a8,Namespace:kube-system,Attempt:1,} returns sandbox id \"8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878\""
Sep 4 17:13:10.690444 kubelet[2534]: E0904 17:13:10.690250 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:13:10.702066 containerd[1431]: time="2024-09-04T17:13:10.701993662Z" level=info msg="CreateContainer within sandbox \"8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 17:13:10.720802 containerd[1431]: time="2024-09-04T17:13:10.720740191Z" level=info msg="CreateContainer within sandbox \"8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f5781c0aca0a9a51aacfbca322c7a9d20cec65931645d9806ec408fa3be2f497\""
Sep 4 17:13:10.722388 containerd[1431]: time="2024-09-04T17:13:10.721362552Z" level=info msg="StartContainer for \"f5781c0aca0a9a51aacfbca322c7a9d20cec65931645d9806ec408fa3be2f497\""
Sep 4 17:13:10.748449 systemd[1]: Started cri-containerd-f5781c0aca0a9a51aacfbca322c7a9d20cec65931645d9806ec408fa3be2f497.scope - libcontainer container f5781c0aca0a9a51aacfbca322c7a9d20cec65931645d9806ec408fa3be2f497.
Sep 4 17:13:10.786661 containerd[1431]: time="2024-09-04T17:13:10.786527624Z" level=info msg="StartContainer for \"f5781c0aca0a9a51aacfbca322c7a9d20cec65931645d9806ec408fa3be2f497\" returns successfully"
Sep 4 17:13:11.382244 kubelet[2534]: E0904 17:13:11.381718 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:13:11.445574 systemd[1]: run-containerd-runc-k8s.io-8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878-runc.artZa6.mount: Deactivated successfully.
Sep 4 17:13:11.597291 kubelet[2534]: I0904 17:13:11.596159 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-vzc8x" podStartSLOduration=26.596139848 podStartE2EDuration="26.596139848s" podCreationTimestamp="2024-09-04 17:12:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:13:11.524484855 +0000 UTC m=+41.405147920" watchObservedRunningTime="2024-09-04 17:13:11.596139848 +0000 UTC m=+41.476802873"
Sep 4 17:13:11.833935 systemd-networkd[1366]: cali81ad30d86ad: Gained IPv6LL
Sep 4 17:13:12.214002 containerd[1431]: time="2024-09-04T17:13:12.213780010Z" level=info msg="StopPodSandbox for \"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\""
Sep 4 17:13:12.320775 containerd[1431]: 2024-09-04 17:13:12.276 [INFO][4144] k8s.go 608: Cleaning up netns ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e"
Sep 4 17:13:12.320775 containerd[1431]: 2024-09-04 17:13:12.276 [INFO][4144] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" iface="eth0" netns="/var/run/netns/cni-13ee0f78-9623-76bb-af92-a20b4718b118"
Sep 4 17:13:12.320775 containerd[1431]: 2024-09-04 17:13:12.277 [INFO][4144] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" iface="eth0" netns="/var/run/netns/cni-13ee0f78-9623-76bb-af92-a20b4718b118"
Sep 4 17:13:12.320775 containerd[1431]: 2024-09-04 17:13:12.277 [INFO][4144] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" iface="eth0" netns="/var/run/netns/cni-13ee0f78-9623-76bb-af92-a20b4718b118"
Sep 4 17:13:12.320775 containerd[1431]: 2024-09-04 17:13:12.277 [INFO][4144] k8s.go 615: Releasing IP address(es) ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e"
Sep 4 17:13:12.320775 containerd[1431]: 2024-09-04 17:13:12.278 [INFO][4144] utils.go 188: Calico CNI releasing IP address ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e"
Sep 4 17:13:12.320775 containerd[1431]: 2024-09-04 17:13:12.306 [INFO][4151] ipam_plugin.go 417: Releasing address using handleID ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" HandleID="k8s-pod-network.91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" Workload="localhost-k8s-csi--node--driver--sjnkk-eth0"
Sep 4 17:13:12.320775 containerd[1431]: 2024-09-04 17:13:12.306 [INFO][4151] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:13:12.320775 containerd[1431]: 2024-09-04 17:13:12.306 [INFO][4151] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:13:12.320775 containerd[1431]: 2024-09-04 17:13:12.315 [WARNING][4151] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" HandleID="k8s-pod-network.91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" Workload="localhost-k8s-csi--node--driver--sjnkk-eth0"
Sep 4 17:13:12.320775 containerd[1431]: 2024-09-04 17:13:12.315 [INFO][4151] ipam_plugin.go 445: Releasing address using workloadID ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" HandleID="k8s-pod-network.91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" Workload="localhost-k8s-csi--node--driver--sjnkk-eth0"
Sep 4 17:13:12.320775 containerd[1431]: 2024-09-04 17:13:12.317 [INFO][4151] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:13:12.320775 containerd[1431]: 2024-09-04 17:13:12.319 [INFO][4144] k8s.go 621: Teardown processing complete. ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e"
Sep 4 17:13:12.323738 containerd[1431]: time="2024-09-04T17:13:12.322325537Z" level=info msg="TearDown network for sandbox \"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\" successfully"
Sep 4 17:13:12.323738 containerd[1431]: time="2024-09-04T17:13:12.322361897Z" level=info msg="StopPodSandbox for \"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\" returns successfully"
Sep 4 17:13:12.323069 systemd[1]: run-netns-cni\x2d13ee0f78\x2d9623\x2d76bb\x2daf92\x2da20b4718b118.mount: Deactivated successfully.
Sep 4 17:13:12.325083 containerd[1431]: time="2024-09-04T17:13:12.324645618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sjnkk,Uid:dbe20c7a-4d25-4a1c-ab36-3d1bda88df08,Namespace:calico-system,Attempt:1,}"
Sep 4 17:13:12.385612 kubelet[2534]: E0904 17:13:12.385582 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:13:12.461036 systemd-networkd[1366]: calie4916c28f78: Link UP
Sep 4 17:13:12.461743 systemd-networkd[1366]: calie4916c28f78: Gained carrier
Sep 4 17:13:12.473684 containerd[1431]: 2024-09-04 17:13:12.361 [INFO][4160] utils.go 100: File /var/lib/calico/mtu does not exist
Sep 4 17:13:12.473684 containerd[1431]: 2024-09-04 17:13:12.376 [INFO][4160] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--sjnkk-eth0 csi-node-driver- calico-system dbe20c7a-4d25-4a1c-ab36-3d1bda88df08 793 0 2024-09-04 17:12:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65cb9bb8f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-sjnkk eth0 default [] [] [kns.calico-system ksa.calico-system.default] calie4916c28f78 [] []}} ContainerID="01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493" Namespace="calico-system" Pod="csi-node-driver-sjnkk" WorkloadEndpoint="localhost-k8s-csi--node--driver--sjnkk-"
Sep 4 17:13:12.473684 containerd[1431]: 2024-09-04 17:13:12.376 [INFO][4160] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493" Namespace="calico-system" Pod="csi-node-driver-sjnkk" WorkloadEndpoint="localhost-k8s-csi--node--driver--sjnkk-eth0"
Sep 4 17:13:12.473684 containerd[1431]: 2024-09-04 17:13:12.404 [INFO][4174] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493" HandleID="k8s-pod-network.01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493" Workload="localhost-k8s-csi--node--driver--sjnkk-eth0"
Sep 4 17:13:12.473684 containerd[1431]: 2024-09-04 17:13:12.427 [INFO][4174] ipam_plugin.go 270: Auto assigning IP ContainerID="01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493" HandleID="k8s-pod-network.01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493" Workload="localhost-k8s-csi--node--driver--sjnkk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002789b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-sjnkk", "timestamp":"2024-09-04 17:13:12.404242493 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 4 17:13:12.473684 containerd[1431]: 2024-09-04 17:13:12.427 [INFO][4174] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:13:12.473684 containerd[1431]: 2024-09-04 17:13:12.427 [INFO][4174] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:13:12.473684 containerd[1431]: 2024-09-04 17:13:12.427 [INFO][4174] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 4 17:13:12.473684 containerd[1431]: 2024-09-04 17:13:12.429 [INFO][4174] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493" host="localhost"
Sep 4 17:13:12.473684 containerd[1431]: 2024-09-04 17:13:12.437 [INFO][4174] ipam.go 372: Looking up existing affinities for host host="localhost"
Sep 4 17:13:12.473684 containerd[1431]: 2024-09-04 17:13:12.443 [INFO][4174] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 4 17:13:12.473684 containerd[1431]: 2024-09-04 17:13:12.444 [INFO][4174] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 4 17:13:12.473684 containerd[1431]: 2024-09-04 17:13:12.447 [INFO][4174] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 4 17:13:12.473684 containerd[1431]: 2024-09-04 17:13:12.447 [INFO][4174] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493" host="localhost"
Sep 4 17:13:12.473684 containerd[1431]: 2024-09-04 17:13:12.448 [INFO][4174] ipam.go 1685: Creating new handle: k8s-pod-network.01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493
Sep 4 17:13:12.473684 containerd[1431]: 2024-09-04 17:13:12.452 [INFO][4174] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493" host="localhost"
Sep 4 17:13:12.473684 containerd[1431]: 2024-09-04 17:13:12.456 [INFO][4174] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493" host="localhost"
Sep 4 17:13:12.473684 containerd[1431]: 2024-09-04 17:13:12.456 [INFO][4174] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493" host="localhost"
Sep 4 17:13:12.473684 containerd[1431]: 2024-09-04 17:13:12.456 [INFO][4174] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:13:12.473684 containerd[1431]: 2024-09-04 17:13:12.456 [INFO][4174] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493" HandleID="k8s-pod-network.01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493" Workload="localhost-k8s-csi--node--driver--sjnkk-eth0"
Sep 4 17:13:12.474253 containerd[1431]: 2024-09-04 17:13:12.459 [INFO][4160] k8s.go 386: Populated endpoint ContainerID="01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493" Namespace="calico-system" Pod="csi-node-driver-sjnkk" WorkloadEndpoint="localhost-k8s-csi--node--driver--sjnkk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sjnkk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dbe20c7a-4d25-4a1c-ab36-3d1bda88df08", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-sjnkk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie4916c28f78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:13:12.474253 containerd[1431]: 2024-09-04 17:13:12.459 [INFO][4160] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493" Namespace="calico-system" Pod="csi-node-driver-sjnkk" WorkloadEndpoint="localhost-k8s-csi--node--driver--sjnkk-eth0"
Sep 4 17:13:12.474253 containerd[1431]: 2024-09-04 17:13:12.459 [INFO][4160] dataplane_linux.go 68: Setting the host side veth name to calie4916c28f78 ContainerID="01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493" Namespace="calico-system" Pod="csi-node-driver-sjnkk" WorkloadEndpoint="localhost-k8s-csi--node--driver--sjnkk-eth0"
Sep 4 17:13:12.474253 containerd[1431]: 2024-09-04 17:13:12.461 [INFO][4160] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493" Namespace="calico-system" Pod="csi-node-driver-sjnkk" WorkloadEndpoint="localhost-k8s-csi--node--driver--sjnkk-eth0"
Sep 4 17:13:12.474253 containerd[1431]: 2024-09-04 17:13:12.462 [INFO][4160] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493" Namespace="calico-system" Pod="csi-node-driver-sjnkk" WorkloadEndpoint="localhost-k8s-csi--node--driver--sjnkk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sjnkk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dbe20c7a-4d25-4a1c-ab36-3d1bda88df08", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493", Pod:"csi-node-driver-sjnkk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie4916c28f78", MAC:"62:8a:56:a3:e1:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:13:12.474253 containerd[1431]: 2024-09-04 17:13:12.470 [INFO][4160] k8s.go 500: Wrote updated endpoint to datastore ContainerID="01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493" Namespace="calico-system" Pod="csi-node-driver-sjnkk" WorkloadEndpoint="localhost-k8s-csi--node--driver--sjnkk-eth0"
Sep 4 17:13:12.493107 containerd[1431]: time="2024-09-04T17:13:12.493008532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:13:12.493107 containerd[1431]: time="2024-09-04T17:13:12.493072292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:13:12.493107 containerd[1431]: time="2024-09-04T17:13:12.493083652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:13:12.493392 containerd[1431]: time="2024-09-04T17:13:12.493167572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:13:12.522452 systemd[1]: Started cri-containerd-01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493.scope - libcontainer container 01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493. Sep 4 17:13:12.534428 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:13:12.549098 containerd[1431]: time="2024-09-04T17:13:12.549048077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sjnkk,Uid:dbe20c7a-4d25-4a1c-ab36-3d1bda88df08,Namespace:calico-system,Attempt:1,} returns sandbox id \"01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493\"" Sep 4 17:13:12.551313 containerd[1431]: time="2024-09-04T17:13:12.551272398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:13:13.214008 containerd[1431]: time="2024-09-04T17:13:13.213888921Z" level=info msg="StopPodSandbox for \"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\"" Sep 4 17:13:13.302804 containerd[1431]: 2024-09-04 17:13:13.264 [INFO][4270] k8s.go 608: Cleaning up netns ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" Sep 4 17:13:13.302804 containerd[1431]: 2024-09-04 17:13:13.265 [INFO][4270] dataplane_linux.go 530: Deleting workload's 
device in netns. ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" iface="eth0" netns="/var/run/netns/cni-5f69912c-e611-2970-03cd-c8ee1158bd97" Sep 4 17:13:13.302804 containerd[1431]: 2024-09-04 17:13:13.265 [INFO][4270] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" iface="eth0" netns="/var/run/netns/cni-5f69912c-e611-2970-03cd-c8ee1158bd97" Sep 4 17:13:13.302804 containerd[1431]: 2024-09-04 17:13:13.265 [INFO][4270] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" iface="eth0" netns="/var/run/netns/cni-5f69912c-e611-2970-03cd-c8ee1158bd97" Sep 4 17:13:13.302804 containerd[1431]: 2024-09-04 17:13:13.265 [INFO][4270] k8s.go 615: Releasing IP address(es) ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" Sep 4 17:13:13.302804 containerd[1431]: 2024-09-04 17:13:13.265 [INFO][4270] utils.go 188: Calico CNI releasing IP address ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" Sep 4 17:13:13.302804 containerd[1431]: 2024-09-04 17:13:13.288 [INFO][4278] ipam_plugin.go 417: Releasing address using handleID ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" HandleID="k8s-pod-network.2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" Workload="localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0" Sep 4 17:13:13.302804 containerd[1431]: 2024-09-04 17:13:13.288 [INFO][4278] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:13.302804 containerd[1431]: 2024-09-04 17:13:13.288 [INFO][4278] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:13:13.302804 containerd[1431]: 2024-09-04 17:13:13.297 [WARNING][4278] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" HandleID="k8s-pod-network.2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" Workload="localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0" Sep 4 17:13:13.302804 containerd[1431]: 2024-09-04 17:13:13.297 [INFO][4278] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" HandleID="k8s-pod-network.2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" Workload="localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0" Sep 4 17:13:13.302804 containerd[1431]: 2024-09-04 17:13:13.299 [INFO][4278] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:13:13.302804 containerd[1431]: 2024-09-04 17:13:13.301 [INFO][4270] k8s.go 621: Teardown processing complete. ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" Sep 4 17:13:13.303556 containerd[1431]: time="2024-09-04T17:13:13.303492158Z" level=info msg="TearDown network for sandbox \"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\" successfully" Sep 4 17:13:13.303556 containerd[1431]: time="2024-09-04T17:13:13.303536918Z" level=info msg="StopPodSandbox for \"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\" returns successfully" Sep 4 17:13:13.305561 systemd[1]: run-netns-cni\x2d5f69912c\x2de611\x2d2970\x2d03cd\x2dc8ee1158bd97.mount: Deactivated successfully. 
Sep 4 17:13:13.305844 kubelet[2534]: E0904 17:13:13.305818 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:13:13.307147 containerd[1431]: time="2024-09-04T17:13:13.307111760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mhfkm,Uid:ce4d0287-3bcd-46a6-ab21-8867f52fec21,Namespace:kube-system,Attempt:1,}" Sep 4 17:13:13.389986 kubelet[2534]: E0904 17:13:13.389905 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:13:13.477507 systemd-networkd[1366]: cali95b1629fe8a: Link UP Sep 4 17:13:13.477755 systemd-networkd[1366]: cali95b1629fe8a: Gained carrier Sep 4 17:13:13.501934 containerd[1431]: 2024-09-04 17:13:13.355 [INFO][4288] utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 17:13:13.501934 containerd[1431]: 2024-09-04 17:13:13.374 [INFO][4288] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0 coredns-7db6d8ff4d- kube-system ce4d0287-3bcd-46a6-ab21-8867f52fec21 803 0 2024-09-04 17:12:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-mhfkm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali95b1629fe8a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mhfkm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mhfkm-" Sep 4 17:13:13.501934 containerd[1431]: 2024-09-04 17:13:13.374 [INFO][4288] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mhfkm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0" Sep 4 17:13:13.501934 containerd[1431]: 2024-09-04 17:13:13.416 [INFO][4302] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e" HandleID="k8s-pod-network.5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e" Workload="localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0" Sep 4 17:13:13.501934 containerd[1431]: 2024-09-04 17:13:13.430 [INFO][4302] ipam_plugin.go 270: Auto assigning IP ContainerID="5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e" HandleID="k8s-pod-network.5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e" Workload="localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000127d90), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-mhfkm", "timestamp":"2024-09-04 17:13:13.416020444 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:13:13.501934 containerd[1431]: 2024-09-04 17:13:13.430 [INFO][4302] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:13.501934 containerd[1431]: 2024-09-04 17:13:13.430 [INFO][4302] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:13:13.501934 containerd[1431]: 2024-09-04 17:13:13.430 [INFO][4302] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:13:13.501934 containerd[1431]: 2024-09-04 17:13:13.432 [INFO][4302] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e" host="localhost" Sep 4 17:13:13.501934 containerd[1431]: 2024-09-04 17:13:13.440 [INFO][4302] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:13:13.501934 containerd[1431]: 2024-09-04 17:13:13.446 [INFO][4302] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:13:13.501934 containerd[1431]: 2024-09-04 17:13:13.448 [INFO][4302] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:13:13.501934 containerd[1431]: 2024-09-04 17:13:13.450 [INFO][4302] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:13:13.501934 containerd[1431]: 2024-09-04 17:13:13.450 [INFO][4302] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e" host="localhost" Sep 4 17:13:13.501934 containerd[1431]: 2024-09-04 17:13:13.451 [INFO][4302] ipam.go 1685: Creating new handle: k8s-pod-network.5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e Sep 4 17:13:13.501934 containerd[1431]: 2024-09-04 17:13:13.455 [INFO][4302] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e" host="localhost" Sep 4 17:13:13.501934 containerd[1431]: 2024-09-04 17:13:13.471 [INFO][4302] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e" host="localhost" Sep 4 
17:13:13.501934 containerd[1431]: 2024-09-04 17:13:13.471 [INFO][4302] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e" host="localhost" Sep 4 17:13:13.501934 containerd[1431]: 2024-09-04 17:13:13.471 [INFO][4302] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:13:13.501934 containerd[1431]: 2024-09-04 17:13:13.471 [INFO][4302] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e" HandleID="k8s-pod-network.5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e" Workload="localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0" Sep 4 17:13:13.502663 containerd[1431]: 2024-09-04 17:13:13.473 [INFO][4288] k8s.go 386: Populated endpoint ContainerID="5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mhfkm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ce4d0287-3bcd-46a6-ab21-8867f52fec21", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-mhfkm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95b1629fe8a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:13.502663 containerd[1431]: 2024-09-04 17:13:13.473 [INFO][4288] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mhfkm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0" Sep 4 17:13:13.502663 containerd[1431]: 2024-09-04 17:13:13.473 [INFO][4288] dataplane_linux.go 68: Setting the host side veth name to cali95b1629fe8a ContainerID="5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mhfkm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0" Sep 4 17:13:13.502663 containerd[1431]: 2024-09-04 17:13:13.477 [INFO][4288] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mhfkm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0" Sep 4 17:13:13.502663 containerd[1431]: 2024-09-04 17:13:13.477 [INFO][4288] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mhfkm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ce4d0287-3bcd-46a6-ab21-8867f52fec21", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e", Pod:"coredns-7db6d8ff4d-mhfkm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95b1629fe8a", MAC:"96:75:1a:d5:5c:c0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:13.502663 containerd[1431]: 2024-09-04 17:13:13.499 [INFO][4288] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-mhfkm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0" Sep 4 17:13:13.521094 containerd[1431]: time="2024-09-04T17:13:13.520835167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:13:13.521094 containerd[1431]: time="2024-09-04T17:13:13.520910207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:13:13.521094 containerd[1431]: time="2024-09-04T17:13:13.520945487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:13:13.521094 containerd[1431]: time="2024-09-04T17:13:13.521042727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:13:13.543462 systemd[1]: Started cri-containerd-5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e.scope - libcontainer container 5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e. 
Sep 4 17:13:13.556668 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:13:13.580596 containerd[1431]: time="2024-09-04T17:13:13.580553912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-mhfkm,Uid:ce4d0287-3bcd-46a6-ab21-8867f52fec21,Namespace:kube-system,Attempt:1,} returns sandbox id \"5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e\"" Sep 4 17:13:13.582109 kubelet[2534]: E0904 17:13:13.582076 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:13:13.591817 containerd[1431]: time="2024-09-04T17:13:13.585457594Z" level=info msg="CreateContainer within sandbox \"5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:13:13.615738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2872246767.mount: Deactivated successfully. Sep 4 17:13:13.623141 containerd[1431]: time="2024-09-04T17:13:13.623000649Z" level=info msg="CreateContainer within sandbox \"5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4a8cf0ec3f2c4cc6b4e0d2256b2ac3cf5b7463b63e2f8dd8d629ebe7a9ebf6a2\"" Sep 4 17:13:13.624045 containerd[1431]: time="2024-09-04T17:13:13.624015049Z" level=info msg="StartContainer for \"4a8cf0ec3f2c4cc6b4e0d2256b2ac3cf5b7463b63e2f8dd8d629ebe7a9ebf6a2\"" Sep 4 17:13:13.653459 systemd[1]: Started cri-containerd-4a8cf0ec3f2c4cc6b4e0d2256b2ac3cf5b7463b63e2f8dd8d629ebe7a9ebf6a2.scope - libcontainer container 4a8cf0ec3f2c4cc6b4e0d2256b2ac3cf5b7463b63e2f8dd8d629ebe7a9ebf6a2. 
Sep 4 17:13:13.668819 containerd[1431]: time="2024-09-04T17:13:13.668761908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:13:13.670584 containerd[1431]: time="2024-09-04T17:13:13.670246348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Sep 4 17:13:13.672730 containerd[1431]: time="2024-09-04T17:13:13.672677989Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:13:13.686351 containerd[1431]: time="2024-09-04T17:13:13.686300875Z" level=info msg="StartContainer for \"4a8cf0ec3f2c4cc6b4e0d2256b2ac3cf5b7463b63e2f8dd8d629ebe7a9ebf6a2\" returns successfully" Sep 4 17:13:13.686711 containerd[1431]: time="2024-09-04T17:13:13.686671395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:13:13.687435 containerd[1431]: time="2024-09-04T17:13:13.687407235Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 1.136091837s" Sep 4 17:13:13.687729 containerd[1431]: time="2024-09-04T17:13:13.687612876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Sep 4 17:13:13.691568 containerd[1431]: time="2024-09-04T17:13:13.691509557Z" level=info msg="CreateContainer within sandbox 
\"01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:13:13.708368 containerd[1431]: time="2024-09-04T17:13:13.708312404Z" level=info msg="CreateContainer within sandbox \"01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"721899a849e7c677e7717d1ac980b1ad5805eb25e60c7d5ac9937a14d8257355\"" Sep 4 17:13:13.711959 containerd[1431]: time="2024-09-04T17:13:13.710350045Z" level=info msg="StartContainer for \"721899a849e7c677e7717d1ac980b1ad5805eb25e60c7d5ac9937a14d8257355\"" Sep 4 17:13:13.755433 systemd[1]: Started cri-containerd-721899a849e7c677e7717d1ac980b1ad5805eb25e60c7d5ac9937a14d8257355.scope - libcontainer container 721899a849e7c677e7717d1ac980b1ad5805eb25e60c7d5ac9937a14d8257355. Sep 4 17:13:13.788612 containerd[1431]: time="2024-09-04T17:13:13.788550157Z" level=info msg="StartContainer for \"721899a849e7c677e7717d1ac980b1ad5805eb25e60c7d5ac9937a14d8257355\" returns successfully" Sep 4 17:13:13.791523 containerd[1431]: time="2024-09-04T17:13:13.791480998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:13:13.883360 systemd-networkd[1366]: calie4916c28f78: Gained IPv6LL Sep 4 17:13:14.394136 kubelet[2534]: E0904 17:13:14.393821 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:13:14.403681 kubelet[2534]: I0904 17:13:14.403309 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-mhfkm" podStartSLOduration=29.403278519 podStartE2EDuration="29.403278519s" podCreationTimestamp="2024-09-04 17:12:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:13:14.402748398 
+0000 UTC m=+44.283411423" watchObservedRunningTime="2024-09-04 17:13:14.403278519 +0000 UTC m=+44.283941544" Sep 4 17:13:14.684389 containerd[1431]: time="2024-09-04T17:13:14.684240707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:13:14.687963 containerd[1431]: time="2024-09-04T17:13:14.687906228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Sep 4 17:13:14.689392 containerd[1431]: time="2024-09-04T17:13:14.689341828Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:13:14.692093 containerd[1431]: time="2024-09-04T17:13:14.692041230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:13:14.692761 containerd[1431]: time="2024-09-04T17:13:14.692710190Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 901.181752ms" Sep 4 17:13:14.692817 containerd[1431]: time="2024-09-04T17:13:14.692764950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Sep 4 17:13:14.700637 containerd[1431]: time="2024-09-04T17:13:14.700588433Z" level=info msg="CreateContainer within sandbox 
\"01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:13:14.720850 containerd[1431]: time="2024-09-04T17:13:14.720792121Z" level=info msg="CreateContainer within sandbox \"01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e6c16c1bc36bff0166e29122426f3425e63cca5e0cbc302be4134b1e3c96a03c\"" Sep 4 17:13:14.721718 containerd[1431]: time="2024-09-04T17:13:14.721692081Z" level=info msg="StartContainer for \"e6c16c1bc36bff0166e29122426f3425e63cca5e0cbc302be4134b1e3c96a03c\"" Sep 4 17:13:14.755452 systemd[1]: Started cri-containerd-e6c16c1bc36bff0166e29122426f3425e63cca5e0cbc302be4134b1e3c96a03c.scope - libcontainer container e6c16c1bc36bff0166e29122426f3425e63cca5e0cbc302be4134b1e3c96a03c. Sep 4 17:13:14.792249 containerd[1431]: time="2024-09-04T17:13:14.790778707Z" level=info msg="StartContainer for \"e6c16c1bc36bff0166e29122426f3425e63cca5e0cbc302be4134b1e3c96a03c\" returns successfully" Sep 4 17:13:15.098444 systemd[1]: Started sshd@9-10.0.0.7:22-10.0.0.1:36690.service - OpenSSH per-connection server daemon (10.0.0.1:36690). Sep 4 17:13:15.173554 sshd[4536]: Accepted publickey for core from 10.0.0.1 port 36690 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:13:15.175643 sshd[4536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:13:15.180747 systemd-logind[1417]: New session 10 of user core. Sep 4 17:13:15.190482 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 4 17:13:15.214220 containerd[1431]: time="2024-09-04T17:13:15.214162425Z" level=info msg="StopPodSandbox for \"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\"" Sep 4 17:13:15.333863 kubelet[2534]: I0904 17:13:15.332014 2534 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:13:15.336015 kubelet[2534]: I0904 17:13:15.335983 2534 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:13:15.351156 containerd[1431]: 2024-09-04 17:13:15.269 [INFO][4555] k8s.go 608: Cleaning up netns ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" Sep 4 17:13:15.351156 containerd[1431]: 2024-09-04 17:13:15.269 [INFO][4555] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" iface="eth0" netns="/var/run/netns/cni-81f8d0e1-523c-b861-5d16-c9de2f21d7e3" Sep 4 17:13:15.351156 containerd[1431]: 2024-09-04 17:13:15.269 [INFO][4555] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" iface="eth0" netns="/var/run/netns/cni-81f8d0e1-523c-b861-5d16-c9de2f21d7e3" Sep 4 17:13:15.351156 containerd[1431]: 2024-09-04 17:13:15.269 [INFO][4555] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" iface="eth0" netns="/var/run/netns/cni-81f8d0e1-523c-b861-5d16-c9de2f21d7e3" Sep 4 17:13:15.351156 containerd[1431]: 2024-09-04 17:13:15.270 [INFO][4555] k8s.go 615: Releasing IP address(es) ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" Sep 4 17:13:15.351156 containerd[1431]: 2024-09-04 17:13:15.270 [INFO][4555] utils.go 188: Calico CNI releasing IP address ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" Sep 4 17:13:15.351156 containerd[1431]: 2024-09-04 17:13:15.291 [INFO][4571] ipam_plugin.go 417: Releasing address using handleID ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" HandleID="k8s-pod-network.66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" Workload="localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0" Sep 4 17:13:15.351156 containerd[1431]: 2024-09-04 17:13:15.291 [INFO][4571] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:15.351156 containerd[1431]: 2024-09-04 17:13:15.292 [INFO][4571] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:13:15.351156 containerd[1431]: 2024-09-04 17:13:15.339 [WARNING][4571] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" HandleID="k8s-pod-network.66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" Workload="localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0" Sep 4 17:13:15.351156 containerd[1431]: 2024-09-04 17:13:15.339 [INFO][4571] ipam_plugin.go 445: Releasing address using workloadID ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" HandleID="k8s-pod-network.66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" Workload="localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0" Sep 4 17:13:15.351156 containerd[1431]: 2024-09-04 17:13:15.342 [INFO][4571] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:13:15.351156 containerd[1431]: 2024-09-04 17:13:15.348 [INFO][4555] k8s.go 621: Teardown processing complete. ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" Sep 4 17:13:15.351156 containerd[1431]: time="2024-09-04T17:13:15.350977674Z" level=info msg="TearDown network for sandbox \"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\" successfully" Sep 4 17:13:15.351156 containerd[1431]: time="2024-09-04T17:13:15.351007554Z" level=info msg="StopPodSandbox for \"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\" returns successfully" Sep 4 17:13:15.352169 containerd[1431]: time="2024-09-04T17:13:15.352130715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58c64dfb6f-8tpfw,Uid:c07d999d-3f10-4208-b55b-998487be89b5,Namespace:calico-system,Attempt:1,}" Sep 4 17:13:15.400183 kubelet[2534]: E0904 17:13:15.400127 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:13:15.434753 kubelet[2534]: I0904 17:13:15.434678 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/csi-node-driver-sjnkk" podStartSLOduration=22.291979551 podStartE2EDuration="24.434653984s" podCreationTimestamp="2024-09-04 17:12:51 +0000 UTC" firstStartedPulling="2024-09-04 17:13:12.550861437 +0000 UTC m=+42.431524422" lastFinishedPulling="2024-09-04 17:13:14.69353583 +0000 UTC m=+44.574198855" observedRunningTime="2024-09-04 17:13:15.432596904 +0000 UTC m=+45.313260049" watchObservedRunningTime="2024-09-04 17:13:15.434653984 +0000 UTC m=+45.315317049" Sep 4 17:13:15.480098 sshd[4536]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:15.481607 systemd-networkd[1366]: cali95b1629fe8a: Gained IPv6LL Sep 4 17:13:15.490836 systemd[1]: sshd@9-10.0.0.7:22-10.0.0.1:36690.service: Deactivated successfully. Sep 4 17:13:15.493335 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:13:15.496075 systemd-logind[1417]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:13:15.498619 systemd-logind[1417]: Removed session 10. Sep 4 17:13:15.507643 systemd[1]: Started sshd@10-10.0.0.7:22-10.0.0.1:36692.service - OpenSSH per-connection server daemon (10.0.0.1:36692). Sep 4 17:13:15.527691 systemd[1]: run-netns-cni\x2d81f8d0e1\x2d523c\x2db861\x2d5d16\x2dc9de2f21d7e3.mount: Deactivated successfully. Sep 4 17:13:15.546268 sshd[4598]: Accepted publickey for core from 10.0.0.1 port 36692 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:13:15.548068 sshd[4598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:13:15.553994 systemd-logind[1417]: New session 11 of user core. Sep 4 17:13:15.564435 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 4 17:13:15.645039 systemd-networkd[1366]: cali2395b9049e4: Link UP Sep 4 17:13:15.645340 systemd-networkd[1366]: cali2395b9049e4: Gained carrier Sep 4 17:13:15.664397 containerd[1431]: 2024-09-04 17:13:15.495 [INFO][4583] utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 17:13:15.664397 containerd[1431]: 2024-09-04 17:13:15.549 [INFO][4583] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0 calico-kube-controllers-58c64dfb6f- calico-system c07d999d-3f10-4208-b55b-998487be89b5 837 0 2024-09-04 17:12:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:58c64dfb6f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-58c64dfb6f-8tpfw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2395b9049e4 [] []}} ContainerID="b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f" Namespace="calico-system" Pod="calico-kube-controllers-58c64dfb6f-8tpfw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-" Sep 4 17:13:15.664397 containerd[1431]: 2024-09-04 17:13:15.549 [INFO][4583] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f" Namespace="calico-system" Pod="calico-kube-controllers-58c64dfb6f-8tpfw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0" Sep 4 17:13:15.664397 containerd[1431]: 2024-09-04 17:13:15.582 [INFO][4601] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f" HandleID="k8s-pod-network.b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f" 
Workload="localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0" Sep 4 17:13:15.664397 containerd[1431]: 2024-09-04 17:13:15.606 [INFO][4601] ipam_plugin.go 270: Auto assigning IP ContainerID="b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f" HandleID="k8s-pod-network.b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f" Workload="localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000364340), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-58c64dfb6f-8tpfw", "timestamp":"2024-09-04 17:13:15.582803758 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:13:15.664397 containerd[1431]: 2024-09-04 17:13:15.606 [INFO][4601] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:15.664397 containerd[1431]: 2024-09-04 17:13:15.606 [INFO][4601] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:13:15.664397 containerd[1431]: 2024-09-04 17:13:15.606 [INFO][4601] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:13:15.664397 containerd[1431]: 2024-09-04 17:13:15.608 [INFO][4601] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f" host="localhost" Sep 4 17:13:15.664397 containerd[1431]: 2024-09-04 17:13:15.614 [INFO][4601] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:13:15.664397 containerd[1431]: 2024-09-04 17:13:15.620 [INFO][4601] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:13:15.664397 containerd[1431]: 2024-09-04 17:13:15.623 [INFO][4601] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:13:15.664397 containerd[1431]: 2024-09-04 17:13:15.626 [INFO][4601] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:13:15.664397 containerd[1431]: 2024-09-04 17:13:15.626 [INFO][4601] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f" host="localhost" Sep 4 17:13:15.664397 containerd[1431]: 2024-09-04 17:13:15.627 [INFO][4601] ipam.go 1685: Creating new handle: k8s-pod-network.b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f Sep 4 17:13:15.664397 containerd[1431]: 2024-09-04 17:13:15.631 [INFO][4601] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f" host="localhost" Sep 4 17:13:15.664397 containerd[1431]: 2024-09-04 17:13:15.640 [INFO][4601] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f" host="localhost" Sep 4 
17:13:15.664397 containerd[1431]: 2024-09-04 17:13:15.640 [INFO][4601] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f" host="localhost" Sep 4 17:13:15.664397 containerd[1431]: 2024-09-04 17:13:15.640 [INFO][4601] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:13:15.664397 containerd[1431]: 2024-09-04 17:13:15.640 [INFO][4601] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f" HandleID="k8s-pod-network.b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f" Workload="localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0" Sep 4 17:13:15.665036 containerd[1431]: 2024-09-04 17:13:15.642 [INFO][4583] k8s.go 386: Populated endpoint ContainerID="b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f" Namespace="calico-system" Pod="calico-kube-controllers-58c64dfb6f-8tpfw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0", GenerateName:"calico-kube-controllers-58c64dfb6f-", Namespace:"calico-system", SelfLink:"", UID:"c07d999d-3f10-4208-b55b-998487be89b5", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58c64dfb6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-58c64dfb6f-8tpfw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2395b9049e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:15.665036 containerd[1431]: 2024-09-04 17:13:15.642 [INFO][4583] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f" Namespace="calico-system" Pod="calico-kube-controllers-58c64dfb6f-8tpfw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0" Sep 4 17:13:15.665036 containerd[1431]: 2024-09-04 17:13:15.642 [INFO][4583] dataplane_linux.go 68: Setting the host side veth name to cali2395b9049e4 ContainerID="b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f" Namespace="calico-system" Pod="calico-kube-controllers-58c64dfb6f-8tpfw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0" Sep 4 17:13:15.665036 containerd[1431]: 2024-09-04 17:13:15.645 [INFO][4583] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f" Namespace="calico-system" Pod="calico-kube-controllers-58c64dfb6f-8tpfw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0" Sep 4 17:13:15.665036 containerd[1431]: 2024-09-04 17:13:15.645 [INFO][4583] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f" Namespace="calico-system" 
Pod="calico-kube-controllers-58c64dfb6f-8tpfw" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0", GenerateName:"calico-kube-controllers-58c64dfb6f-", Namespace:"calico-system", SelfLink:"", UID:"c07d999d-3f10-4208-b55b-998487be89b5", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58c64dfb6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f", Pod:"calico-kube-controllers-58c64dfb6f-8tpfw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2395b9049e4", MAC:"d6:37:4e:d4:7a:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:15.665036 containerd[1431]: 2024-09-04 17:13:15.661 [INFO][4583] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f" Namespace="calico-system" Pod="calico-kube-controllers-58c64dfb6f-8tpfw" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0" Sep 4 17:13:15.688319 containerd[1431]: time="2024-09-04T17:13:15.688169916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:13:15.689284 containerd[1431]: time="2024-09-04T17:13:15.688327796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:13:15.689284 containerd[1431]: time="2024-09-04T17:13:15.688808196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:13:15.689284 containerd[1431]: time="2024-09-04T17:13:15.688923676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:13:15.717489 systemd[1]: Started cri-containerd-b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f.scope - libcontainer container b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f. Sep 4 17:13:15.732722 systemd-resolved[1299]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:13:15.755097 containerd[1431]: time="2024-09-04T17:13:15.755055220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-58c64dfb6f-8tpfw,Uid:c07d999d-3f10-4208-b55b-998487be89b5,Namespace:calico-system,Attempt:1,} returns sandbox id \"b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f\"" Sep 4 17:13:15.757057 containerd[1431]: time="2024-09-04T17:13:15.757014381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:13:15.836779 sshd[4598]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:15.853236 systemd[1]: sshd@10-10.0.0.7:22-10.0.0.1:36692.service: Deactivated successfully. 
Sep 4 17:13:15.859821 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:13:15.865507 systemd-logind[1417]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:13:15.874017 systemd[1]: Started sshd@11-10.0.0.7:22-10.0.0.1:36696.service - OpenSSH per-connection server daemon (10.0.0.1:36696). Sep 4 17:13:15.877993 systemd-logind[1417]: Removed session 11. Sep 4 17:13:15.942986 sshd[4670]: Accepted publickey for core from 10.0.0.1 port 36696 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:13:15.943606 sshd[4670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:13:15.951915 systemd-logind[1417]: New session 12 of user core. Sep 4 17:13:15.963427 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:13:16.198360 sshd[4670]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:16.201905 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:13:16.202601 systemd[1]: sshd@11-10.0.0.7:22-10.0.0.1:36696.service: Deactivated successfully. Sep 4 17:13:16.206900 systemd-logind[1417]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:13:16.208445 systemd-logind[1417]: Removed session 12. 
Sep 4 17:13:17.146604 containerd[1431]: time="2024-09-04T17:13:17.146528412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:13:17.151689 containerd[1431]: time="2024-09-04T17:13:17.149077893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Sep 4 17:13:17.151689 containerd[1431]: time="2024-09-04T17:13:17.150286013Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:13:17.153406 containerd[1431]: time="2024-09-04T17:13:17.153346294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:13:17.154126 containerd[1431]: time="2024-09-04T17:13:17.154082055Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 1.397017834s" Sep 4 17:13:17.154126 containerd[1431]: time="2024-09-04T17:13:17.154124095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Sep 4 17:13:17.170023 containerd[1431]: time="2024-09-04T17:13:17.169957380Z" level=info msg="CreateContainer within sandbox \"b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 
17:13:17.199793 containerd[1431]: time="2024-09-04T17:13:17.199741709Z" level=info msg="CreateContainer within sandbox \"b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2a2b88deef30a2e2320e5841db7a490946b8fa03947d707f70a632cc05d0ddff\"" Sep 4 17:13:17.200646 containerd[1431]: time="2024-09-04T17:13:17.200554789Z" level=info msg="StartContainer for \"2a2b88deef30a2e2320e5841db7a490946b8fa03947d707f70a632cc05d0ddff\"" Sep 4 17:13:17.259412 systemd[1]: Started cri-containerd-2a2b88deef30a2e2320e5841db7a490946b8fa03947d707f70a632cc05d0ddff.scope - libcontainer container 2a2b88deef30a2e2320e5841db7a490946b8fa03947d707f70a632cc05d0ddff. Sep 4 17:13:17.307794 containerd[1431]: time="2024-09-04T17:13:17.307738423Z" level=info msg="StartContainer for \"2a2b88deef30a2e2320e5841db7a490946b8fa03947d707f70a632cc05d0ddff\" returns successfully" Sep 4 17:13:17.401512 systemd-networkd[1366]: cali2395b9049e4: Gained IPv6LL Sep 4 17:13:17.476836 kubelet[2534]: I0904 17:13:17.476702 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-58c64dfb6f-8tpfw" podStartSLOduration=25.077596722 podStartE2EDuration="26.476670357s" podCreationTimestamp="2024-09-04 17:12:51 +0000 UTC" firstStartedPulling="2024-09-04 17:13:15.7567223 +0000 UTC m=+45.637385325" lastFinishedPulling="2024-09-04 17:13:17.155795975 +0000 UTC m=+47.036458960" observedRunningTime="2024-09-04 17:13:17.42401134 +0000 UTC m=+47.304674365" watchObservedRunningTime="2024-09-04 17:13:17.476670357 +0000 UTC m=+47.357333382" Sep 4 17:13:20.959513 kubelet[2534]: I0904 17:13:20.959362 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:13:20.960186 kubelet[2534]: E0904 17:13:20.960010 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:13:21.212246 kernel: bpftool[4896]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:13:21.219668 systemd[1]: Started sshd@12-10.0.0.7:22-10.0.0.1:36708.service - OpenSSH per-connection server daemon (10.0.0.1:36708). Sep 4 17:13:21.292499 sshd[4902]: Accepted publickey for core from 10.0.0.1 port 36708 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:13:21.299152 sshd[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:13:21.307999 systemd-logind[1417]: New session 13 of user core. Sep 4 17:13:21.313957 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:13:21.416070 kubelet[2534]: E0904 17:13:21.416031 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:13:21.643634 systemd-networkd[1366]: vxlan.calico: Link UP Sep 4 17:13:21.643640 systemd-networkd[1366]: vxlan.calico: Gained carrier Sep 4 17:13:21.770366 sshd[4902]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:21.779517 systemd[1]: sshd@12-10.0.0.7:22-10.0.0.1:36708.service: Deactivated successfully. Sep 4 17:13:21.782019 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:13:21.784179 systemd-logind[1417]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:13:21.799723 systemd[1]: Started sshd@13-10.0.0.7:22-10.0.0.1:36720.service - OpenSSH per-connection server daemon (10.0.0.1:36720). Sep 4 17:13:21.802825 systemd-logind[1417]: Removed session 13. Sep 4 17:13:21.837337 sshd[4997]: Accepted publickey for core from 10.0.0.1 port 36720 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:13:21.839689 sshd[4997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:13:21.849552 systemd-logind[1417]: New session 14 of user core. 
Sep 4 17:13:21.853374 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:13:22.145712 sshd[4997]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:22.153369 systemd[1]: sshd@13-10.0.0.7:22-10.0.0.1:36720.service: Deactivated successfully. Sep 4 17:13:22.156825 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:13:22.159800 systemd-logind[1417]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:13:22.166534 systemd[1]: Started sshd@14-10.0.0.7:22-10.0.0.1:36736.service - OpenSSH per-connection server daemon (10.0.0.1:36736). Sep 4 17:13:22.168000 systemd-logind[1417]: Removed session 14. Sep 4 17:13:22.212692 sshd[5042]: Accepted publickey for core from 10.0.0.1 port 36736 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:13:22.215108 sshd[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:13:22.225724 systemd-logind[1417]: New session 15 of user core. Sep 4 17:13:22.233432 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:13:23.353365 systemd-networkd[1366]: vxlan.calico: Gained IPv6LL Sep 4 17:13:23.840491 sshd[5042]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:23.847965 systemd[1]: sshd@14-10.0.0.7:22-10.0.0.1:36736.service: Deactivated successfully. Sep 4 17:13:23.852313 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:13:23.858145 systemd-logind[1417]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:13:23.867641 systemd[1]: Started sshd@15-10.0.0.7:22-10.0.0.1:49610.service - OpenSSH per-connection server daemon (10.0.0.1:49610). Sep 4 17:13:23.870795 systemd-logind[1417]: Removed session 15. 
Sep 4 17:13:23.906425 sshd[5089]: Accepted publickey for core from 10.0.0.1 port 49610 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:13:23.907949 sshd[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:13:23.913458 systemd-logind[1417]: New session 16 of user core. Sep 4 17:13:23.920893 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:13:24.315790 sshd[5089]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:24.325935 systemd[1]: sshd@15-10.0.0.7:22-10.0.0.1:49610.service: Deactivated successfully. Sep 4 17:13:24.329627 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:13:24.332344 systemd-logind[1417]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:13:24.343659 systemd[1]: Started sshd@16-10.0.0.7:22-10.0.0.1:49620.service - OpenSSH per-connection server daemon (10.0.0.1:49620). Sep 4 17:13:24.345335 systemd-logind[1417]: Removed session 16. Sep 4 17:13:24.385477 sshd[5102]: Accepted publickey for core from 10.0.0.1 port 49620 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:13:24.386051 sshd[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:13:24.392669 systemd-logind[1417]: New session 17 of user core. Sep 4 17:13:24.408443 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:13:24.582302 sshd[5102]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:24.586344 systemd[1]: sshd@16-10.0.0.7:22-10.0.0.1:49620.service: Deactivated successfully. Sep 4 17:13:24.588895 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:13:24.589614 systemd-logind[1417]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:13:24.590630 systemd-logind[1417]: Removed session 17. Sep 4 17:13:29.593814 systemd[1]: Started sshd@17-10.0.0.7:22-10.0.0.1:49626.service - OpenSSH per-connection server daemon (10.0.0.1:49626). 
Sep 4 17:13:29.647925 sshd[5125]: Accepted publickey for core from 10.0.0.1 port 49626 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk Sep 4 17:13:29.649933 sshd[5125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:13:29.655245 systemd-logind[1417]: New session 18 of user core. Sep 4 17:13:29.662461 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:13:29.891275 sshd[5125]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:29.899416 systemd[1]: sshd@17-10.0.0.7:22-10.0.0.1:49626.service: Deactivated successfully. Sep 4 17:13:29.903667 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:13:29.904682 systemd-logind[1417]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:13:29.905835 systemd-logind[1417]: Removed session 18. Sep 4 17:13:30.209094 containerd[1431]: time="2024-09-04T17:13:30.208977871Z" level=info msg="StopPodSandbox for \"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\"" Sep 4 17:13:30.299829 containerd[1431]: 2024-09-04 17:13:30.258 [WARNING][5155] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0", GenerateName:"calico-kube-controllers-58c64dfb6f-", Namespace:"calico-system", SelfLink:"", UID:"c07d999d-3f10-4208-b55b-998487be89b5", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58c64dfb6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f", Pod:"calico-kube-controllers-58c64dfb6f-8tpfw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2395b9049e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:30.299829 containerd[1431]: 2024-09-04 17:13:30.258 [INFO][5155] k8s.go 608: Cleaning up netns ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" Sep 4 17:13:30.299829 containerd[1431]: 2024-09-04 17:13:30.258 [INFO][5155] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" iface="eth0" netns="" Sep 4 17:13:30.299829 containerd[1431]: 2024-09-04 17:13:30.258 [INFO][5155] k8s.go 615: Releasing IP address(es) ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" Sep 4 17:13:30.299829 containerd[1431]: 2024-09-04 17:13:30.258 [INFO][5155] utils.go 188: Calico CNI releasing IP address ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" Sep 4 17:13:30.299829 containerd[1431]: 2024-09-04 17:13:30.284 [INFO][5164] ipam_plugin.go 417: Releasing address using handleID ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" HandleID="k8s-pod-network.66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" Workload="localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0" Sep 4 17:13:30.299829 containerd[1431]: 2024-09-04 17:13:30.284 [INFO][5164] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:30.299829 containerd[1431]: 2024-09-04 17:13:30.284 [INFO][5164] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:13:30.299829 containerd[1431]: 2024-09-04 17:13:30.293 [WARNING][5164] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" HandleID="k8s-pod-network.66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" Workload="localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0" Sep 4 17:13:30.299829 containerd[1431]: 2024-09-04 17:13:30.293 [INFO][5164] ipam_plugin.go 445: Releasing address using workloadID ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" HandleID="k8s-pod-network.66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" Workload="localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0" Sep 4 17:13:30.299829 containerd[1431]: 2024-09-04 17:13:30.295 [INFO][5164] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:13:30.299829 containerd[1431]: 2024-09-04 17:13:30.297 [INFO][5155] k8s.go 621: Teardown processing complete. ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" Sep 4 17:13:30.300820 containerd[1431]: time="2024-09-04T17:13:30.299872084Z" level=info msg="TearDown network for sandbox \"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\" successfully" Sep 4 17:13:30.300820 containerd[1431]: time="2024-09-04T17:13:30.299897124Z" level=info msg="StopPodSandbox for \"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\" returns successfully" Sep 4 17:13:30.301981 containerd[1431]: time="2024-09-04T17:13:30.300951684Z" level=info msg="RemovePodSandbox for \"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\"" Sep 4 17:13:30.307296 containerd[1431]: time="2024-09-04T17:13:30.307165565Z" level=info msg="Forcibly stopping sandbox \"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\"" Sep 4 17:13:30.427289 containerd[1431]: 2024-09-04 17:13:30.359 [WARNING][5187] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0", GenerateName:"calico-kube-controllers-58c64dfb6f-", Namespace:"calico-system", SelfLink:"", UID:"c07d999d-3f10-4208-b55b-998487be89b5", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"58c64dfb6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b66eecdece9c63b881bd39d8bfd16a1dadce9897077689b727336b2dfdcee73f", Pod:"calico-kube-controllers-58c64dfb6f-8tpfw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2395b9049e4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:30.427289 containerd[1431]: 2024-09-04 17:13:30.359 [INFO][5187] k8s.go 608: Cleaning up netns ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" Sep 4 17:13:30.427289 containerd[1431]: 2024-09-04 17:13:30.359 [INFO][5187] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" iface="eth0" netns="" Sep 4 17:13:30.427289 containerd[1431]: 2024-09-04 17:13:30.359 [INFO][5187] k8s.go 615: Releasing IP address(es) ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" Sep 4 17:13:30.427289 containerd[1431]: 2024-09-04 17:13:30.359 [INFO][5187] utils.go 188: Calico CNI releasing IP address ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" Sep 4 17:13:30.427289 containerd[1431]: 2024-09-04 17:13:30.387 [INFO][5194] ipam_plugin.go 417: Releasing address using handleID ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" HandleID="k8s-pod-network.66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" Workload="localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0" Sep 4 17:13:30.427289 containerd[1431]: 2024-09-04 17:13:30.388 [INFO][5194] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:30.427289 containerd[1431]: 2024-09-04 17:13:30.388 [INFO][5194] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:13:30.427289 containerd[1431]: 2024-09-04 17:13:30.406 [WARNING][5194] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" HandleID="k8s-pod-network.66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" Workload="localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0" Sep 4 17:13:30.427289 containerd[1431]: 2024-09-04 17:13:30.406 [INFO][5194] ipam_plugin.go 445: Releasing address using workloadID ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" HandleID="k8s-pod-network.66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" Workload="localhost-k8s-calico--kube--controllers--58c64dfb6f--8tpfw-eth0" Sep 4 17:13:30.427289 containerd[1431]: 2024-09-04 17:13:30.423 [INFO][5194] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:13:30.427289 containerd[1431]: 2024-09-04 17:13:30.425 [INFO][5187] k8s.go 621: Teardown processing complete. ContainerID="66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5" Sep 4 17:13:30.427764 containerd[1431]: time="2024-09-04T17:13:30.427339141Z" level=info msg="TearDown network for sandbox \"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\" successfully" Sep 4 17:13:30.457134 containerd[1431]: time="2024-09-04T17:13:30.457065425Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:13:30.457305 containerd[1431]: time="2024-09-04T17:13:30.457166025Z" level=info msg="RemovePodSandbox \"66b4e1c9c57b360142cd16b205b86e459d86e1673af4730dfd0134f028a00ed5\" returns successfully" Sep 4 17:13:30.458296 containerd[1431]: time="2024-09-04T17:13:30.458261745Z" level=info msg="StopPodSandbox for \"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\"" Sep 4 17:13:30.543584 containerd[1431]: 2024-09-04 17:13:30.500 [WARNING][5220] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ce4d0287-3bcd-46a6-ab21-8867f52fec21", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e", Pod:"coredns-7db6d8ff4d-mhfkm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95b1629fe8a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:30.543584 containerd[1431]: 2024-09-04 17:13:30.501 [INFO][5220] k8s.go 608: Cleaning up netns ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" Sep 4 17:13:30.543584 containerd[1431]: 2024-09-04 17:13:30.501 [INFO][5220] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" iface="eth0" netns="" Sep 4 17:13:30.543584 containerd[1431]: 2024-09-04 17:13:30.501 [INFO][5220] k8s.go 615: Releasing IP address(es) ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" Sep 4 17:13:30.543584 containerd[1431]: 2024-09-04 17:13:30.501 [INFO][5220] utils.go 188: Calico CNI releasing IP address ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" Sep 4 17:13:30.543584 containerd[1431]: 2024-09-04 17:13:30.529 [INFO][5227] ipam_plugin.go 417: Releasing address using handleID ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" HandleID="k8s-pod-network.2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" Workload="localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0" Sep 4 17:13:30.543584 containerd[1431]: 2024-09-04 17:13:30.529 [INFO][5227] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:30.543584 containerd[1431]: 2024-09-04 17:13:30.529 [INFO][5227] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:13:30.543584 containerd[1431]: 2024-09-04 17:13:30.537 [WARNING][5227] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" HandleID="k8s-pod-network.2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" Workload="localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0" Sep 4 17:13:30.543584 containerd[1431]: 2024-09-04 17:13:30.538 [INFO][5227] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" HandleID="k8s-pod-network.2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" Workload="localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0" Sep 4 17:13:30.543584 containerd[1431]: 2024-09-04 17:13:30.539 [INFO][5227] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:13:30.543584 containerd[1431]: 2024-09-04 17:13:30.541 [INFO][5220] k8s.go 621: Teardown processing complete. ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" Sep 4 17:13:30.543584 containerd[1431]: time="2024-09-04T17:13:30.543488237Z" level=info msg="TearDown network for sandbox \"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\" successfully" Sep 4 17:13:30.543584 containerd[1431]: time="2024-09-04T17:13:30.543514317Z" level=info msg="StopPodSandbox for \"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\" returns successfully" Sep 4 17:13:30.545069 containerd[1431]: time="2024-09-04T17:13:30.544058397Z" level=info msg="RemovePodSandbox for \"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\"" Sep 4 17:13:30.545069 containerd[1431]: time="2024-09-04T17:13:30.544096037Z" level=info msg="Forcibly stopping sandbox \"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\"" Sep 4 17:13:30.634118 containerd[1431]: 2024-09-04 17:13:30.587 [WARNING][5249] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ce4d0287-3bcd-46a6-ab21-8867f52fec21", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5b7e5efcb5da7932e60f56cedac9af1c67818c0863a965627e1b79165bd0ca9e", Pod:"coredns-7db6d8ff4d-mhfkm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95b1629fe8a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:30.634118 containerd[1431]: 2024-09-04 17:13:30.587 [INFO][5249] k8s.go 
608: Cleaning up netns ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" Sep 4 17:13:30.634118 containerd[1431]: 2024-09-04 17:13:30.587 [INFO][5249] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" iface="eth0" netns="" Sep 4 17:13:30.634118 containerd[1431]: 2024-09-04 17:13:30.587 [INFO][5249] k8s.go 615: Releasing IP address(es) ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" Sep 4 17:13:30.634118 containerd[1431]: 2024-09-04 17:13:30.587 [INFO][5249] utils.go 188: Calico CNI releasing IP address ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" Sep 4 17:13:30.634118 containerd[1431]: 2024-09-04 17:13:30.619 [INFO][5256] ipam_plugin.go 417: Releasing address using handleID ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" HandleID="k8s-pod-network.2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" Workload="localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0" Sep 4 17:13:30.634118 containerd[1431]: 2024-09-04 17:13:30.619 [INFO][5256] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:30.634118 containerd[1431]: 2024-09-04 17:13:30.619 [INFO][5256] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:13:30.634118 containerd[1431]: 2024-09-04 17:13:30.628 [WARNING][5256] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" HandleID="k8s-pod-network.2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" Workload="localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0" Sep 4 17:13:30.634118 containerd[1431]: 2024-09-04 17:13:30.628 [INFO][5256] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" HandleID="k8s-pod-network.2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" Workload="localhost-k8s-coredns--7db6d8ff4d--mhfkm-eth0" Sep 4 17:13:30.634118 containerd[1431]: 2024-09-04 17:13:30.630 [INFO][5256] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:13:30.634118 containerd[1431]: 2024-09-04 17:13:30.632 [INFO][5249] k8s.go 621: Teardown processing complete. ContainerID="2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790" Sep 4 17:13:30.634675 containerd[1431]: time="2024-09-04T17:13:30.634161810Z" level=info msg="TearDown network for sandbox \"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\" successfully" Sep 4 17:13:30.691663 containerd[1431]: time="2024-09-04T17:13:30.691588857Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:13:30.691797 containerd[1431]: time="2024-09-04T17:13:30.691674097Z" level=info msg="RemovePodSandbox \"2c6e79822743fa00603b49c91f5130096d47095734c3211afa559a8510741790\" returns successfully" Sep 4 17:13:30.692259 containerd[1431]: time="2024-09-04T17:13:30.692174257Z" level=info msg="StopPodSandbox for \"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\"" Sep 4 17:13:30.772249 containerd[1431]: 2024-09-04 17:13:30.733 [WARNING][5279] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f667e6cc-a435-4be3-9a6f-98b7f2fbb1a8", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878", Pod:"coredns-7db6d8ff4d-vzc8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81ad30d86ad", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:30.772249 containerd[1431]: 2024-09-04 17:13:30.734 [INFO][5279] k8s.go 608: Cleaning up netns ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" Sep 4 17:13:30.772249 containerd[1431]: 2024-09-04 17:13:30.734 [INFO][5279] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" iface="eth0" netns="" Sep 4 17:13:30.772249 containerd[1431]: 2024-09-04 17:13:30.734 [INFO][5279] k8s.go 615: Releasing IP address(es) ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" Sep 4 17:13:30.772249 containerd[1431]: 2024-09-04 17:13:30.734 [INFO][5279] utils.go 188: Calico CNI releasing IP address ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" Sep 4 17:13:30.772249 containerd[1431]: 2024-09-04 17:13:30.757 [INFO][5286] ipam_plugin.go 417: Releasing address using handleID ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" HandleID="k8s-pod-network.f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" Workload="localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0" Sep 4 17:13:30.772249 containerd[1431]: 2024-09-04 17:13:30.757 [INFO][5286] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:30.772249 containerd[1431]: 2024-09-04 17:13:30.757 [INFO][5286] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:13:30.772249 containerd[1431]: 2024-09-04 17:13:30.766 [WARNING][5286] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" HandleID="k8s-pod-network.f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" Workload="localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0" Sep 4 17:13:30.772249 containerd[1431]: 2024-09-04 17:13:30.766 [INFO][5286] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" HandleID="k8s-pod-network.f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" Workload="localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0" Sep 4 17:13:30.772249 containerd[1431]: 2024-09-04 17:13:30.768 [INFO][5286] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:13:30.772249 containerd[1431]: 2024-09-04 17:13:30.770 [INFO][5279] k8s.go 621: Teardown processing complete. ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" Sep 4 17:13:30.772249 containerd[1431]: time="2024-09-04T17:13:30.772244508Z" level=info msg="TearDown network for sandbox \"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\" successfully" Sep 4 17:13:30.772703 containerd[1431]: time="2024-09-04T17:13:30.772272668Z" level=info msg="StopPodSandbox for \"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\" returns successfully" Sep 4 17:13:30.772872 containerd[1431]: time="2024-09-04T17:13:30.772782148Z" level=info msg="RemovePodSandbox for \"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\"" Sep 4 17:13:30.772872 containerd[1431]: time="2024-09-04T17:13:30.772818268Z" level=info msg="Forcibly stopping sandbox \"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\"" Sep 4 17:13:30.861475 containerd[1431]: 2024-09-04 17:13:30.814 [WARNING][5308] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f667e6cc-a435-4be3-9a6f-98b7f2fbb1a8", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8b88e610b6c2a28f7ed9d0b31607a0ec18ee8f5eca7e1307a187d4045c767878", Pod:"coredns-7db6d8ff4d-vzc8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81ad30d86ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:30.861475 containerd[1431]: 2024-09-04 17:13:30.814 [INFO][5308] k8s.go 
608: Cleaning up netns ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" Sep 4 17:13:30.861475 containerd[1431]: 2024-09-04 17:13:30.814 [INFO][5308] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" iface="eth0" netns="" Sep 4 17:13:30.861475 containerd[1431]: 2024-09-04 17:13:30.814 [INFO][5308] k8s.go 615: Releasing IP address(es) ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" Sep 4 17:13:30.861475 containerd[1431]: 2024-09-04 17:13:30.814 [INFO][5308] utils.go 188: Calico CNI releasing IP address ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" Sep 4 17:13:30.861475 containerd[1431]: 2024-09-04 17:13:30.841 [INFO][5314] ipam_plugin.go 417: Releasing address using handleID ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" HandleID="k8s-pod-network.f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" Workload="localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0" Sep 4 17:13:30.861475 containerd[1431]: 2024-09-04 17:13:30.842 [INFO][5314] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:30.861475 containerd[1431]: 2024-09-04 17:13:30.842 [INFO][5314] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:13:30.861475 containerd[1431]: 2024-09-04 17:13:30.853 [WARNING][5314] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" HandleID="k8s-pod-network.f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" Workload="localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0" Sep 4 17:13:30.861475 containerd[1431]: 2024-09-04 17:13:30.853 [INFO][5314] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" HandleID="k8s-pod-network.f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" Workload="localhost-k8s-coredns--7db6d8ff4d--vzc8x-eth0" Sep 4 17:13:30.861475 containerd[1431]: 2024-09-04 17:13:30.856 [INFO][5314] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:13:30.861475 containerd[1431]: 2024-09-04 17:13:30.858 [INFO][5308] k8s.go 621: Teardown processing complete. ContainerID="f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3" Sep 4 17:13:30.861907 containerd[1431]: time="2024-09-04T17:13:30.861507561Z" level=info msg="TearDown network for sandbox \"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\" successfully" Sep 4 17:13:30.867183 containerd[1431]: time="2024-09-04T17:13:30.867120961Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:13:30.867316 containerd[1431]: time="2024-09-04T17:13:30.867236841Z" level=info msg="RemovePodSandbox \"f63841df3eddfe3042ba8aea6da4189ff19032dd231cd2c52ac36e6d145f45d3\" returns successfully"
Sep 4 17:13:30.867807 containerd[1431]: time="2024-09-04T17:13:30.867777881Z" level=info msg="StopPodSandbox for \"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\""
Sep 4 17:13:30.948491 containerd[1431]: 2024-09-04 17:13:30.908 [WARNING][5335] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sjnkk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dbe20c7a-4d25-4a1c-ab36-3d1bda88df08", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493", Pod:"csi-node-driver-sjnkk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie4916c28f78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:13:30.948491 containerd[1431]: 2024-09-04 17:13:30.908 [INFO][5335] k8s.go 608: Cleaning up netns ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e"
Sep 4 17:13:30.948491 containerd[1431]: 2024-09-04 17:13:30.908 [INFO][5335] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" iface="eth0" netns=""
Sep 4 17:13:30.948491 containerd[1431]: 2024-09-04 17:13:30.908 [INFO][5335] k8s.go 615: Releasing IP address(es) ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e"
Sep 4 17:13:30.948491 containerd[1431]: 2024-09-04 17:13:30.908 [INFO][5335] utils.go 188: Calico CNI releasing IP address ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e"
Sep 4 17:13:30.948491 containerd[1431]: 2024-09-04 17:13:30.932 [INFO][5343] ipam_plugin.go 417: Releasing address using handleID ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" HandleID="k8s-pod-network.91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" Workload="localhost-k8s-csi--node--driver--sjnkk-eth0"
Sep 4 17:13:30.948491 containerd[1431]: 2024-09-04 17:13:30.932 [INFO][5343] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:13:30.948491 containerd[1431]: 2024-09-04 17:13:30.932 [INFO][5343] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:13:30.948491 containerd[1431]: 2024-09-04 17:13:30.941 [WARNING][5343] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" HandleID="k8s-pod-network.91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" Workload="localhost-k8s-csi--node--driver--sjnkk-eth0"
Sep 4 17:13:30.948491 containerd[1431]: 2024-09-04 17:13:30.941 [INFO][5343] ipam_plugin.go 445: Releasing address using workloadID ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" HandleID="k8s-pod-network.91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" Workload="localhost-k8s-csi--node--driver--sjnkk-eth0"
Sep 4 17:13:30.948491 containerd[1431]: 2024-09-04 17:13:30.944 [INFO][5343] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:13:30.948491 containerd[1431]: 2024-09-04 17:13:30.946 [INFO][5335] k8s.go 621: Teardown processing complete. ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e"
Sep 4 17:13:30.949916 containerd[1431]: time="2024-09-04T17:13:30.948536333Z" level=info msg="TearDown network for sandbox \"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\" successfully"
Sep 4 17:13:30.949916 containerd[1431]: time="2024-09-04T17:13:30.948563653Z" level=info msg="StopPodSandbox for \"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\" returns successfully"
Sep 4 17:13:30.949916 containerd[1431]: time="2024-09-04T17:13:30.949080613Z" level=info msg="RemovePodSandbox for \"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\""
Sep 4 17:13:30.949916 containerd[1431]: time="2024-09-04T17:13:30.949114293Z" level=info msg="Forcibly stopping sandbox \"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\""
Sep 4 17:13:31.038367 containerd[1431]: 2024-09-04 17:13:30.990 [WARNING][5365] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sjnkk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dbe20c7a-4d25-4a1c-ab36-3d1bda88df08", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01e17db12b6a3b815f892082544e151ac9f3668aebafc7df2951c4616cb29493", Pod:"csi-node-driver-sjnkk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie4916c28f78", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:13:31.038367 containerd[1431]: 2024-09-04 17:13:30.991 [INFO][5365] k8s.go 608: Cleaning up netns ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e"
Sep 4 17:13:31.038367 containerd[1431]: 2024-09-04 17:13:30.991 [INFO][5365] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" iface="eth0" netns=""
Sep 4 17:13:31.038367 containerd[1431]: 2024-09-04 17:13:30.991 [INFO][5365] k8s.go 615: Releasing IP address(es) ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e"
Sep 4 17:13:31.038367 containerd[1431]: 2024-09-04 17:13:30.991 [INFO][5365] utils.go 188: Calico CNI releasing IP address ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e"
Sep 4 17:13:31.038367 containerd[1431]: 2024-09-04 17:13:31.019 [INFO][5372] ipam_plugin.go 417: Releasing address using handleID ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" HandleID="k8s-pod-network.91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" Workload="localhost-k8s-csi--node--driver--sjnkk-eth0"
Sep 4 17:13:31.038367 containerd[1431]: 2024-09-04 17:13:31.020 [INFO][5372] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:13:31.038367 containerd[1431]: 2024-09-04 17:13:31.020 [INFO][5372] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:13:31.038367 containerd[1431]: 2024-09-04 17:13:31.029 [WARNING][5372] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" HandleID="k8s-pod-network.91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" Workload="localhost-k8s-csi--node--driver--sjnkk-eth0"
Sep 4 17:13:31.038367 containerd[1431]: 2024-09-04 17:13:31.030 [INFO][5372] ipam_plugin.go 445: Releasing address using workloadID ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" HandleID="k8s-pod-network.91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e" Workload="localhost-k8s-csi--node--driver--sjnkk-eth0"
Sep 4 17:13:31.038367 containerd[1431]: 2024-09-04 17:13:31.032 [INFO][5372] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:13:31.038367 containerd[1431]: 2024-09-04 17:13:31.036 [INFO][5365] k8s.go 621: Teardown processing complete. ContainerID="91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e"
Sep 4 17:13:31.039495 containerd[1431]: time="2024-09-04T17:13:31.038526145Z" level=info msg="TearDown network for sandbox \"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\" successfully"
Sep 4 17:13:31.044279 containerd[1431]: time="2024-09-04T17:13:31.044217425Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 4 17:13:31.044395 containerd[1431]: time="2024-09-04T17:13:31.044299505Z" level=info msg="RemovePodSandbox \"91d6bcad7160856f10a166a55a71dbbea1b2f91d9f036b3debdb1f5426a75f4e\" returns successfully"
Sep 4 17:13:34.909535 systemd[1]: Started sshd@18-10.0.0.7:22-10.0.0.1:50222.service - OpenSSH per-connection server daemon (10.0.0.1:50222).
Sep 4 17:13:34.948449 sshd[5393]: Accepted publickey for core from 10.0.0.1 port 50222 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk
Sep 4 17:13:34.949474 sshd[5393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:13:34.953413 systemd-logind[1417]: New session 19 of user core.
Sep 4 17:13:34.961425 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 17:13:35.131408 sshd[5393]: pam_unix(sshd:session): session closed for user core
Sep 4 17:13:35.135590 systemd[1]: sshd@18-10.0.0.7:22-10.0.0.1:50222.service: Deactivated successfully.
Sep 4 17:13:35.137492 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 17:13:35.138093 systemd-logind[1417]: Session 19 logged out. Waiting for processes to exit.
Sep 4 17:13:35.139158 systemd-logind[1417]: Removed session 19.
Sep 4 17:13:40.142025 systemd[1]: Started sshd@19-10.0.0.7:22-10.0.0.1:50228.service - OpenSSH per-connection server daemon (10.0.0.1:50228).
Sep 4 17:13:40.199160 sshd[5429]: Accepted publickey for core from 10.0.0.1 port 50228 ssh2: RSA SHA256:bZG4GDjjdFyRf+7zZQ8+tZsxmoYB2474ukIYHB/jTWk
Sep 4 17:13:40.201783 sshd[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:13:40.208303 systemd-logind[1417]: New session 20 of user core.
Sep 4 17:13:40.220421 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 17:13:40.371853 sshd[5429]: pam_unix(sshd:session): session closed for user core
Sep 4 17:13:40.375681 systemd[1]: sshd@19-10.0.0.7:22-10.0.0.1:50228.service: Deactivated successfully.
Sep 4 17:13:40.377899 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 17:13:40.378845 systemd-logind[1417]: Session 20 logged out. Waiting for processes to exit.
Sep 4 17:13:40.379933 systemd-logind[1417]: Removed session 20.