Sep 4 17:32:10.915659 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 4 17:32:10.915680 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Wed Sep 4 15:52:28 -00 2024 Sep 4 17:32:10.915690 kernel: KASLR enabled Sep 4 17:32:10.915696 kernel: efi: EFI v2.7 by EDK II Sep 4 17:32:10.915702 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb8fd018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Sep 4 17:32:10.915708 kernel: random: crng init done Sep 4 17:32:10.915715 kernel: ACPI: Early table checksum verification disabled Sep 4 17:32:10.915721 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Sep 4 17:32:10.915727 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 4 17:32:10.915735 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:32:10.915741 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:32:10.915747 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:32:10.915753 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:32:10.915760 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:32:10.915767 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:32:10.915782 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:32:10.915789 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:32:10.915795 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 4 17:32:10.915802 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 4 17:32:10.915808 kernel: NUMA: Failed to initialise from firmware Sep 
4 17:32:10.915815 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 17:32:10.915822 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Sep 4 17:32:10.915828 kernel: Zone ranges: Sep 4 17:32:10.915835 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 17:32:10.915841 kernel: DMA32 empty Sep 4 17:32:10.915849 kernel: Normal empty Sep 4 17:32:10.915855 kernel: Movable zone start for each node Sep 4 17:32:10.915861 kernel: Early memory node ranges Sep 4 17:32:10.915868 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Sep 4 17:32:10.915874 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Sep 4 17:32:10.915881 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Sep 4 17:32:10.915887 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Sep 4 17:32:10.915893 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Sep 4 17:32:10.915900 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Sep 4 17:32:10.915906 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 4 17:32:10.915912 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 4 17:32:10.915919 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 4 17:32:10.915927 kernel: psci: probing for conduit method from ACPI. Sep 4 17:32:10.915933 kernel: psci: PSCIv1.1 detected in firmware. 
Sep 4 17:32:10.915939 kernel: psci: Using standard PSCI v0.2 function IDs Sep 4 17:32:10.915948 kernel: psci: Trusted OS migration not required Sep 4 17:32:10.915955 kernel: psci: SMC Calling Convention v1.1 Sep 4 17:32:10.915962 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 4 17:32:10.915970 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 4 17:32:10.915977 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 4 17:32:10.915985 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 4 17:32:10.915991 kernel: Detected PIPT I-cache on CPU0 Sep 4 17:32:10.915998 kernel: CPU features: detected: GIC system register CPU interface Sep 4 17:32:10.916005 kernel: CPU features: detected: Hardware dirty bit management Sep 4 17:32:10.916012 kernel: CPU features: detected: Spectre-v4 Sep 4 17:32:10.916019 kernel: CPU features: detected: Spectre-BHB Sep 4 17:32:10.916026 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 4 17:32:10.916033 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 4 17:32:10.916041 kernel: CPU features: detected: ARM erratum 1418040 Sep 4 17:32:10.916048 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 4 17:32:10.916055 kernel: alternatives: applying boot alternatives Sep 4 17:32:10.916063 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc Sep 4 17:32:10.916070 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 4 17:32:10.916077 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:32:10.916084 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 17:32:10.916090 kernel: Fallback order for Node 0: 0 Sep 4 17:32:10.916097 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Sep 4 17:32:10.916104 kernel: Policy zone: DMA Sep 4 17:32:10.916111 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:32:10.916119 kernel: software IO TLB: area num 4. Sep 4 17:32:10.916126 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Sep 4 17:32:10.916133 kernel: Memory: 2386852K/2572288K available (10240K kernel code, 2182K rwdata, 8076K rodata, 39040K init, 897K bss, 185436K reserved, 0K cma-reserved) Sep 4 17:32:10.916141 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 4 17:32:10.916148 kernel: trace event string verifier disabled Sep 4 17:32:10.916155 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:32:10.916162 kernel: rcu: RCU event tracing is enabled. Sep 4 17:32:10.916169 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 4 17:32:10.916176 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:32:10.916183 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:32:10.916190 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 4 17:32:10.916197 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 4 17:32:10.916206 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 4 17:32:10.916213 kernel: GICv3: 256 SPIs implemented Sep 4 17:32:10.916220 kernel: GICv3: 0 Extended SPIs implemented Sep 4 17:32:10.916226 kernel: Root IRQ handler: gic_handle_irq Sep 4 17:32:10.916233 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 4 17:32:10.916240 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 4 17:32:10.916247 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 4 17:32:10.916254 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1) Sep 4 17:32:10.916261 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1) Sep 4 17:32:10.916268 kernel: GICv3: using LPI property table @0x00000000400f0000 Sep 4 17:32:10.916275 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Sep 4 17:32:10.916283 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 17:32:10.916290 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:32:10.916297 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 4 17:32:10.916304 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 4 17:32:10.916310 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 4 17:32:10.916317 kernel: arm-pv: using stolen time PV Sep 4 17:32:10.916324 kernel: Console: colour dummy device 80x25 Sep 4 17:32:10.916331 kernel: ACPI: Core revision 20230628 Sep 4 17:32:10.916338 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Sep 4 17:32:10.916345 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:32:10.916354 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Sep 4 17:32:10.916361 kernel: SELinux: Initializing. Sep 4 17:32:10.916368 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:32:10.916376 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:32:10.916383 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:32:10.916391 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:32:10.916398 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:32:10.916405 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:32:10.916412 kernel: Platform MSI: ITS@0x8080000 domain created Sep 4 17:32:10.916420 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 4 17:32:10.916427 kernel: Remapping and enabling EFI services. Sep 4 17:32:10.916434 kernel: smp: Bringing up secondary CPUs ... 
Sep 4 17:32:10.916441 kernel: Detected PIPT I-cache on CPU1 Sep 4 17:32:10.916448 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 4 17:32:10.916455 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Sep 4 17:32:10.916462 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:32:10.916469 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 4 17:32:10.916476 kernel: Detected PIPT I-cache on CPU2 Sep 4 17:32:10.916484 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 4 17:32:10.916492 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Sep 4 17:32:10.916499 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:32:10.916511 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 4 17:32:10.916520 kernel: Detected PIPT I-cache on CPU3 Sep 4 17:32:10.916528 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 4 17:32:10.916535 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Sep 4 17:32:10.916542 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:32:10.916549 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 4 17:32:10.916557 kernel: smp: Brought up 1 node, 4 CPUs Sep 4 17:32:10.916566 kernel: SMP: Total of 4 processors activated. 
Sep 4 17:32:10.916582 kernel: CPU features: detected: 32-bit EL0 Support Sep 4 17:32:10.916590 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 4 17:32:10.916597 kernel: CPU features: detected: Common not Private translations Sep 4 17:32:10.916605 kernel: CPU features: detected: CRC32 instructions Sep 4 17:32:10.916612 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 4 17:32:10.916619 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 4 17:32:10.916627 kernel: CPU features: detected: LSE atomic instructions Sep 4 17:32:10.916636 kernel: CPU features: detected: Privileged Access Never Sep 4 17:32:10.916643 kernel: CPU features: detected: RAS Extension Support Sep 4 17:32:10.916650 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 4 17:32:10.916658 kernel: CPU: All CPU(s) started at EL1 Sep 4 17:32:10.916665 kernel: alternatives: applying system-wide alternatives Sep 4 17:32:10.916672 kernel: devtmpfs: initialized Sep 4 17:32:10.916680 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:32:10.916687 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 4 17:32:10.916695 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:32:10.916704 kernel: SMBIOS 3.0.0 present. 
Sep 4 17:32:10.916711 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Sep 4 17:32:10.916718 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:32:10.916726 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 4 17:32:10.916734 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 4 17:32:10.916741 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 4 17:32:10.916749 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:32:10.916756 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 Sep 4 17:32:10.916764 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:32:10.916776 kernel: cpuidle: using governor menu Sep 4 17:32:10.916784 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 4 17:32:10.916791 kernel: ASID allocator initialised with 32768 entries Sep 4 17:32:10.916799 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:32:10.916806 kernel: Serial: AMBA PL011 UART driver Sep 4 17:32:10.916813 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 4 17:32:10.916821 kernel: Modules: 0 pages in range for non-PLT usage Sep 4 17:32:10.916828 kernel: Modules: 509120 pages in range for PLT usage Sep 4 17:32:10.916835 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:32:10.916845 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:32:10.916853 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 4 17:32:10.916860 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 4 17:32:10.916868 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:32:10.916875 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:32:10.916883 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 4 
17:32:10.916890 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 4 17:32:10.916898 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:32:10.916905 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:32:10.916914 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:32:10.916921 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:32:10.916929 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 17:32:10.916936 kernel: ACPI: Interpreter enabled Sep 4 17:32:10.916944 kernel: ACPI: Using GIC for interrupt routing Sep 4 17:32:10.916951 kernel: ACPI: MCFG table detected, 1 entries Sep 4 17:32:10.916958 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 4 17:32:10.916966 kernel: printk: console [ttyAMA0] enabled Sep 4 17:32:10.916973 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 4 17:32:10.917122 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 4 17:32:10.917198 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 4 17:32:10.917266 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 4 17:32:10.917333 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 4 17:32:10.917398 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 4 17:32:10.917408 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 4 17:32:10.917415 kernel: PCI host bridge to bus 0000:00 Sep 4 17:32:10.917491 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 4 17:32:10.917554 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 4 17:32:10.917679 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 4 17:32:10.917741 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 4 17:32:10.917835 kernel: pci 0000:00:00.0: [1b36:0008] 
type 00 class 0x060000 Sep 4 17:32:10.917917 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Sep 4 17:32:10.917991 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Sep 4 17:32:10.918071 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Sep 4 17:32:10.918150 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 4 17:32:10.918225 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 4 17:32:10.918293 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Sep 4 17:32:10.918361 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Sep 4 17:32:10.918421 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 4 17:32:10.918480 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 4 17:32:10.918544 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 4 17:32:10.918554 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 4 17:32:10.918562 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 4 17:32:10.918569 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 4 17:32:10.918588 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 4 17:32:10.918595 kernel: iommu: Default domain type: Translated Sep 4 17:32:10.918603 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 4 17:32:10.918611 kernel: efivars: Registered efivars operations Sep 4 17:32:10.918621 kernel: vgaarb: loaded Sep 4 17:32:10.918628 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 4 17:32:10.918636 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:32:10.918643 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:32:10.918651 kernel: pnp: PnP ACPI init Sep 4 17:32:10.918732 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 4 17:32:10.918743 kernel: pnp: PnP ACPI: found 1 devices Sep 4 17:32:10.918750 
kernel: NET: Registered PF_INET protocol family Sep 4 17:32:10.918760 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:32:10.918774 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 4 17:32:10.918783 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:32:10.918790 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 17:32:10.918798 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 4 17:32:10.918806 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 4 17:32:10.918814 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:32:10.918821 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:32:10.918829 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:32:10.918838 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:32:10.918846 kernel: kvm [1]: HYP mode not available Sep 4 17:32:10.918853 kernel: Initialise system trusted keyrings Sep 4 17:32:10.918860 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 4 17:32:10.918868 kernel: Key type asymmetric registered Sep 4 17:32:10.918875 kernel: Asymmetric key parser 'x509' registered Sep 4 17:32:10.918883 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 4 17:32:10.918890 kernel: io scheduler mq-deadline registered Sep 4 17:32:10.918897 kernel: io scheduler kyber registered Sep 4 17:32:10.918906 kernel: io scheduler bfq registered Sep 4 17:32:10.918914 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 4 17:32:10.918921 kernel: ACPI: button: Power Button [PWRB] Sep 4 17:32:10.918929 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 4 17:32:10.919003 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 4 17:32:10.919013 kernel: Serial: 8250/16550 driver, 4 
ports, IRQ sharing enabled Sep 4 17:32:10.919025 kernel: thunder_xcv, ver 1.0 Sep 4 17:32:10.919033 kernel: thunder_bgx, ver 1.0 Sep 4 17:32:10.919040 kernel: nicpf, ver 1.0 Sep 4 17:32:10.919049 kernel: nicvf, ver 1.0 Sep 4 17:32:10.919124 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 4 17:32:10.919191 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-09-04T17:32:10 UTC (1725471130) Sep 4 17:32:10.919201 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 17:32:10.919208 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 4 17:32:10.919216 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 4 17:32:10.919223 kernel: watchdog: Hard watchdog permanently disabled Sep 4 17:32:10.919231 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:32:10.919240 kernel: Segment Routing with IPv6 Sep 4 17:32:10.919248 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:32:10.919255 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:32:10.919263 kernel: Key type dns_resolver registered Sep 4 17:32:10.919270 kernel: registered taskstats version 1 Sep 4 17:32:10.919278 kernel: Loading compiled-in X.509 certificates Sep 4 17:32:10.919285 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 1f5b9f288f9cae6ec9698678cdc0f614482066f7' Sep 4 17:32:10.919292 kernel: Key type .fscrypt registered Sep 4 17:32:10.919300 kernel: Key type fscrypt-provisioning registered Sep 4 17:32:10.919309 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 4 17:32:10.919316 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:32:10.919323 kernel: ima: No architecture policies found Sep 4 17:32:10.919331 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 4 17:32:10.919338 kernel: clk: Disabling unused clocks Sep 4 17:32:10.919345 kernel: Freeing unused kernel memory: 39040K Sep 4 17:32:10.919352 kernel: Run /init as init process Sep 4 17:32:10.919360 kernel: with arguments: Sep 4 17:32:10.919367 kernel: /init Sep 4 17:32:10.919375 kernel: with environment: Sep 4 17:32:10.919383 kernel: HOME=/ Sep 4 17:32:10.919390 kernel: TERM=linux Sep 4 17:32:10.919398 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:32:10.919407 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:32:10.919417 systemd[1]: Detected virtualization kvm. Sep 4 17:32:10.919425 systemd[1]: Detected architecture arm64. Sep 4 17:32:10.919433 systemd[1]: Running in initrd. Sep 4 17:32:10.919442 systemd[1]: No hostname configured, using default hostname. Sep 4 17:32:10.919450 systemd[1]: Hostname set to . Sep 4 17:32:10.919458 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:32:10.919466 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:32:10.919474 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:32:10.919482 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:32:10.919491 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:32:10.919499 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Sep 4 17:32:10.919508 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:32:10.919517 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:32:10.919527 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:32:10.919535 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:32:10.919543 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:32:10.919551 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:32:10.919559 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:32:10.919569 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:32:10.919665 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:32:10.919673 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:32:10.919681 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:32:10.919690 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:32:10.919698 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:32:10.919706 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:32:10.919714 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:32:10.919725 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:32:10.919734 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:32:10.919742 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:32:10.919750 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 17:32:10.919758 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Sep 4 17:32:10.919767 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 17:32:10.919781 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 17:32:10.919789 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:32:10.919798 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:32:10.919808 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:32:10.919816 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 17:32:10.919825 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:32:10.919833 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 17:32:10.919842 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:32:10.919872 systemd-journald[238]: Collecting audit messages is disabled. Sep 4 17:32:10.919892 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:32:10.919901 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:32:10.919911 systemd-journald[238]: Journal started Sep 4 17:32:10.919930 systemd-journald[238]: Runtime Journal (/run/log/journal/00ed91cdd8704a1c9c760041e1baf20f) is 5.9M, max 47.3M, 41.4M free. Sep 4 17:32:10.911271 systemd-modules-load[239]: Inserted module 'overlay' Sep 4 17:32:10.924601 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:32:10.924911 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:32:10.928515 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:32:10.930753 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Sep 4 17:32:10.930752 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Sep 4 17:32:10.932938 kernel: Bridge firewalling registered Sep 4 17:32:10.932004 systemd-modules-load[239]: Inserted module 'br_netfilter' Sep 4 17:32:10.933145 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:32:10.935653 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:32:10.942328 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:32:10.944493 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:32:10.947920 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:32:10.956759 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:32:10.959083 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:32:10.962668 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 17:32:10.977621 dracut-cmdline[277]: dracut-dracut-053 Sep 4 17:32:10.980678 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc Sep 4 17:32:10.983585 systemd-resolved[274]: Positive Trust Anchors: Sep 4 17:32:10.983596 systemd-resolved[274]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:32:10.983630 systemd-resolved[274]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 17:32:10.988778 systemd-resolved[274]: Defaulting to hostname 'linux'. Sep 4 17:32:10.989891 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:32:10.993786 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:32:11.049616 kernel: SCSI subsystem initialized Sep 4 17:32:11.054596 kernel: Loading iSCSI transport class v2.0-870. Sep 4 17:32:11.061596 kernel: iscsi: registered transport (tcp) Sep 4 17:32:11.074917 kernel: iscsi: registered transport (qla4xxx) Sep 4 17:32:11.074978 kernel: QLogic iSCSI HBA Driver Sep 4 17:32:11.118613 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 17:32:11.125766 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 17:32:11.142732 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 4 17:32:11.142803 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:32:11.142815 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:32:11.192611 kernel: raid6: neonx8 gen() 15754 MB/s Sep 4 17:32:11.209593 kernel: raid6: neonx4 gen() 15647 MB/s Sep 4 17:32:11.226593 kernel: raid6: neonx2 gen() 13274 MB/s Sep 4 17:32:11.243591 kernel: raid6: neonx1 gen() 10450 MB/s Sep 4 17:32:11.260590 kernel: raid6: int64x8 gen() 6960 MB/s Sep 4 17:32:11.277601 kernel: raid6: int64x4 gen() 7341 MB/s Sep 4 17:32:11.294593 kernel: raid6: int64x2 gen() 6128 MB/s Sep 4 17:32:11.311591 kernel: raid6: int64x1 gen() 5055 MB/s Sep 4 17:32:11.311611 kernel: raid6: using algorithm neonx8 gen() 15754 MB/s Sep 4 17:32:11.328594 kernel: raid6: .... xor() 11927 MB/s, rmw enabled Sep 4 17:32:11.328607 kernel: raid6: using neon recovery algorithm Sep 4 17:32:11.333595 kernel: xor: measuring software checksum speed Sep 4 17:32:11.334591 kernel: 8regs : 19859 MB/sec Sep 4 17:32:11.335585 kernel: 32regs : 19682 MB/sec Sep 4 17:32:11.336590 kernel: arm64_neon : 27027 MB/sec Sep 4 17:32:11.336602 kernel: xor: using function: arm64_neon (27027 MB/sec) Sep 4 17:32:11.397604 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:32:11.411402 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:32:11.421792 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:32:11.434132 systemd-udevd[459]: Using default interface naming scheme 'v255'. Sep 4 17:32:11.437505 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:32:11.445741 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 17:32:11.460085 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Sep 4 17:32:11.490984 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 4 17:32:11.506857 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:32:11.553608 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:32:11.578832 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:32:11.592620 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:32:11.593789 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:32:11.596447 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:32:11.598900 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:32:11.608746 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:32:11.621200 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 4 17:32:11.621360 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 4 17:32:11.622330 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:32:11.627317 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:32:11.627422 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:32:11.636562 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:32:11.636599 kernel: GPT:9289727 != 19775487
Sep 4 17:32:11.636610 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:32:11.636620 kernel: GPT:9289727 != 19775487
Sep 4 17:32:11.636629 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:32:11.636638 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:32:11.636000 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:32:11.637251 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:32:11.637416 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:32:11.640702 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:32:11.650962 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:32:11.659142 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (509)
Sep 4 17:32:11.660590 kernel: BTRFS: device fsid 2be47701-3393-455e-86fc-33755ceb9c20 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (515)
Sep 4 17:32:11.666203 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:32:11.675342 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 4 17:32:11.680540 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 4 17:32:11.685325 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 17:32:11.689628 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 4 17:32:11.690500 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 4 17:32:11.702736 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:32:11.704584 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:32:11.711479 disk-uuid[550]: Primary Header is updated.
Sep 4 17:32:11.711479 disk-uuid[550]: Secondary Entries is updated.
Sep 4 17:32:11.711479 disk-uuid[550]: Secondary Header is updated.
Sep 4 17:32:11.714598 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:32:11.732066 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:32:12.729607 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 4 17:32:12.731659 disk-uuid[552]: The operation has completed successfully.
Sep 4 17:32:12.768171 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:32:12.768269 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:32:12.792770 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:32:12.795957 sh[575]: Success
Sep 4 17:32:12.815698 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 4 17:32:12.850437 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:32:12.860040 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:32:12.864337 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:32:12.875603 kernel: BTRFS info (device dm-0): first mount of filesystem 2be47701-3393-455e-86fc-33755ceb9c20
Sep 4 17:32:12.875653 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:32:12.875665 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:32:12.877032 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:32:12.877050 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:32:12.886290 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:32:12.887728 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:32:12.899781 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:32:12.904088 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:32:12.916513 kernel: BTRFS info (device vda6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:32:12.916588 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:32:12.916607 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:32:12.921517 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:32:12.929835 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:32:12.931610 kernel: BTRFS info (device vda6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:32:12.940728 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:32:12.947780 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:32:13.014677 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:32:13.033452 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:32:13.054001 systemd-networkd[764]: lo: Link UP
Sep 4 17:32:13.057030 systemd-networkd[764]: lo: Gained carrier
Sep 4 17:32:13.057936 systemd-networkd[764]: Enumeration completed
Sep 4 17:32:13.058068 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:32:13.058467 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:32:13.058471 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:32:13.059280 systemd-networkd[764]: eth0: Link UP
Sep 4 17:32:13.059283 systemd-networkd[764]: eth0: Gained carrier
Sep 4 17:32:13.059291 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:32:13.059768 systemd[1]: Reached target network.target - Network.
Sep 4 17:32:13.078706 ignition[677]: Ignition 2.18.0
Sep 4 17:32:13.078718 ignition[677]: Stage: fetch-offline
Sep 4 17:32:13.078759 ignition[677]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:32:13.078776 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:32:13.080632 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 17:32:13.078873 ignition[677]: parsed url from cmdline: ""
Sep 4 17:32:13.078876 ignition[677]: no config URL provided
Sep 4 17:32:13.078881 ignition[677]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:32:13.078889 ignition[677]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:32:13.078925 ignition[677]: op(1): [started] loading QEMU firmware config module
Sep 4 17:32:13.078930 ignition[677]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 4 17:32:13.086494 ignition[677]: op(1): [finished] loading QEMU firmware config module
Sep 4 17:32:13.127946 ignition[677]: parsing config with SHA512: 659af48af6dddcff850e879a0e6424af1b373624a4899d329050f21ef5b6ad9fb505e468a5e7551513eec33ffc021d3bcdc8887c7a420679fb187bb0fb5b2339
Sep 4 17:32:13.134232 unknown[677]: fetched base config from "system"
Sep 4 17:32:13.134244 unknown[677]: fetched user config from "qemu"
Sep 4 17:32:13.134742 ignition[677]: fetch-offline: fetch-offline passed
Sep 4 17:32:13.136522 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:32:13.134827 ignition[677]: Ignition finished successfully
Sep 4 17:32:13.138093 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 4 17:32:13.150826 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:32:13.163499 ignition[771]: Ignition 2.18.0
Sep 4 17:32:13.163510 ignition[771]: Stage: kargs
Sep 4 17:32:13.163689 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:32:13.163699 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:32:13.167277 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:32:13.164545 ignition[771]: kargs: kargs passed
Sep 4 17:32:13.164613 ignition[771]: Ignition finished successfully
Sep 4 17:32:13.176857 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:32:13.190015 ignition[780]: Ignition 2.18.0
Sep 4 17:32:13.190026 ignition[780]: Stage: disks
Sep 4 17:32:13.190204 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:32:13.192938 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:32:13.190214 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:32:13.193979 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:32:13.191102 ignition[780]: disks: disks passed
Sep 4 17:32:13.195450 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:32:13.191152 ignition[780]: Ignition finished successfully
Sep 4 17:32:13.197414 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:32:13.199052 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:32:13.200585 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:32:13.212734 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:32:13.224501 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 17:32:13.230459 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:32:13.232943 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:32:13.285605 kernel: EXT4-fs (vda9): mounted filesystem f2f4f3ba-c5a3-49c0-ace4-444935e9934b r/w with ordered data mode. Quota mode: none.
Sep 4 17:32:13.285901 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:32:13.287202 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:32:13.301717 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:32:13.304022 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:32:13.304955 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:32:13.305002 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:32:13.305029 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:32:13.311669 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:32:13.314173 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:32:13.318592 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (798)
Sep 4 17:32:13.320908 kernel: BTRFS info (device vda6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:32:13.320941 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:32:13.320960 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:32:13.323637 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:32:13.325642 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:32:13.380773 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:32:13.385050 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:32:13.389086 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:32:13.392897 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:32:13.466591 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:32:13.474706 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:32:13.476235 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:32:13.482592 kernel: BTRFS info (device vda6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:32:13.486175 systemd-resolved[274]: Detected conflict on linux IN A 10.0.0.119
Sep 4 17:32:13.486187 systemd-resolved[274]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Sep 4 17:32:13.498012 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:32:13.500306 ignition[912]: INFO : Ignition 2.18.0
Sep 4 17:32:13.500306 ignition[912]: INFO : Stage: mount
Sep 4 17:32:13.501898 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:32:13.501898 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:32:13.501898 ignition[912]: INFO : mount: mount passed
Sep 4 17:32:13.501898 ignition[912]: INFO : Ignition finished successfully
Sep 4 17:32:13.503199 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:32:13.508696 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:32:13.874910 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:32:13.884786 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:32:13.890606 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (925)
Sep 4 17:32:13.895013 kernel: BTRFS info (device vda6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:32:13.895045 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:32:13.895724 kernel: BTRFS info (device vda6): using free space tree
Sep 4 17:32:13.898598 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 4 17:32:13.899268 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:32:13.917349 ignition[942]: INFO : Ignition 2.18.0
Sep 4 17:32:13.917349 ignition[942]: INFO : Stage: files
Sep 4 17:32:13.918649 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:32:13.918649 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:32:13.918649 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:32:13.922039 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:32:13.922039 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:32:13.925530 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:32:13.926991 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:32:13.926991 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:32:13.926113 unknown[942]: wrote ssh authorized keys file for user: core
Sep 4 17:32:13.930329 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:32:13.930329 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 4 17:32:13.989900 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 17:32:14.029467 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:32:14.031651 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:32:14.031651 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:32:14.031651 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:32:14.031651 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:32:14.031651 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:32:14.031651 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:32:14.031651 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:32:14.031651 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:32:14.031651 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:32:14.031651 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:32:14.031651 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Sep 4 17:32:14.031651 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Sep 4 17:32:14.031651 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Sep 4 17:32:14.031651 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Sep 4 17:32:14.352169 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 17:32:14.617599 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Sep 4 17:32:14.617599 ignition[942]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 17:32:14.621286 ignition[942]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:32:14.623100 ignition[942]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:32:14.623100 ignition[942]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 17:32:14.623100 ignition[942]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 4 17:32:14.623100 ignition[942]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 17:32:14.623100 ignition[942]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 4 17:32:14.623100 ignition[942]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 4 17:32:14.623100 ignition[942]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 4 17:32:14.642739 systemd-networkd[764]: eth0: Gained IPv6LL
Sep 4 17:32:14.678190 ignition[942]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 17:32:14.682459 ignition[942]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 4 17:32:14.685101 ignition[942]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 4 17:32:14.685101 ignition[942]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:32:14.685101 ignition[942]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:32:14.685101 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:32:14.685101 ignition[942]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:32:14.685101 ignition[942]: INFO : files: files passed
Sep 4 17:32:14.685101 ignition[942]: INFO : Ignition finished successfully
Sep 4 17:32:14.685583 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:32:14.696730 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:32:14.698368 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:32:14.702929 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:32:14.703061 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:32:14.708352 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 4 17:32:14.710794 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:32:14.710794 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:32:14.714341 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:32:14.715196 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:32:14.716890 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:32:14.722773 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:32:14.747194 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:32:14.747321 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:32:14.749317 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:32:14.750343 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:32:14.751299 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:32:14.752156 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:32:14.770661 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:32:14.781792 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:32:14.790555 systemd[1]: Stopped target network.target - Network.
Sep 4 17:32:14.791362 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:32:14.792953 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:32:14.794948 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:32:14.796417 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:32:14.796584 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:32:14.798434 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:32:14.800554 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:32:14.801939 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 17:32:14.803404 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:32:14.805197 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 17:32:14.808262 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 17:32:14.810284 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:32:14.811713 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 17:32:14.813313 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 17:32:14.814645 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 17:32:14.816234 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 17:32:14.816363 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:32:14.818423 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:32:14.820463 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:32:14.822179 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 17:32:14.826654 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:32:14.827600 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 17:32:14.827729 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:32:14.830144 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 17:32:14.830265 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:32:14.832092 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 17:32:14.833516 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 17:32:14.834457 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:32:14.836429 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 17:32:14.838095 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 17:32:14.840154 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 17:32:14.840285 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:32:14.841814 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 17:32:14.841939 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:32:14.843587 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 17:32:14.843764 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:32:14.845397 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 17:32:14.845674 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 17:32:14.861861 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 17:32:14.863554 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 17:32:14.863747 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:32:14.869846 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 17:32:14.870870 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 17:32:14.872789 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 17:32:14.874563 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 17:32:14.874614 systemd-networkd[764]: eth0: DHCPv6 lease lost
Sep 4 17:32:14.881737 ignition[998]: INFO : Ignition 2.18.0
Sep 4 17:32:14.881737 ignition[998]: INFO : Stage: umount
Sep 4 17:32:14.881737 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:32:14.881737 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 4 17:32:14.874728 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:32:14.895627 ignition[998]: INFO : umount: umount passed
Sep 4 17:32:14.895627 ignition[998]: INFO : Ignition finished successfully
Sep 4 17:32:14.878469 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 17:32:14.880191 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:32:14.889762 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 17:32:14.889871 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 17:32:14.892084 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 17:32:14.892167 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 17:32:14.894987 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 17:32:14.895076 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 17:32:14.899606 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:32:14.901857 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 17:32:14.902199 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:32:14.904268 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 17:32:14.904322 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 17:32:14.906523 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 17:32:14.906628 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 17:32:14.908512 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 17:32:14.908557 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 17:32:14.910473 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 17:32:14.910520 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 17:32:14.916701 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 17:32:14.917497 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 17:32:14.917550 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:32:14.919705 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:32:14.919750 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:32:14.921532 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 17:32:14.921594 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:32:14.923614 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 17:32:14.923665 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:32:14.925965 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:32:14.928700 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 17:32:14.928799 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 17:32:14.939709 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 17:32:14.939819 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 17:32:14.942318 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 17:32:14.942459 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:32:14.945320 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 17:32:14.945359 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:32:14.946845 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 17:32:14.946878 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:32:14.948688 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 17:32:14.948746 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:32:14.951210 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 17:32:14.951257 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:32:14.954154 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:32:14.954200 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:32:14.962728 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 17:32:14.964408 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 17:32:14.964467 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:32:14.966418 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:32:14.966461 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:32:14.968835 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 17:32:14.969599 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 17:32:14.970466 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 17:32:14.970544 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 17:32:14.973072 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 17:32:14.974337 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 17:32:14.974401 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 17:32:14.977080 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 17:32:14.986990 systemd[1]: Switching root.
Sep 4 17:32:15.019137 systemd-journald[238]: Journal stopped
Sep 4 17:32:15.733030 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Sep 4 17:32:15.733087 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 17:32:15.733099 kernel: SELinux: policy capability open_perms=1
Sep 4 17:32:15.733109 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 17:32:15.733120 kernel: SELinux: policy capability always_check_network=0
Sep 4 17:32:15.733133 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 17:32:15.733144 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 17:32:15.733154 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 17:32:15.733164 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 17:32:15.733174 kernel: audit: type=1403 audit(1725471135.162:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 17:32:15.733185 systemd[1]: Successfully loaded SELinux policy in 32.546ms.
Sep 4 17:32:15.733213 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.119ms.
Sep 4 17:32:15.733230 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:32:15.733245 systemd[1]: Detected virtualization kvm.
Sep 4 17:32:15.733258 systemd[1]: Detected architecture arm64.
Sep 4 17:32:15.733269 systemd[1]: Detected first boot.
Sep 4 17:32:15.733280 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:32:15.733292 zram_generator::config[1042]: No configuration found.
Sep 4 17:32:15.733304 systemd[1]: Populated /etc with preset unit settings.
Sep 4 17:32:15.733315 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 17:32:15.733326 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 17:32:15.733337 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:32:15.733351 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 17:32:15.733362 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 17:32:15.733373 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 17:32:15.733385 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 17:32:15.733398 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 17:32:15.733415 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 17:32:15.733426 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 17:32:15.733437 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 17:32:15.733448 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:32:15.733462 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:32:15.733474 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 17:32:15.733485 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 17:32:15.733496 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 17:32:15.733508 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:32:15.733520 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 4 17:32:15.733531 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:32:15.733543 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 17:32:15.733557 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 17:32:15.733625 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:32:15.733641 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 17:32:15.733653 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:32:15.733665 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:32:15.733675 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:32:15.733686 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:32:15.733697 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 17:32:15.733711 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 17:32:15.733722 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:32:15.733733 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:32:15.733745 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:32:15.733762 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 17:32:15.733820 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 17:32:15.733835 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 17:32:15.733846 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 17:32:15.733858 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 17:32:15.733872 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 17:32:15.733885 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 17:32:15.733898 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 17:32:15.733911 systemd[1]: Reached target machines.target - Containers.
Sep 4 17:32:15.733923 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 17:32:15.733935 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:32:15.733947 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:32:15.733958 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 17:32:15.733969 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:32:15.733982 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:32:15.733994 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:32:15.734005 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 17:32:15.734019 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:32:15.734030 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 17:32:15.734041 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 17:32:15.734052 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 17:32:15.734063 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 17:32:15.734076 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 17:32:15.734086 kernel: fuse: init (API version 7.39)
Sep 4 17:32:15.734097 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:32:15.734107 kernel: loop: module loaded
Sep 4 17:32:15.734118 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:32:15.734128 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 17:32:15.734140 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 17:32:15.734150 kernel: ACPI: bus type drm_connector registered
Sep 4 17:32:15.734160 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:32:15.734172 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 17:32:15.734184 systemd[1]: Stopped verity-setup.service.
Sep 4 17:32:15.734195 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 17:32:15.734206 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 17:32:15.734217 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 17:32:15.734228 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 17:32:15.734260 systemd-journald[1102]: Collecting audit messages is disabled.
Sep 4 17:32:15.734286 systemd-journald[1102]: Journal started
Sep 4 17:32:15.734308 systemd-journald[1102]: Runtime Journal (/run/log/journal/00ed91cdd8704a1c9c760041e1baf20f) is 5.9M, max 47.3M, 41.4M free.
Sep 4 17:32:15.542093 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 17:32:15.557506 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 4 17:32:15.557897 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 17:32:15.735601 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:32:15.736903 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 17:32:15.737905 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 17:32:15.739070 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:32:15.740502 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 17:32:15.742804 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 17:32:15.744298 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:32:15.744423 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:32:15.745836 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:32:15.745986 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:32:15.747142 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:32:15.747284 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:32:15.748901 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 17:32:15.749060 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 17:32:15.750423 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:32:15.750590 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:32:15.751848 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:32:15.753733 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 17:32:15.755468 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 17:32:15.757343 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 17:32:15.772067 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 17:32:15.786707 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 17:32:15.788850 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 17:32:15.789710 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 17:32:15.789768 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:32:15.792002 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 4 17:32:15.794229 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 17:32:15.796509 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 17:32:15.797817 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:32:15.799634 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 17:32:15.801451 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 17:32:15.802707 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:32:15.804770 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 17:32:15.805817 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:32:15.809805 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:32:15.813139 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 17:32:15.817804 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 17:32:15.819943 systemd-journald[1102]: Time spent on flushing to /var/log/journal/00ed91cdd8704a1c9c760041e1baf20f is 21.266ms for 856 entries.
Sep 4 17:32:15.819943 systemd-journald[1102]: System Journal (/var/log/journal/00ed91cdd8704a1c9c760041e1baf20f) is 8.0M, max 195.6M, 187.6M free.
Sep 4 17:32:15.863847 systemd-journald[1102]: Received client request to flush runtime journal.
Sep 4 17:32:15.863910 kernel: loop0: detected capacity change from 0 to 59688
Sep 4 17:32:15.863934 kernel: block loop0: the capability attribute has been deprecated.
Sep 4 17:32:15.864131 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 17:32:15.821595 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:32:15.823197 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 17:32:15.826043 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 17:32:15.827413 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 17:32:15.829303 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 17:32:15.842672 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 17:32:15.858845 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 4 17:32:15.864326 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 17:32:15.867851 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 17:32:15.871521 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:32:15.881240 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 4 17:32:15.887663 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 17:32:15.888460 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 4 17:32:15.891053 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 17:32:15.898601 kernel: loop1: detected capacity change from 0 to 194096
Sep 4 17:32:15.899960 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:32:15.921155 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Sep 4 17:32:15.921171 systemd-tmpfiles[1172]: ACLs are not supported, ignoring.
Sep 4 17:32:15.925834 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:32:15.945631 kernel: loop2: detected capacity change from 0 to 113672
Sep 4 17:32:15.974612 kernel: loop3: detected capacity change from 0 to 59688
Sep 4 17:32:15.980041 kernel: loop4: detected capacity change from 0 to 194096
Sep 4 17:32:15.985642 kernel: loop5: detected capacity change from 0 to 113672
Sep 4 17:32:15.988541 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 4 17:32:15.989044 (sd-merge)[1178]: Merged extensions into '/usr'.
Sep 4 17:32:15.995380 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 17:32:15.995396 systemd[1]: Reloading...
Sep 4 17:32:16.049844 zram_generator::config[1200]: No configuration found.
Sep 4 17:32:16.104990 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 17:32:16.145855 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:32:16.184289 systemd[1]: Reloading finished in 188 ms.
Sep 4 17:32:16.215384 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 17:32:16.216941 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 17:32:16.232772 systemd[1]: Starting ensure-sysext.service...
Sep 4 17:32:16.234705 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:32:16.243779 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)...
Sep 4 17:32:16.243796 systemd[1]: Reloading...
Sep 4 17:32:16.255598 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 17:32:16.255876 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 17:32:16.256528 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 17:32:16.256767 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Sep 4 17:32:16.256818 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Sep 4 17:32:16.258883 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:32:16.258896 systemd-tmpfiles[1238]: Skipping /boot
Sep 4 17:32:16.265863 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:32:16.265878 systemd-tmpfiles[1238]: Skipping /boot
Sep 4 17:32:16.289614 zram_generator::config[1267]: No configuration found.
Sep 4 17:32:16.385877 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:32:16.424781 systemd[1]: Reloading finished in 180 ms.
Sep 4 17:32:16.440062 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 17:32:16.441935 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:32:16.458194 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:32:16.460879 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 17:32:16.462952 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 17:32:16.468919 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:32:16.472907 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:32:16.477702 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 17:32:16.488812 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 17:32:16.491630 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:32:16.495974 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:32:16.499317 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:32:16.503107 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:32:16.504133 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:32:16.504866 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:32:16.506299 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:32:16.508547 systemd-udevd[1309]: Using default interface naming scheme 'v255'.
Sep 4 17:32:16.509332 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:32:16.509460 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:32:16.515633 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 17:32:16.521187 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:32:16.522610 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:32:16.527552 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:32:16.541077 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:32:16.545879 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:32:16.554042 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:32:16.555358 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:32:16.557440 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 17:32:16.559299 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:32:16.560980 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 17:32:16.563788 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 17:32:16.568407 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 17:32:16.570202 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:32:16.570334 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:32:16.571879 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:32:16.572014 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:32:16.573441 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:32:16.573553 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:32:16.577835 augenrules[1344]: No rules
Sep 4 17:32:16.578771 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:32:16.587447 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 17:32:16.599790 systemd[1]: Finished ensure-sysext.service.
Sep 4 17:32:16.604689 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:32:16.614894 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:32:16.617316 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:32:16.626796 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:32:16.635801 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:32:16.636682 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:32:16.637625 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1343)
Sep 4 17:32:16.640307 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:32:16.642593 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1334)
Sep 4 17:32:16.647764 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 4 17:32:16.648662 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 17:32:16.649117 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:32:16.649297 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:32:16.651428 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:32:16.652094 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:32:16.653565 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:32:16.654830 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:32:16.658448 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 4 17:32:16.664210 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:32:16.664381 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:32:16.675014 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:32:16.675086 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:32:16.681044 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 4 17:32:16.696396 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 17:32:16.713871 systemd-resolved[1304]: Positive Trust Anchors:
Sep 4 17:32:16.713889 systemd-resolved[1304]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:32:16.713919 systemd-resolved[1304]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:32:16.722820 systemd-resolved[1304]: Defaulting to hostname 'linux'.
Sep 4 17:32:16.726640 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:32:16.729233 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:32:16.741645 systemd-networkd[1375]: lo: Link UP
Sep 4 17:32:16.741656 systemd-networkd[1375]: lo: Gained carrier
Sep 4 17:32:16.742341 systemd-networkd[1375]: Enumeration completed
Sep 4 17:32:16.743843 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:32:16.745279 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 4 17:32:16.745762 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:32:16.745772 systemd-networkd[1375]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:32:16.746930 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:32:16.747033 systemd-networkd[1375]: eth0: Link UP
Sep 4 17:32:16.747044 systemd-networkd[1375]: eth0: Gained carrier
Sep 4 17:32:16.747057 systemd-networkd[1375]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:32:16.750609 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 17:32:16.752564 systemd[1]: Reached target network.target - Network.
Sep 4 17:32:16.753387 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 17:32:16.770865 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 17:32:16.771623 systemd-networkd[1375]: eth0: DHCPv4 address 10.0.0.119/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 4 17:32:16.772813 systemd-timesyncd[1376]: Network configuration changed, trying to establish connection.
Sep 4 17:32:16.773726 systemd-timesyncd[1376]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 4 17:32:16.773789 systemd-timesyncd[1376]: Initial clock synchronization to Wed 2024-09-04 17:32:17.163405 UTC.
Sep 4 17:32:16.781637 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 17:32:16.801824 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 17:32:16.803237 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:32:16.820158 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:32:16.850262 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 17:32:16.851530 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:32:16.852730 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:32:16.853887 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 17:32:16.854927 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 17:32:16.856087 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 17:32:16.857160 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 4 17:32:16.858614 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 4 17:32:16.859530 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 4 17:32:16.859566 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:32:16.860242 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:32:16.862183 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 4 17:32:16.864448 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 4 17:32:16.872532 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 4 17:32:16.874527 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 17:32:16.876176 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 4 17:32:16.877207 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:32:16.878032 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:32:16.878828 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:32:16.878856 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 4 17:32:16.879767 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 4 17:32:16.881446 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 4 17:32:16.883588 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:32:16.883692 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 4 17:32:16.887758 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 4 17:32:16.888622 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 4 17:32:16.890793 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 4 17:32:16.894943 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 4 17:32:16.898873 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 4 17:32:16.898981 jq[1408]: false
Sep 4 17:32:16.903464 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 4 17:32:16.908638 extend-filesystems[1409]: Found loop3
Sep 4 17:32:16.908638 extend-filesystems[1409]: Found loop4
Sep 4 17:32:16.908638 extend-filesystems[1409]: Found loop5
Sep 4 17:32:16.908638 extend-filesystems[1409]: Found vda
Sep 4 17:32:16.908638 extend-filesystems[1409]: Found vda1
Sep 4 17:32:16.908638 extend-filesystems[1409]: Found vda2
Sep 4 17:32:16.915820 extend-filesystems[1409]: Found vda3
Sep 4 17:32:16.915820 extend-filesystems[1409]: Found usr
Sep 4 17:32:16.915820 extend-filesystems[1409]: Found vda4
Sep 4 17:32:16.915820 extend-filesystems[1409]: Found vda6
Sep 4 17:32:16.915820 extend-filesystems[1409]: Found vda7
Sep 4 17:32:16.915820 extend-filesystems[1409]: Found vda9
Sep 4 17:32:16.915820 extend-filesystems[1409]: Checking size of /dev/vda9
Sep 4 17:32:16.915191 dbus-daemon[1407]: [system] SELinux support is enabled
Sep 4 17:32:16.939233 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1343)
Sep 4 17:32:16.939260 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 4 17:32:16.909218 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 4 17:32:16.939391 extend-filesystems[1409]: Resized partition /dev/vda9
Sep 4 17:32:16.911083 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 4 17:32:16.940562 extend-filesystems[1429]: resize2fs 1.47.0 (5-Feb-2023)
Sep 4 17:32:16.912373 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 4 17:32:16.914832 systemd[1]: Starting update-engine.service - Update Engine...
Sep 4 17:32:16.920451 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 4 17:32:16.941829 jq[1426]: true
Sep 4 17:32:16.922268 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 4 17:32:16.927467 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 17:32:16.936628 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 4 17:32:16.936822 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 4 17:32:16.937136 systemd[1]: motdgen.service: Deactivated successfully.
Sep 4 17:32:16.937335 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 4 17:32:16.943365 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 4 17:32:16.943955 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 4 17:32:16.964272 update_engine[1422]: I0904 17:32:16.964061 1422 main.cc:92] Flatcar Update Engine starting
Sep 4 17:32:16.964586 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 4 17:32:16.968761 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 4 17:32:16.969123 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 4 17:32:16.987774 extend-filesystems[1429]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 4 17:32:16.987774 extend-filesystems[1429]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 4 17:32:16.987774 extend-filesystems[1429]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
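The EXT4 resize messages here can be cross-checked with a little arithmetic; this is just a sketch using the block counts the kernel and resize2fs reported above (4 KiB blocks):

```shell
# Sanity-check the resize figures from the log:
# EXT4-fs (vda9) grew from 553472 to 1864699 blocks, 4096 bytes each.
old_blocks=553472
new_blocks=1864699
old_bytes=$((old_blocks * 4096))
new_bytes=$((new_blocks * 4096))
echo "old=${old_bytes} new=${new_bytes}"
# Final size in GiB (prints: final size ~ 7.11 GiB)
awk -v b="$new_bytes" 'BEGIN { printf "final size ~ %.2f GiB\n", b / (1024 * 1024 * 1024) }'
```

So the root filesystem went from roughly 2.1 GiB to about 7.1 GiB when extend-filesystems.service grew /dev/vda9 online.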
Sep 4 17:32:16.999275 jq[1434]: true
Sep 4 17:32:16.999707 update_engine[1422]: I0904 17:32:16.972131 1422 update_check_scheduler.cc:74] Next update check in 10m2s
Sep 4 17:32:16.971362 (ntainerd)[1436]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 4 17:32:17.009208 extend-filesystems[1409]: Resized filesystem in /dev/vda9
Sep 4 17:32:16.971774 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 4 17:32:16.971810 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 4 17:32:16.985761 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 4 17:32:16.985985 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 4 17:32:16.992186 systemd[1]: Started update-engine.service - Update Engine.
Sep 4 17:32:16.996722 systemd-logind[1418]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 4 17:32:16.997640 systemd-logind[1418]: New seat seat0.
Sep 4 17:32:17.007923 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 4 17:32:17.011610 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 4 17:32:17.013735 tar[1432]: linux-arm64/helm
Sep 4 17:32:17.078119 bash[1463]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 17:32:17.080146 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 4 17:32:17.082015 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 4 17:32:17.082706 locksmithd[1453]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 4 17:32:17.203319 containerd[1436]: time="2024-09-04T17:32:17.203187834Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Sep 4 17:32:17.233179 containerd[1436]: time="2024-09-04T17:32:17.233130674Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 4 17:32:17.233364 containerd[1436]: time="2024-09-04T17:32:17.233342389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:32:17.234811 containerd[1436]: time="2024-09-04T17:32:17.234718561Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:32:17.234860 containerd[1436]: time="2024-09-04T17:32:17.234812317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:32:17.235081 containerd[1436]: time="2024-09-04T17:32:17.235054907Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:32:17.235081 containerd[1436]: time="2024-09-04T17:32:17.235078650Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 4 17:32:17.235182 containerd[1436]: time="2024-09-04T17:32:17.235164309Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 4 17:32:17.235235 containerd[1436]: time="2024-09-04T17:32:17.235218255Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:32:17.235235 containerd[1436]: time="2024-09-04T17:32:17.235233776Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 4 17:32:17.235310 containerd[1436]: time="2024-09-04T17:32:17.235293805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:32:17.235510 containerd[1436]: time="2024-09-04T17:32:17.235489370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 4 17:32:17.235543 containerd[1436]: time="2024-09-04T17:32:17.235512904Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Sep 4 17:32:17.235543 containerd[1436]: time="2024-09-04T17:32:17.235525572Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 4 17:32:17.235805 containerd[1436]: time="2024-09-04T17:32:17.235780285Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 4 17:32:17.235842 containerd[1436]: time="2024-09-04T17:32:17.235804196Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 4 17:32:17.235946 containerd[1436]: time="2024-09-04T17:32:17.235921905Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Sep 4 17:32:17.235971 containerd[1436]: time="2024-09-04T17:32:17.235944347Z" level=info msg="metadata content store policy set" policy=shared
Sep 4 17:32:17.240717 containerd[1436]: time="2024-09-04T17:32:17.240688588Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 4 17:32:17.240748 containerd[1436]: time="2024-09-04T17:32:17.240724161Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 4 17:32:17.240748 containerd[1436]: time="2024-09-04T17:32:17.240738171Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 4 17:32:17.240997 containerd[1436]: time="2024-09-04T17:32:17.240972413Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 4 17:32:17.241087 containerd[1436]: time="2024-09-04T17:32:17.241011258Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 4 17:32:17.241112 containerd[1436]: time="2024-09-04T17:32:17.241089073Z" level=info msg="NRI interface is disabled by configuration."
Sep 4 17:32:17.241112 containerd[1436]: time="2024-09-04T17:32:17.241104175Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 4 17:32:17.241253 containerd[1436]: time="2024-09-04T17:32:17.241232915Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 4 17:32:17.241286 containerd[1436]: time="2024-09-04T17:32:17.241256826Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 4 17:32:17.241286 containerd[1436]: time="2024-09-04T17:32:17.241271005Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 4 17:32:17.241369 containerd[1436]: time="2024-09-04T17:32:17.241284261Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 4 17:32:17.241369 containerd[1436]: time="2024-09-04T17:32:17.241300579Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 4 17:32:17.241369 containerd[1436]: time="2024-09-04T17:32:17.241316897Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 4 17:32:17.241369 containerd[1436]: time="2024-09-04T17:32:17.241329817Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 4 17:32:17.241369 containerd[1436]: time="2024-09-04T17:32:17.241342654Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 4 17:32:17.241369 containerd[1436]: time="2024-09-04T17:32:17.241356245Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 4 17:32:17.241369 containerd[1436]: time="2024-09-04T17:32:17.241369962Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 4 17:32:17.241491 containerd[1436]: time="2024-09-04T17:32:17.241382882Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 4 17:32:17.241491 containerd[1436]: time="2024-09-04T17:32:17.241395257Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 4 17:32:17.241491 containerd[1436]: time="2024-09-04T17:32:17.241486538Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 4 17:32:17.242022 containerd[1436]: time="2024-09-04T17:32:17.241940927Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 4 17:32:17.242061 containerd[1436]: time="2024-09-04T17:32:17.242037242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 4 17:32:17.242061 containerd[1436]: time="2024-09-04T17:32:17.242055028Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 4 17:32:17.242103 containerd[1436]: time="2024-09-04T17:32:17.242079065Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 4 17:32:17.242279 containerd[1436]: time="2024-09-04T17:32:17.242258018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 4 17:32:17.242305 containerd[1436]: time="2024-09-04T17:32:17.242282936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 4 17:32:17.242305 containerd[1436]: time="2024-09-04T17:32:17.242296863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 4 17:32:17.242392 containerd[1436]: time="2024-09-04T17:32:17.242374594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 4 17:32:17.242420 containerd[1436]: time="2024-09-04T17:32:17.242396659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 4 17:32:17.242420 containerd[1436]: time="2024-09-04T17:32:17.242411089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 4 17:32:17.242491 containerd[1436]: time="2024-09-04T17:32:17.242471621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 4 17:32:17.242517 containerd[1436]: time="2024-09-04T17:32:17.242494651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 4 17:32:17.242517 containerd[1436]: time="2024-09-04T17:32:17.242509040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 4 17:32:17.242694 containerd[1436]: time="2024-09-04T17:32:17.242674192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 4 17:32:17.242721 containerd[1436]: time="2024-09-04T17:32:17.242699110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 4 17:32:17.242721 containerd[1436]: time="2024-09-04T17:32:17.242713079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 4 17:32:17.242757 containerd[1436]: time="2024-09-04T17:32:17.242726293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 4 17:32:17.242757 containerd[1436]: time="2024-09-04T17:32:17.242739213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 4 17:32:17.242793 containerd[1436]: time="2024-09-04T17:32:17.242758132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 4 17:32:17.242793 containerd[1436]: time="2024-09-04T17:32:17.242771975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 4 17:32:17.242793 containerd[1436]: time="2024-09-04T17:32:17.242783679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 4 17:32:17.243154 containerd[1436]: time="2024-09-04T17:32:17.243095652Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 4 17:32:17.243154 containerd[1436]: time="2024-09-04T17:32:17.243159078Z" level=info msg="Connect containerd service"
Sep 4 17:32:17.243282 containerd[1436]: time="2024-09-04T17:32:17.243187771Z" level=info msg="using legacy CRI server"
Sep 4 17:32:17.243282 containerd[1436]: time="2024-09-04T17:32:17.243194944Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 17:32:17.243355 containerd[1436]: time="2024-09-04T17:32:17.243337906Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 4 17:32:17.244098 containerd[1436]: time="2024-09-04T17:32:17.244068402Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 17:32:17.244158 containerd[1436]: time="2024-09-04T17:32:17.244141100Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 4 17:32:17.244185 containerd[1436]: time="2024-09-04T17:32:17.244163374Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 4 17:32:17.244368 containerd[1436]: time="2024-09-04T17:32:17.244175665Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 4 17:32:17.244405 containerd[1436]: time="2024-09-04T17:32:17.244376306Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 4 17:32:17.244530 containerd[1436]: time="2024-09-04T17:32:17.244341069Z" level=info msg="Start subscribing containerd event"
Sep 4 17:32:17.244557 containerd[1436]: time="2024-09-04T17:32:17.244544311Z" level=info msg="Start recovering state"
Sep 4 17:32:17.244631 containerd[1436]: time="2024-09-04T17:32:17.244616463Z" level=info msg="Start event monitor"
Sep 4 17:32:17.244659 containerd[1436]: time="2024-09-04T17:32:17.244637731Z" level=info msg="Start snapshots syncer"
Sep 4 17:32:17.244659 containerd[1436]: time="2024-09-04T17:32:17.244647254Z" level=info msg="Start cni network conf syncer for default"
Sep 4 17:32:17.244659 containerd[1436]: time="2024-09-04T17:32:17.244654385Z" level=info msg="Start streaming server"
Sep 4 17:32:17.245303 containerd[1436]: time="2024-09-04T17:32:17.245279254Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 17:32:17.245344 containerd[1436]: time="2024-09-04T17:32:17.245329635Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 17:32:17.245527 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 17:32:17.247972 containerd[1436]: time="2024-09-04T17:32:17.247942415Z" level=info msg="containerd successfully booted in 0.047777s"
Sep 4 17:32:17.357356 tar[1432]: linux-arm64/LICENSE
Sep 4 17:32:17.357445 tar[1432]: linux-arm64/README.md
Sep 4 17:32:17.369333 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
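The "Start cri plugin with config" dump above shows the runc runtime configured with Options:map[SystemdCgroup:true] and SandboxImage:registry.k8s.io/pause:3.8. A sketch of the corresponding fragment of containerd's config.toml (containerd 1.7 layout; this is an illustrative reconstruction, not the file actually shipped on this host) would look roughly like:

```toml
# Illustrative containerd config.toml fragment matching the CRI config
# values printed in the log (SystemdCgroup:true, pause:3.8 sandbox image).
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```

The level=error line about cni config load failing is expected at this stage: /etc/cni/net.d is empty until a CNI plugin is installed, which normally happens later when the cluster network add-on is deployed.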
Sep 4 17:32:17.953720 sshd_keygen[1430]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 17:32:17.972996 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 17:32:17.990193 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 17:32:17.996501 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 17:32:17.996717 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 17:32:17.999873 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 17:32:18.015528 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 17:32:18.019106 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 17:32:18.021532 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 4 17:32:18.023313 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 17:32:18.804116 systemd-networkd[1375]: eth0: Gained IPv6LL
Sep 4 17:32:18.810461 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 17:32:18.812324 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 17:32:18.825901 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 4 17:32:18.829069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:32:18.831290 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 17:32:18.848901 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 4 17:32:18.849085 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 4 17:32:18.851285 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 4 17:32:18.857488 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 17:32:19.492923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:32:19.494314 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 17:32:19.495552 systemd[1]: Startup finished in 533ms (kernel) + 4.459s (initrd) + 4.371s (userspace) = 9.363s.
Sep 4 17:32:19.497842 (kubelet)[1520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:32:20.089956 kubelet[1520]: E0904 17:32:20.089825 1520 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:32:20.092479 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:32:20.092647 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:32:23.994551 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 4 17:32:23.995910 systemd[1]: Started sshd@0-10.0.0.119:22-10.0.0.1:41112.service - OpenSSH per-connection server daemon (10.0.0.1:41112).
Sep 4 17:32:24.085306 sshd[1534]: Accepted publickey for core from 10.0.0.1 port 41112 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep 4 17:32:24.089257 sshd[1534]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:32:24.107661 systemd-logind[1418]: New session 1 of user core.
Sep 4 17:32:24.108202 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 17:32:24.116922 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 17:32:24.126726 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 17:32:24.130945 systemd[1]: Starting user@500.service - User Manager for UID 500...
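The kubelet exit above is the expected pre-bootstrap failure: /var/lib/kubelet/config.yaml does not exist until the node is joined to a cluster, since on a kubeadm-managed node that file is written by `kubeadm init` or `kubeadm join`. For illustration only, a minimal KubeletConfiguration of the kind that ends up there looks roughly like the sketch below; the cgroupDriver value is taken from the SystemdCgroup:true runc option visible in the containerd config dump earlier, while the rest are assumptions, not contents recovered from this host:

```yaml
# Illustrative /var/lib/kubelet/config.yaml (normally generated by kubeadm,
# not written by hand). kubelet.service keeps failing until it exists.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# systemd driver matches containerd's runc SystemdCgroup=true setting
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
```

Until then, systemd will keep restarting kubelet.service and logging the same "no such file or directory" error.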
Sep 4 17:32:24.136633 (systemd)[1538]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:32:24.218587 systemd[1538]: Queued start job for default target default.target.
Sep 4 17:32:24.230641 systemd[1538]: Created slice app.slice - User Application Slice.
Sep 4 17:32:24.230672 systemd[1538]: Reached target paths.target - Paths.
Sep 4 17:32:24.230684 systemd[1538]: Reached target timers.target - Timers.
Sep 4 17:32:24.231948 systemd[1538]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 17:32:24.243764 systemd[1538]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 17:32:24.243887 systemd[1538]: Reached target sockets.target - Sockets.
Sep 4 17:32:24.243906 systemd[1538]: Reached target basic.target - Basic System.
Sep 4 17:32:24.243944 systemd[1538]: Reached target default.target - Main User Target.
Sep 4 17:32:24.243984 systemd[1538]: Startup finished in 101ms.
Sep 4 17:32:24.244216 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 17:32:24.245648 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 17:32:24.307081 systemd[1]: Started sshd@1-10.0.0.119:22-10.0.0.1:41116.service - OpenSSH per-connection server daemon (10.0.0.1:41116).
Sep 4 17:32:24.345893 sshd[1549]: Accepted publickey for core from 10.0.0.1 port 41116 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep 4 17:32:24.347183 sshd[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:32:24.351763 systemd-logind[1418]: New session 2 of user core.
Sep 4 17:32:24.358814 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 17:32:24.416002 sshd[1549]: pam_unix(sshd:session): session closed for user core
Sep 4 17:32:24.432483 systemd[1]: sshd@1-10.0.0.119:22-10.0.0.1:41116.service: Deactivated successfully.
Sep 4 17:32:24.435085 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 17:32:24.439251 systemd-logind[1418]: Session 2 logged out. Waiting for processes to exit.
Sep 4 17:32:24.455478 systemd[1]: Started sshd@2-10.0.0.119:22-10.0.0.1:41126.service - OpenSSH per-connection server daemon (10.0.0.1:41126).
Sep 4 17:32:24.456391 systemd-logind[1418]: Removed session 2.
Sep 4 17:32:24.492471 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 41126 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep 4 17:32:24.493851 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:32:24.498656 systemd-logind[1418]: New session 3 of user core.
Sep 4 17:32:24.507815 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 17:32:24.561957 sshd[1556]: pam_unix(sshd:session): session closed for user core
Sep 4 17:32:24.575253 systemd[1]: sshd@2-10.0.0.119:22-10.0.0.1:41126.service: Deactivated successfully.
Sep 4 17:32:24.576797 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 17:32:24.579664 systemd-logind[1418]: Session 3 logged out. Waiting for processes to exit.
Sep 4 17:32:24.580316 systemd[1]: Started sshd@3-10.0.0.119:22-10.0.0.1:41130.service - OpenSSH per-connection server daemon (10.0.0.1:41130).
Sep 4 17:32:24.581126 systemd-logind[1418]: Removed session 3.
Sep 4 17:32:24.621787 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 41130 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep 4 17:32:24.623195 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:32:24.627621 systemd-logind[1418]: New session 4 of user core.
Sep 4 17:32:24.639800 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 17:32:24.694653 sshd[1563]: pam_unix(sshd:session): session closed for user core
Sep 4 17:32:24.703233 systemd[1]: sshd@3-10.0.0.119:22-10.0.0.1:41130.service: Deactivated successfully.
Sep 4 17:32:24.706115 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 17:32:24.707468 systemd-logind[1418]: Session 4 logged out. Waiting for processes to exit.
Sep 4 17:32:24.708795 systemd[1]: Started sshd@4-10.0.0.119:22-10.0.0.1:41134.service - OpenSSH per-connection server daemon (10.0.0.1:41134).
Sep 4 17:32:24.709780 systemd-logind[1418]: Removed session 4.
Sep 4 17:32:24.748617 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 41134 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep 4 17:32:24.750356 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:32:24.754821 systemd-logind[1418]: New session 5 of user core.
Sep 4 17:32:24.765837 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 17:32:24.828261 sudo[1573]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 17:32:24.828500 sudo[1573]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:32:24.843528 sudo[1573]: pam_unix(sudo:session): session closed for user root
Sep 4 17:32:24.845954 sshd[1570]: pam_unix(sshd:session): session closed for user core
Sep 4 17:32:24.852126 systemd[1]: sshd@4-10.0.0.119:22-10.0.0.1:41134.service: Deactivated successfully.
Sep 4 17:32:24.854333 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 17:32:24.855642 systemd-logind[1418]: Session 5 logged out. Waiting for processes to exit.
Sep 4 17:32:24.856947 systemd[1]: Started sshd@5-10.0.0.119:22-10.0.0.1:41144.service - OpenSSH per-connection server daemon (10.0.0.1:41144).
Sep 4 17:32:24.857798 systemd-logind[1418]: Removed session 5.
Sep 4 17:32:24.897901 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 41144 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep 4 17:32:24.899695 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:32:24.905989 systemd-logind[1418]: New session 6 of user core.
Sep 4 17:32:24.915186 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 17:32:24.970538 sudo[1582]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:32:24.970963 sudo[1582]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:32:24.974356 sudo[1582]: pam_unix(sudo:session): session closed for user root Sep 4 17:32:24.979363 sudo[1581]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:32:24.979724 sudo[1581]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:32:25.001018 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:32:25.002581 auditctl[1585]: No rules Sep 4 17:32:25.002954 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:32:25.003132 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:32:25.007006 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:32:25.031106 augenrules[1603]: No rules Sep 4 17:32:25.032437 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:32:25.033926 sudo[1581]: pam_unix(sudo:session): session closed for user root Sep 4 17:32:25.035636 sshd[1578]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:25.047043 systemd[1]: sshd@5-10.0.0.119:22-10.0.0.1:41144.service: Deactivated successfully. Sep 4 17:32:25.049554 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:32:25.051624 systemd-logind[1418]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:32:25.067998 systemd[1]: Started sshd@6-10.0.0.119:22-10.0.0.1:41158.service - OpenSSH per-connection server daemon (10.0.0.1:41158). Sep 4 17:32:25.069185 systemd-logind[1418]: Removed session 6. 
Sep 4 17:32:25.102987 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 41158 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:32:25.104349 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:32:25.111407 systemd-logind[1418]: New session 7 of user core. Sep 4 17:32:25.125574 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:32:25.178502 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:32:25.178785 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:32:25.311930 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:32:25.312359 (dockerd)[1624]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:32:25.554651 dockerd[1624]: time="2024-09-04T17:32:25.554524009Z" level=info msg="Starting up" Sep 4 17:32:25.665898 dockerd[1624]: time="2024-09-04T17:32:25.665749612Z" level=info msg="Loading containers: start." Sep 4 17:32:25.773627 kernel: Initializing XFRM netlink socket Sep 4 17:32:25.850535 systemd-networkd[1375]: docker0: Link UP Sep 4 17:32:25.869016 dockerd[1624]: time="2024-09-04T17:32:25.868490286Z" level=info msg="Loading containers: done." Sep 4 17:32:25.938550 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2609118232-merged.mount: Deactivated successfully. 
Sep 4 17:32:25.942049 dockerd[1624]: time="2024-09-04T17:32:25.941988551Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:32:25.942214 dockerd[1624]: time="2024-09-04T17:32:25.942182870Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Sep 4 17:32:25.942325 dockerd[1624]: time="2024-09-04T17:32:25.942300893Z" level=info msg="Daemon has completed initialization" Sep 4 17:32:25.977281 dockerd[1624]: time="2024-09-04T17:32:25.977181765Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:32:25.977292 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:32:26.579542 containerd[1436]: time="2024-09-04T17:32:26.579498538Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\"" Sep 4 17:32:27.318848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3410315910.mount: Deactivated successfully. 
Sep 4 17:32:28.979530 containerd[1436]: time="2024-09-04T17:32:28.979285703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:28.980596 containerd[1436]: time="2024-09-04T17:32:28.980442047Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.4: active requests=0, bytes read=29943742" Sep 4 17:32:28.981457 containerd[1436]: time="2024-09-04T17:32:28.981408324Z" level=info msg="ImageCreate event name:\"sha256:4fb024d2ca524db9b4b792ebc761ca44654c17ab90984a968b5276a64dbcc1ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:28.984288 containerd[1436]: time="2024-09-04T17:32:28.984251698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:32:28.985534 containerd[1436]: time="2024-09-04T17:32:28.985492297Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.4\" with image id \"sha256:4fb024d2ca524db9b4b792ebc761ca44654c17ab90984a968b5276a64dbcc1ff\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\", size \"29940540\" in 2.405947736s" Sep 4 17:32:28.985534 containerd[1436]: time="2024-09-04T17:32:28.985533272Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\" returns image reference \"sha256:4fb024d2ca524db9b4b792ebc761ca44654c17ab90984a968b5276a64dbcc1ff\"" Sep 4 17:32:29.006365 containerd[1436]: time="2024-09-04T17:32:29.006330400Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\"" Sep 4 17:32:30.343031 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Sep 4 17:32:30.355961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:32:30.461220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:32:30.466911 (kubelet)[1838]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:32:30.525560 kubelet[1838]: E0904 17:32:30.525494 1838 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:32:30.529046 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:32:30.529211 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:32:30.751633 containerd[1436]: time="2024-09-04T17:32:30.751464094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:30.753854 containerd[1436]: time="2024-09-04T17:32:30.753545776Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.4: active requests=0, bytes read=26881134"
Sep 4 17:32:30.757599 containerd[1436]: time="2024-09-04T17:32:30.757524931Z" level=info msg="ImageCreate event name:\"sha256:4316ad972d94918481885d608f381e51d1e8d84458354f6240668016b5e9d6f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:30.764060 containerd[1436]: time="2024-09-04T17:32:30.763988074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:30.765281 containerd[1436]: time="2024-09-04T17:32:30.765181958Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.4\" with image id \"sha256:4316ad972d94918481885d608f381e51d1e8d84458354f6240668016b5e9d6f5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\", size \"28368399\" in 1.758813845s"
Sep 4 17:32:30.765281 containerd[1436]: time="2024-09-04T17:32:30.765223955Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\" returns image reference \"sha256:4316ad972d94918481885d608f381e51d1e8d84458354f6240668016b5e9d6f5\""
Sep 4 17:32:30.789356 containerd[1436]: time="2024-09-04T17:32:30.789106595Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\""
Sep 4 17:32:33.062819 containerd[1436]: time="2024-09-04T17:32:33.062746140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:33.063963 containerd[1436]: time="2024-09-04T17:32:33.063914100Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.4: active requests=0, bytes read=16154065"
Sep 4 17:32:33.065061 containerd[1436]: time="2024-09-04T17:32:33.065002404Z" level=info msg="ImageCreate event name:\"sha256:b0931aa794b8d14cc252b442a71c1d3e87f4781c2bbae23ebb37d18c9ee9acfe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:33.068809 containerd[1436]: time="2024-09-04T17:32:33.068776469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:33.070120 containerd[1436]: time="2024-09-04T17:32:33.069984297Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.4\" with image id \"sha256:b0931aa794b8d14cc252b442a71c1d3e87f4781c2bbae23ebb37d18c9ee9acfe\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\", size \"17641348\" in 2.280839283s"
Sep 4 17:32:33.070120 containerd[1436]: time="2024-09-04T17:32:33.070019660Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\" returns image reference \"sha256:b0931aa794b8d14cc252b442a71c1d3e87f4781c2bbae23ebb37d18c9ee9acfe\""
Sep 4 17:32:33.091992 containerd[1436]: time="2024-09-04T17:32:33.091954995Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\""
Sep 4 17:32:34.096189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3193250640.mount: Deactivated successfully.
Sep 4 17:32:34.445568 containerd[1436]: time="2024-09-04T17:32:34.445433283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:34.446357 containerd[1436]: time="2024-09-04T17:32:34.446251822Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.4: active requests=0, bytes read=25646049"
Sep 4 17:32:34.447121 containerd[1436]: time="2024-09-04T17:32:34.447037758Z" level=info msg="ImageCreate event name:\"sha256:7fdda55d346bc23daec633f684e5ec2c91bd1469a5e006bdf45d15fbeb8dacdc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:34.449025 containerd[1436]: time="2024-09-04T17:32:34.448983623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:34.449802 containerd[1436]: time="2024-09-04T17:32:34.449771609Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.4\" with image id \"sha256:7fdda55d346bc23daec633f684e5ec2c91bd1469a5e006bdf45d15fbeb8dacdc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\", size \"25645066\" in 1.357777195s"
Sep 4 17:32:34.449847 containerd[1436]: time="2024-09-04T17:32:34.449807549Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\" returns image reference \"sha256:7fdda55d346bc23daec633f684e5ec2c91bd1469a5e006bdf45d15fbeb8dacdc\""
Sep 4 17:32:34.471296 containerd[1436]: time="2024-09-04T17:32:34.471222072Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Sep 4 17:32:35.111777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount827879755.mount: Deactivated successfully.
Sep 4 17:32:36.093199 containerd[1436]: time="2024-09-04T17:32:36.093139160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:36.093739 containerd[1436]: time="2024-09-04T17:32:36.093701479Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Sep 4 17:32:36.094533 containerd[1436]: time="2024-09-04T17:32:36.094503839Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:36.097824 containerd[1436]: time="2024-09-04T17:32:36.097787645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:36.098888 containerd[1436]: time="2024-09-04T17:32:36.098846871Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.627554457s"
Sep 4 17:32:36.098934 containerd[1436]: time="2024-09-04T17:32:36.098891321Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Sep 4 17:32:36.118945 containerd[1436]: time="2024-09-04T17:32:36.118902981Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Sep 4 17:32:36.561858 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1123594021.mount: Deactivated successfully.
Sep 4 17:32:36.567311 containerd[1436]: time="2024-09-04T17:32:36.566697869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:36.567311 containerd[1436]: time="2024-09-04T17:32:36.567228988Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Sep 4 17:32:36.568039 containerd[1436]: time="2024-09-04T17:32:36.568006613Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:36.570499 containerd[1436]: time="2024-09-04T17:32:36.570468624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:36.571216 containerd[1436]: time="2024-09-04T17:32:36.571189832Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 452.248343ms"
Sep 4 17:32:36.571256 containerd[1436]: time="2024-09-04T17:32:36.571221393Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Sep 4 17:32:36.590094 containerd[1436]: time="2024-09-04T17:32:36.590051236Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Sep 4 17:32:37.265205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2607274172.mount: Deactivated successfully.
Sep 4 17:32:39.394355 containerd[1436]: time="2024-09-04T17:32:39.394252265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:39.394831 containerd[1436]: time="2024-09-04T17:32:39.394768314Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474"
Sep 4 17:32:39.395913 containerd[1436]: time="2024-09-04T17:32:39.395875404Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:39.402168 containerd[1436]: time="2024-09-04T17:32:39.402121207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:32:39.403070 containerd[1436]: time="2024-09-04T17:32:39.402895160Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.812805146s"
Sep 4 17:32:39.403070 containerd[1436]: time="2024-09-04T17:32:39.402929528Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Sep 4 17:32:40.779500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 17:32:40.788790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:32:40.888451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:32:40.892565 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:32:40.937070 kubelet[2065]: E0904 17:32:40.937014 2065 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:32:40.940162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:32:40.940418 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:32:43.687451 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:32:43.703912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:32:43.732842 systemd[1]: Reloading requested from client PID 2080 ('systemctl') (unit session-7.scope)...
Sep 4 17:32:43.732871 systemd[1]: Reloading...
Sep 4 17:32:43.805625 zram_generator::config[2120]: No configuration found.
Sep 4 17:32:43.957841 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:32:44.012906 systemd[1]: Reloading finished in 279 ms.
Sep 4 17:32:44.058858 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:32:44.063275 systemd[1]: kubelet.service: Deactivated successfully.
Sep 4 17:32:44.064629 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:32:44.074941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:32:44.170730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:32:44.175710 (kubelet)[2164]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 4 17:32:44.219083 kubelet[2164]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:32:44.219083 kubelet[2164]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 4 17:32:44.219083 kubelet[2164]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 4 17:32:44.219858 kubelet[2164]: I0904 17:32:44.219812 2164 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 4 17:32:45.052375 kubelet[2164]: I0904 17:32:45.052327 2164 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Sep 4 17:32:45.052375 kubelet[2164]: I0904 17:32:45.052359 2164 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 4 17:32:45.052621 kubelet[2164]: I0904 17:32:45.052606 2164 server.go:927] "Client rotation is on, will bootstrap in background"
Sep 4 17:32:45.104594 kubelet[2164]: I0904 17:32:45.104502 2164 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 4 17:32:45.105758 kubelet[2164]: E0904 17:32:45.105726 2164 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.119:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.119:6443: connect: connection refused
Sep 4 17:32:45.118021 kubelet[2164]: I0904 17:32:45.117984 2164 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 4 17:32:45.119250 kubelet[2164]: I0904 17:32:45.119189 2164 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 4 17:32:45.119424 kubelet[2164]: I0904 17:32:45.119244 2164 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Sep 4 17:32:45.119651 kubelet[2164]: I0904 17:32:45.119632 2164 topology_manager.go:138] "Creating topology manager with none policy"
Sep 4 17:32:45.119651 kubelet[2164]: I0904 17:32:45.119646 2164 container_manager_linux.go:301] "Creating device plugin manager"
Sep 4 17:32:45.120166 kubelet[2164]: I0904 17:32:45.120143 2164 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:32:45.126263 kubelet[2164]: I0904 17:32:45.125569 2164 kubelet.go:400] "Attempting to sync node with API server"
Sep 4 17:32:45.126305 kubelet[2164]: I0904 17:32:45.126270 2164 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 4 17:32:45.126646 kubelet[2164]: W0904 17:32:45.126563 2164 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused
Sep 4 17:32:45.126673 kubelet[2164]: E0904 17:32:45.126655 2164 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused
Sep 4 17:32:45.126966 kubelet[2164]: I0904 17:32:45.126945 2164 kubelet.go:312] "Adding apiserver pod source"
Sep 4 17:32:45.127221 kubelet[2164]: I0904 17:32:45.127205 2164 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 4 17:32:45.130027 kubelet[2164]: I0904 17:32:45.129993 2164 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Sep 4 17:32:45.130152 kubelet[2164]: W0904 17:32:45.130102 2164 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused
Sep 4 17:32:45.130152 kubelet[2164]: E0904 17:32:45.130152 2164 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused
Sep 4 17:32:45.130430 kubelet[2164]: I0904 17:32:45.130403 2164 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 4 17:32:45.130577 kubelet[2164]: W0904 17:32:45.130557 2164 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 4 17:32:45.137281 kubelet[2164]: I0904 17:32:45.137243 2164 server.go:1264] "Started kubelet"
Sep 4 17:32:45.140129 kubelet[2164]: I0904 17:32:45.138592 2164 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 4 17:32:45.140129 kubelet[2164]: E0904 17:32:45.139391 2164 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.119:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.119:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17f21ae1fc2edf0d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-09-04 17:32:45.137215245 +0000 UTC m=+0.958020210,LastTimestamp:2024-09-04 17:32:45.137215245 +0000 UTC m=+0.958020210,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 4 17:32:45.140129 kubelet[2164]: I0904 17:32:45.139618 2164 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 4 17:32:45.140129 kubelet[2164]: I0904 17:32:45.139653 2164 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 4 17:32:45.140129 kubelet[2164]: I0904 17:32:45.139844 2164 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 4 17:32:45.140129 kubelet[2164]: I0904 17:32:45.139926 2164 volume_manager.go:291] "Starting Kubelet Volume Manager"
Sep 4 17:32:45.140129 kubelet[2164]: I0904 17:32:45.140024 2164 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Sep 4 17:32:45.140129 kubelet[2164]: E0904 17:32:45.140110 2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="200ms"
Sep 4 17:32:45.141143 kubelet[2164]: W0904 17:32:45.140609 2164 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused
Sep 4 17:32:45.141143 kubelet[2164]: E0904 17:32:45.140652 2164 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused
Sep 4 17:32:45.141143 kubelet[2164]: I0904 17:32:45.140675 2164 server.go:455] "Adding debug handlers to kubelet server"
Sep 4 17:32:45.141143 kubelet[2164]: I0904 17:32:45.141109 2164 reconciler.go:26] "Reconciler: start to sync state"
Sep 4 17:32:45.141798 kubelet[2164]: I0904 17:32:45.141770 2164 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 4 17:32:45.143486 kubelet[2164]: I0904 17:32:45.143455 2164 factory.go:221] Registration of the containerd container factory successfully
Sep 4 17:32:45.143486 kubelet[2164]: I0904 17:32:45.143477 2164 factory.go:221] Registration of the systemd container factory successfully
Sep 4 17:32:45.148346 kubelet[2164]: E0904 17:32:45.148313 2164 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 4 17:32:45.156296 kubelet[2164]: I0904 17:32:45.156139 2164 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 4 17:32:45.157443 kubelet[2164]: I0904 17:32:45.157407 2164 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 4 17:32:45.157771 kubelet[2164]: I0904 17:32:45.157745 2164 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 4 17:32:45.157817 kubelet[2164]: I0904 17:32:45.157781 2164 kubelet.go:2337] "Starting kubelet main sync loop"
Sep 4 17:32:45.157859 kubelet[2164]: E0904 17:32:45.157830 2164 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 4 17:32:45.159239 kubelet[2164]: W0904 17:32:45.159155 2164 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused
Sep 4 17:32:45.159320 kubelet[2164]: E0904 17:32:45.159255 2164 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused
Sep 4 17:32:45.159387 kubelet[2164]: I0904 17:32:45.159369 2164 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 4 17:32:45.159387 kubelet[2164]: I0904 17:32:45.159386 2164 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 4 17:32:45.159490 kubelet[2164]: I0904 17:32:45.159408 2164 state_mem.go:36] "Initialized new in-memory state store"
Sep 4 17:32:45.240982 kubelet[2164]: I0904 17:32:45.240940 2164 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Sep 4 17:32:45.241459 kubelet[2164]: E0904 17:32:45.241426 2164 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost"
Sep 4 17:32:45.258647 kubelet[2164]: E0904 17:32:45.258613 2164 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 4 17:32:45.284689 kubelet[2164]: I0904 17:32:45.284649 2164 policy_none.go:49] "None policy: Start"
Sep 4 17:32:45.285403 kubelet[2164]: I0904 17:32:45.285371 2164 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 4 17:32:45.285403 kubelet[2164]: I0904 17:32:45.285405 2164 state_mem.go:35] "Initializing new in-memory state store"
Sep 4 17:32:45.291234 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 4 17:32:45.305510 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 4 17:32:45.308924 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 4 17:32:45.320428 kubelet[2164]: I0904 17:32:45.320361 2164 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:32:45.320787 kubelet[2164]: I0904 17:32:45.320596 2164 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 17:32:45.320787 kubelet[2164]: I0904 17:32:45.320709 2164 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:32:45.322359 kubelet[2164]: E0904 17:32:45.322328 2164 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 4 17:32:45.340867 kubelet[2164]: E0904 17:32:45.340822 2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="400ms" Sep 4 17:32:45.442526 kubelet[2164]: I0904 17:32:45.442484 2164 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:32:45.442881 kubelet[2164]: E0904 17:32:45.442846 2164 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Sep 4 17:32:45.459136 kubelet[2164]: I0904 17:32:45.459066 2164 topology_manager.go:215] "Topology Admit Handler" podUID="a75cc901e91bc66fd9615154dc537be7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:32:45.460201 kubelet[2164]: I0904 17:32:45.460174 2164 topology_manager.go:215] "Topology Admit Handler" podUID="ab09c4a38f15561465451a45cd787c5b" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:32:45.461055 kubelet[2164]: I0904 17:32:45.461013 2164 topology_manager.go:215] "Topology Admit Handler" podUID="248a9470b4d6a24ba218b8c3517d29a0" 
podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:32:45.467611 systemd[1]: Created slice kubepods-burstable-poda75cc901e91bc66fd9615154dc537be7.slice - libcontainer container kubepods-burstable-poda75cc901e91bc66fd9615154dc537be7.slice. Sep 4 17:32:45.482049 systemd[1]: Created slice kubepods-burstable-podab09c4a38f15561465451a45cd787c5b.slice - libcontainer container kubepods-burstable-podab09c4a38f15561465451a45cd787c5b.slice. Sep 4 17:32:45.494810 systemd[1]: Created slice kubepods-burstable-pod248a9470b4d6a24ba218b8c3517d29a0.slice - libcontainer container kubepods-burstable-pod248a9470b4d6a24ba218b8c3517d29a0.slice. Sep 4 17:32:45.542662 kubelet[2164]: I0904 17:32:45.542617 2164 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:32:45.542662 kubelet[2164]: I0904 17:32:45.542657 2164 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:32:45.542737 kubelet[2164]: I0904 17:32:45.542681 2164 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/248a9470b4d6a24ba218b8c3517d29a0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"248a9470b4d6a24ba218b8c3517d29a0\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:32:45.542737 kubelet[2164]: I0904 17:32:45.542696 2164 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/248a9470b4d6a24ba218b8c3517d29a0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"248a9470b4d6a24ba218b8c3517d29a0\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:32:45.542737 kubelet[2164]: I0904 17:32:45.542712 2164 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:32:45.542737 kubelet[2164]: I0904 17:32:45.542728 2164 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:32:45.542832 kubelet[2164]: I0904 17:32:45.542745 2164 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:32:45.542832 kubelet[2164]: I0904 17:32:45.542761 2164 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab09c4a38f15561465451a45cd787c5b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ab09c4a38f15561465451a45cd787c5b\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:32:45.542832 kubelet[2164]: I0904 17:32:45.542777 2164 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/248a9470b4d6a24ba218b8c3517d29a0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"248a9470b4d6a24ba218b8c3517d29a0\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:32:45.742021 kubelet[2164]: E0904 17:32:45.741907 2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="800ms" Sep 4 17:32:45.780403 kubelet[2164]: E0904 17:32:45.780313 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:45.780975 containerd[1436]: time="2024-09-04T17:32:45.780936915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a75cc901e91bc66fd9615154dc537be7,Namespace:kube-system,Attempt:0,}" Sep 4 17:32:45.793812 kubelet[2164]: E0904 17:32:45.793768 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:45.794963 containerd[1436]: time="2024-09-04T17:32:45.794714539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ab09c4a38f15561465451a45cd787c5b,Namespace:kube-system,Attempt:0,}" Sep 4 17:32:45.796990 kubelet[2164]: E0904 17:32:45.796964 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:45.797342 containerd[1436]: time="2024-09-04T17:32:45.797304336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:248a9470b4d6a24ba218b8c3517d29a0,Namespace:kube-system,Attempt:0,}" Sep 4 
17:32:45.844943 kubelet[2164]: I0904 17:32:45.844688 2164 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:32:45.845064 kubelet[2164]: E0904 17:32:45.845003 2164 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Sep 4 17:32:46.140088 kubelet[2164]: W0904 17:32:46.139940 2164 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Sep 4 17:32:46.140088 kubelet[2164]: E0904 17:32:46.140008 2164 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.119:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Sep 4 17:32:46.144433 kubelet[2164]: E0904 17:32:46.144331 2164 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.119:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.119:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17f21ae1fc2edf0d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-09-04 17:32:45.137215245 +0000 UTC m=+0.958020210,LastTimestamp:2024-09-04 17:32:45.137215245 +0000 UTC m=+0.958020210,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 4 17:32:46.334993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount786816216.mount: Deactivated successfully. 
Sep 4 17:32:46.340454 containerd[1436]: time="2024-09-04T17:32:46.340034398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:32:46.341517 containerd[1436]: time="2024-09-04T17:32:46.341476939Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:32:46.342260 containerd[1436]: time="2024-09-04T17:32:46.342222574Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:32:46.343595 containerd[1436]: time="2024-09-04T17:32:46.343107470Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:32:46.343595 containerd[1436]: time="2024-09-04T17:32:46.343409777Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 4 17:32:46.345474 containerd[1436]: time="2024-09-04T17:32:46.344022958Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:32:46.345474 containerd[1436]: time="2024-09-04T17:32:46.344285904Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:32:46.346430 containerd[1436]: time="2024-09-04T17:32:46.346390275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:32:46.349729 
containerd[1436]: time="2024-09-04T17:32:46.349691378Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 552.304348ms" Sep 4 17:32:46.351300 containerd[1436]: time="2024-09-04T17:32:46.351188414Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 570.148903ms" Sep 4 17:32:46.353898 containerd[1436]: time="2024-09-04T17:32:46.353858799Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 559.036496ms" Sep 4 17:32:46.487762 kubelet[2164]: W0904 17:32:46.487607 2164 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Sep 4 17:32:46.487762 kubelet[2164]: E0904 17:32:46.487680 2164 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.119:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Sep 4 17:32:46.488125 kubelet[2164]: W0904 17:32:46.487887 2164 reflector.go:547] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Sep 4 17:32:46.488125 kubelet[2164]: E0904 17:32:46.487920 2164 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.119:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Sep 4 17:32:46.522816 kubelet[2164]: W0904 17:32:46.522758 2164 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Sep 4 17:32:46.522816 kubelet[2164]: E0904 17:32:46.522807 2164 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.119:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.119:6443: connect: connection refused Sep 4 17:32:46.543497 kubelet[2164]: E0904 17:32:46.543416 2164 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.119:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.119:6443: connect: connection refused" interval="1.6s" Sep 4 17:32:46.548447 containerd[1436]: time="2024-09-04T17:32:46.544066230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:32:46.548447 containerd[1436]: time="2024-09-04T17:32:46.544203089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:46.548447 containerd[1436]: time="2024-09-04T17:32:46.544223750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:32:46.548447 containerd[1436]: time="2024-09-04T17:32:46.544236923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:46.549226 containerd[1436]: time="2024-09-04T17:32:46.549121750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:32:46.549333 containerd[1436]: time="2024-09-04T17:32:46.549185655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:46.549333 containerd[1436]: time="2024-09-04T17:32:46.549243593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:32:46.549333 containerd[1436]: time="2024-09-04T17:32:46.549260971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:46.551160 containerd[1436]: time="2024-09-04T17:32:46.550980673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:32:46.551160 containerd[1436]: time="2024-09-04T17:32:46.551042135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:46.551160 containerd[1436]: time="2024-09-04T17:32:46.551067280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:32:46.551160 containerd[1436]: time="2024-09-04T17:32:46.551080173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:32:46.571787 systemd[1]: Started cri-containerd-e60d8acd78aa57458bcf12b0379ec7f4e84aef5fb14f72cbb20895da880e4d36.scope - libcontainer container e60d8acd78aa57458bcf12b0379ec7f4e84aef5fb14f72cbb20895da880e4d36. Sep 4 17:32:46.576208 systemd[1]: Started cri-containerd-3d29f1585d995e49bedf2768fdc8c7baba1d263e1669810d5d8df7c56b59aebf.scope - libcontainer container 3d29f1585d995e49bedf2768fdc8c7baba1d263e1669810d5d8df7c56b59aebf. Sep 4 17:32:46.577522 systemd[1]: Started cri-containerd-73419105fd79a50132348c1b071da1c9441cf374d907792bb3fe30c427f4dfb2.scope - libcontainer container 73419105fd79a50132348c1b071da1c9441cf374d907792bb3fe30c427f4dfb2. Sep 4 17:32:46.611820 containerd[1436]: time="2024-09-04T17:32:46.611771158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:248a9470b4d6a24ba218b8c3517d29a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"e60d8acd78aa57458bcf12b0379ec7f4e84aef5fb14f72cbb20895da880e4d36\"" Sep 4 17:32:46.617605 kubelet[2164]: E0904 17:32:46.616691 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:46.620654 containerd[1436]: time="2024-09-04T17:32:46.620497596Z" level=info msg="CreateContainer within sandbox \"e60d8acd78aa57458bcf12b0379ec7f4e84aef5fb14f72cbb20895da880e4d36\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:32:46.620830 containerd[1436]: time="2024-09-04T17:32:46.620761703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ab09c4a38f15561465451a45cd787c5b,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"73419105fd79a50132348c1b071da1c9441cf374d907792bb3fe30c427f4dfb2\"" Sep 4 17:32:46.621373 kubelet[2164]: E0904 17:32:46.621330 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:46.625558 containerd[1436]: time="2024-09-04T17:32:46.625518921Z" level=info msg="CreateContainer within sandbox \"73419105fd79a50132348c1b071da1c9441cf374d907792bb3fe30c427f4dfb2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:32:46.626067 containerd[1436]: time="2024-09-04T17:32:46.625825752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a75cc901e91bc66fd9615154dc537be7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d29f1585d995e49bedf2768fdc8c7baba1d263e1669810d5d8df7c56b59aebf\"" Sep 4 17:32:46.626482 kubelet[2164]: E0904 17:32:46.626456 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:46.628843 containerd[1436]: time="2024-09-04T17:32:46.628541742Z" level=info msg="CreateContainer within sandbox \"3d29f1585d995e49bedf2768fdc8c7baba1d263e1669810d5d8df7c56b59aebf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:32:46.641991 containerd[1436]: time="2024-09-04T17:32:46.641947399Z" level=info msg="CreateContainer within sandbox \"e60d8acd78aa57458bcf12b0379ec7f4e84aef5fb14f72cbb20895da880e4d36\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fcc2080352fd3bc3508494d25d0fb9f404f1db7ca90ada38c95f1c237e181c7d\"" Sep 4 17:32:46.642700 containerd[1436]: time="2024-09-04T17:32:46.642676777Z" level=info msg="StartContainer for \"fcc2080352fd3bc3508494d25d0fb9f404f1db7ca90ada38c95f1c237e181c7d\"" Sep 4 17:32:46.647125 kubelet[2164]: I0904 17:32:46.647094 2164 
kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:32:46.647652 kubelet[2164]: E0904 17:32:46.647622 2164 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.119:6443/api/v1/nodes\": dial tcp 10.0.0.119:6443: connect: connection refused" node="localhost" Sep 4 17:32:46.649562 containerd[1436]: time="2024-09-04T17:32:46.649516184Z" level=info msg="CreateContainer within sandbox \"73419105fd79a50132348c1b071da1c9441cf374d907792bb3fe30c427f4dfb2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b5e1aebf4aa5ca6cfc775e7575b3c64a58069a7fc9be91e824079b5eb8720814\"" Sep 4 17:32:46.650229 containerd[1436]: time="2024-09-04T17:32:46.650201998Z" level=info msg="StartContainer for \"b5e1aebf4aa5ca6cfc775e7575b3c64a58069a7fc9be91e824079b5eb8720814\"" Sep 4 17:32:46.652464 containerd[1436]: time="2024-09-04T17:32:46.652408513Z" level=info msg="CreateContainer within sandbox \"3d29f1585d995e49bedf2768fdc8c7baba1d263e1669810d5d8df7c56b59aebf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"36ccf7529124a20da1bf34d67756b218b96783fcbc0d29f90d4276513d3d13e1\"" Sep 4 17:32:46.654718 containerd[1436]: time="2024-09-04T17:32:46.654635128Z" level=info msg="StartContainer for \"36ccf7529124a20da1bf34d67756b218b96783fcbc0d29f90d4276513d3d13e1\"" Sep 4 17:32:46.672766 systemd[1]: Started cri-containerd-fcc2080352fd3bc3508494d25d0fb9f404f1db7ca90ada38c95f1c237e181c7d.scope - libcontainer container fcc2080352fd3bc3508494d25d0fb9f404f1db7ca90ada38c95f1c237e181c7d. Sep 4 17:32:46.677170 systemd[1]: Started cri-containerd-36ccf7529124a20da1bf34d67756b218b96783fcbc0d29f90d4276513d3d13e1.scope - libcontainer container 36ccf7529124a20da1bf34d67756b218b96783fcbc0d29f90d4276513d3d13e1. 
Sep 4 17:32:46.678472 systemd[1]: Started cri-containerd-b5e1aebf4aa5ca6cfc775e7575b3c64a58069a7fc9be91e824079b5eb8720814.scope - libcontainer container b5e1aebf4aa5ca6cfc775e7575b3c64a58069a7fc9be91e824079b5eb8720814. Sep 4 17:32:46.752990 containerd[1436]: time="2024-09-04T17:32:46.752877823Z" level=info msg="StartContainer for \"b5e1aebf4aa5ca6cfc775e7575b3c64a58069a7fc9be91e824079b5eb8720814\" returns successfully" Sep 4 17:32:46.753105 containerd[1436]: time="2024-09-04T17:32:46.752896922Z" level=info msg="StartContainer for \"fcc2080352fd3bc3508494d25d0fb9f404f1db7ca90ada38c95f1c237e181c7d\" returns successfully" Sep 4 17:32:46.753105 containerd[1436]: time="2024-09-04T17:32:46.752903689Z" level=info msg="StartContainer for \"36ccf7529124a20da1bf34d67756b218b96783fcbc0d29f90d4276513d3d13e1\" returns successfully" Sep 4 17:32:47.171522 kubelet[2164]: E0904 17:32:47.171421 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:47.172793 kubelet[2164]: E0904 17:32:47.172767 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:47.173931 kubelet[2164]: E0904 17:32:47.173872 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:48.176091 kubelet[2164]: E0904 17:32:48.175967 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:48.248906 kubelet[2164]: I0904 17:32:48.248823 2164 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:32:49.210482 kubelet[2164]: E0904 17:32:49.210415 2164 nodelease.go:49] 
"Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 4 17:32:49.251509 kubelet[2164]: I0904 17:32:49.251471 2164 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Sep 4 17:32:49.948810 kubelet[2164]: E0904 17:32:49.948770 2164 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 4 17:32:49.949278 kubelet[2164]: E0904 17:32:49.949257 2164 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:50.133342 kubelet[2164]: I0904 17:32:50.132980 2164 apiserver.go:52] "Watching apiserver" Sep 4 17:32:50.140815 kubelet[2164]: I0904 17:32:50.140644 2164 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 17:32:51.183919 systemd[1]: Reloading requested from client PID 2442 ('systemctl') (unit session-7.scope)... Sep 4 17:32:51.183937 systemd[1]: Reloading... Sep 4 17:32:51.250672 zram_generator::config[2482]: No configuration found. Sep 4 17:32:51.356133 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:32:51.424272 systemd[1]: Reloading finished in 239 ms. Sep 4 17:32:51.465425 kubelet[2164]: I0904 17:32:51.465133 2164 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:32:51.465286 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:32:51.474688 systemd[1]: kubelet.service: Deactivated successfully. 
Sep 4 17:32:51.475664 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:32:51.475804 systemd[1]: kubelet.service: Consumed 1.367s CPU time, 116.0M memory peak, 0B memory swap peak. Sep 4 17:32:51.486457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:32:51.587123 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:32:51.591310 (kubelet)[2521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:32:51.640201 kubelet[2521]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:32:51.640201 kubelet[2521]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:32:51.640201 kubelet[2521]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 17:32:51.640201 kubelet[2521]: I0904 17:32:51.640277 2521 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:32:51.650157 kubelet[2521]: I0904 17:32:51.650114 2521 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Sep 4 17:32:51.650157 kubelet[2521]: I0904 17:32:51.650146 2521 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:32:51.650429 kubelet[2521]: I0904 17:32:51.650408 2521 server.go:927] "Client rotation is on, will bootstrap in background" Sep 4 17:32:51.652201 kubelet[2521]: I0904 17:32:51.652170 2521 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:32:51.653541 kubelet[2521]: I0904 17:32:51.653508 2521 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:32:51.658833 kubelet[2521]: I0904 17:32:51.658793 2521 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:32:51.659033 kubelet[2521]: I0904 17:32:51.658996 2521 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:32:51.659612 kubelet[2521]: I0904 17:32:51.659023 2521 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:32:51.659612 kubelet[2521]: I0904 17:32:51.659315 2521 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:32:51.659612 
kubelet[2521]: I0904 17:32:51.659328 2521 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:32:51.659612 kubelet[2521]: I0904 17:32:51.659365 2521 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:32:51.659612 kubelet[2521]: I0904 17:32:51.659481 2521 kubelet.go:400] "Attempting to sync node with API server" Sep 4 17:32:51.659822 kubelet[2521]: I0904 17:32:51.659492 2521 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:32:51.659822 kubelet[2521]: I0904 17:32:51.659542 2521 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:32:51.659822 kubelet[2521]: I0904 17:32:51.659562 2521 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:32:51.660726 kubelet[2521]: I0904 17:32:51.660706 2521 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:32:51.660890 kubelet[2521]: I0904 17:32:51.660875 2521 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:32:51.661282 kubelet[2521]: I0904 17:32:51.661249 2521 server.go:1264] "Started kubelet" Sep 4 17:32:51.663252 kubelet[2521]: I0904 17:32:51.663225 2521 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:32:51.663746 kubelet[2521]: I0904 17:32:51.663720 2521 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:32:51.664258 kubelet[2521]: I0904 17:32:51.664120 2521 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Sep 4 17:32:51.664258 kubelet[2521]: I0904 17:32:51.664220 2521 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:32:51.664337 kubelet[2521]: I0904 17:32:51.664296 2521 reconciler.go:26] "Reconciler: start to sync state" Sep 4 17:32:51.665622 kubelet[2521]: I0904 17:32:51.665524 2521 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 
17:32:51.665877 kubelet[2521]: I0904 17:32:51.665842 2521 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:32:51.671453 kubelet[2521]: I0904 17:32:51.670863 2521 server.go:455] "Adding debug handlers to kubelet server" Sep 4 17:32:51.680521 kubelet[2521]: I0904 17:32:51.680479 2521 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:32:51.680521 kubelet[2521]: I0904 17:32:51.680505 2521 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:32:51.680695 kubelet[2521]: I0904 17:32:51.680614 2521 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:32:51.704430 kubelet[2521]: I0904 17:32:51.704379 2521 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:32:51.705416 kubelet[2521]: I0904 17:32:51.705391 2521 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:32:51.705476 kubelet[2521]: I0904 17:32:51.705428 2521 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:32:51.705476 kubelet[2521]: I0904 17:32:51.705447 2521 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 17:32:51.705525 kubelet[2521]: E0904 17:32:51.705488 2521 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:32:51.729569 kubelet[2521]: I0904 17:32:51.729472 2521 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:32:51.729569 kubelet[2521]: I0904 17:32:51.729492 2521 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:32:51.729569 kubelet[2521]: I0904 17:32:51.729510 2521 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:32:51.729729 kubelet[2521]: I0904 17:32:51.729691 2521 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:32:51.729729 kubelet[2521]: I0904 17:32:51.729701 2521 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:32:51.729729 kubelet[2521]: I0904 17:32:51.729718 2521 policy_none.go:49] "None policy: Start" Sep 4 17:32:51.731810 kubelet[2521]: I0904 17:32:51.731779 2521 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:32:51.731810 kubelet[2521]: I0904 17:32:51.731812 2521 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:32:51.732042 kubelet[2521]: I0904 17:32:51.732012 2521 state_mem.go:75] "Updated machine memory state" Sep 4 17:32:51.736953 kubelet[2521]: I0904 17:32:51.736915 2521 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:32:51.737350 kubelet[2521]: I0904 17:32:51.737151 2521 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 17:32:51.737350 kubelet[2521]: I0904 17:32:51.737293 2521 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:32:51.767615 kubelet[2521]: I0904 17:32:51.767557 2521 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Sep 4 17:32:51.798126 kubelet[2521]: I0904 17:32:51.798071 2521 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Sep 4 17:32:51.798274 kubelet[2521]: I0904 17:32:51.798221 2521 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Sep 4 17:32:51.806320 kubelet[2521]: I0904 17:32:51.806258 2521 topology_manager.go:215] "Topology Admit Handler" podUID="ab09c4a38f15561465451a45cd787c5b" podNamespace="kube-system" podName="kube-scheduler-localhost" Sep 4 17:32:51.806487 kubelet[2521]: I0904 17:32:51.806388 2521 topology_manager.go:215] "Topology Admit Handler" podUID="248a9470b4d6a24ba218b8c3517d29a0" podNamespace="kube-system" podName="kube-apiserver-localhost" Sep 4 17:32:51.806487 kubelet[2521]: I0904 17:32:51.806426 2521 topology_manager.go:215] "Topology Admit Handler" podUID="a75cc901e91bc66fd9615154dc537be7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Sep 4 17:32:51.965918 kubelet[2521]: I0904 17:32:51.965868 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/248a9470b4d6a24ba218b8c3517d29a0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"248a9470b4d6a24ba218b8c3517d29a0\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:32:51.965918 kubelet[2521]: I0904 17:32:51.965910 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/248a9470b4d6a24ba218b8c3517d29a0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"248a9470b4d6a24ba218b8c3517d29a0\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:32:51.966081 kubelet[2521]: I0904 17:32:51.965945 2521 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/248a9470b4d6a24ba218b8c3517d29a0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"248a9470b4d6a24ba218b8c3517d29a0\") " pod="kube-system/kube-apiserver-localhost" Sep 4 17:32:51.966081 kubelet[2521]: I0904 17:32:51.965963 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:32:51.966081 kubelet[2521]: I0904 17:32:51.965981 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:32:51.966081 kubelet[2521]: I0904 17:32:51.965997 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:32:51.966081 kubelet[2521]: I0904 17:32:51.966014 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:32:51.966185 kubelet[2521]: I0904 
17:32:51.966031 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ab09c4a38f15561465451a45cd787c5b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ab09c4a38f15561465451a45cd787c5b\") " pod="kube-system/kube-scheduler-localhost" Sep 4 17:32:51.966185 kubelet[2521]: I0904 17:32:51.966053 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a75cc901e91bc66fd9615154dc537be7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a75cc901e91bc66fd9615154dc537be7\") " pod="kube-system/kube-controller-manager-localhost" Sep 4 17:32:52.116186 kubelet[2521]: E0904 17:32:52.115987 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:52.116186 kubelet[2521]: E0904 17:32:52.116065 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:52.116186 kubelet[2521]: E0904 17:32:52.116110 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:52.661697 kubelet[2521]: I0904 17:32:52.661648 2521 apiserver.go:52] "Watching apiserver" Sep 4 17:32:52.664588 kubelet[2521]: I0904 17:32:52.664465 2521 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 17:32:52.707976 kubelet[2521]: I0904 17:32:52.707491 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.707476208 podStartE2EDuration="1.707476208s" podCreationTimestamp="2024-09-04 17:32:51 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:32:52.707455388 +0000 UTC m=+1.112576492" watchObservedRunningTime="2024-09-04 17:32:52.707476208 +0000 UTC m=+1.112597312" Sep 4 17:32:52.717539 kubelet[2521]: E0904 17:32:52.717434 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:52.717673 kubelet[2521]: E0904 17:32:52.717603 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:52.717861 kubelet[2521]: E0904 17:32:52.717840 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:52.747531 kubelet[2521]: I0904 17:32:52.747476 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7474579430000001 podStartE2EDuration="1.747457943s" podCreationTimestamp="2024-09-04 17:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:32:52.729218911 +0000 UTC m=+1.134340055" watchObservedRunningTime="2024-09-04 17:32:52.747457943 +0000 UTC m=+1.152579047" Sep 4 17:32:52.761426 kubelet[2521]: I0904 17:32:52.761243 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.76122483 podStartE2EDuration="1.76122483s" podCreationTimestamp="2024-09-04 17:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:32:52.748261502 +0000 
UTC m=+1.153382606" watchObservedRunningTime="2024-09-04 17:32:52.76122483 +0000 UTC m=+1.166345935" Sep 4 17:32:53.719518 kubelet[2521]: E0904 17:32:53.719462 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:56.206444 kubelet[2521]: E0904 17:32:56.206371 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:56.494669 sudo[1614]: pam_unix(sudo:session): session closed for user root Sep 4 17:32:56.500484 sshd[1611]: pam_unix(sshd:session): session closed for user core Sep 4 17:32:56.503624 systemd-logind[1418]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:32:56.503759 systemd[1]: sshd@6-10.0.0.119:22-10.0.0.1:41158.service: Deactivated successfully. Sep 4 17:32:56.505679 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:32:56.506450 systemd[1]: session-7.scope: Consumed 6.417s CPU time, 136.4M memory peak, 0B memory swap peak. Sep 4 17:32:56.508213 systemd-logind[1418]: Removed session 7. 
Sep 4 17:32:56.647048 kubelet[2521]: E0904 17:32:56.646490 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:56.725140 kubelet[2521]: E0904 17:32:56.724702 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:56.725287 kubelet[2521]: E0904 17:32:56.725225 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:58.528935 kubelet[2521]: E0904 17:32:58.528894 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:32:58.728979 kubelet[2521]: E0904 17:32:58.728747 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:02.132674 update_engine[1422]: I0904 17:33:02.132620 1422 update_attempter.cc:509] Updating boot flags... Sep 4 17:33:02.160597 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2622) Sep 4 17:33:02.198311 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2621) Sep 4 17:33:07.691328 kubelet[2521]: I0904 17:33:07.691294 2521 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:33:07.692518 containerd[1436]: time="2024-09-04T17:33:07.692469797Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 4 17:33:07.692913 kubelet[2521]: I0904 17:33:07.692698 2521 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:33:08.689032 kubelet[2521]: I0904 17:33:08.688790 2521 topology_manager.go:215] "Topology Admit Handler" podUID="c8e2b64c-44cb-4a67-b41f-72d47fd64705" podNamespace="kube-system" podName="kube-proxy-hpdwb" Sep 4 17:33:08.701513 systemd[1]: Created slice kubepods-besteffort-podc8e2b64c_44cb_4a67_b41f_72d47fd64705.slice - libcontainer container kubepods-besteffort-podc8e2b64c_44cb_4a67_b41f_72d47fd64705.slice. Sep 4 17:33:08.747471 kubelet[2521]: I0904 17:33:08.747086 2521 topology_manager.go:215] "Topology Admit Handler" podUID="528010a4-addb-44be-bf51-b67a691243f3" podNamespace="tigera-operator" podName="tigera-operator-77f994b5bb-kg592" Sep 4 17:33:08.756043 systemd[1]: Created slice kubepods-besteffort-pod528010a4_addb_44be_bf51_b67a691243f3.slice - libcontainer container kubepods-besteffort-pod528010a4_addb_44be_bf51_b67a691243f3.slice. 
Sep 4 17:33:08.780511 kubelet[2521]: I0904 17:33:08.780462 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/528010a4-addb-44be-bf51-b67a691243f3-var-lib-calico\") pod \"tigera-operator-77f994b5bb-kg592\" (UID: \"528010a4-addb-44be-bf51-b67a691243f3\") " pod="tigera-operator/tigera-operator-77f994b5bb-kg592" Sep 4 17:33:08.780511 kubelet[2521]: I0904 17:33:08.780507 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hv57\" (UniqueName: \"kubernetes.io/projected/c8e2b64c-44cb-4a67-b41f-72d47fd64705-kube-api-access-5hv57\") pod \"kube-proxy-hpdwb\" (UID: \"c8e2b64c-44cb-4a67-b41f-72d47fd64705\") " pod="kube-system/kube-proxy-hpdwb" Sep 4 17:33:08.780511 kubelet[2521]: I0904 17:33:08.780527 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8e2b64c-44cb-4a67-b41f-72d47fd64705-xtables-lock\") pod \"kube-proxy-hpdwb\" (UID: \"c8e2b64c-44cb-4a67-b41f-72d47fd64705\") " pod="kube-system/kube-proxy-hpdwb" Sep 4 17:33:08.780705 kubelet[2521]: I0904 17:33:08.780546 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c8e2b64c-44cb-4a67-b41f-72d47fd64705-kube-proxy\") pod \"kube-proxy-hpdwb\" (UID: \"c8e2b64c-44cb-4a67-b41f-72d47fd64705\") " pod="kube-system/kube-proxy-hpdwb" Sep 4 17:33:08.780705 kubelet[2521]: I0904 17:33:08.780562 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8e2b64c-44cb-4a67-b41f-72d47fd64705-lib-modules\") pod \"kube-proxy-hpdwb\" (UID: \"c8e2b64c-44cb-4a67-b41f-72d47fd64705\") " pod="kube-system/kube-proxy-hpdwb" Sep 4 17:33:08.780705 kubelet[2521]: I0904 
17:33:08.780588 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt6t4\" (UniqueName: \"kubernetes.io/projected/528010a4-addb-44be-bf51-b67a691243f3-kube-api-access-gt6t4\") pod \"tigera-operator-77f994b5bb-kg592\" (UID: \"528010a4-addb-44be-bf51-b67a691243f3\") " pod="tigera-operator/tigera-operator-77f994b5bb-kg592" Sep 4 17:33:09.009699 kubelet[2521]: E0904 17:33:09.009259 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:09.010330 containerd[1436]: time="2024-09-04T17:33:09.010291114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hpdwb,Uid:c8e2b64c-44cb-4a67-b41f-72d47fd64705,Namespace:kube-system,Attempt:0,}" Sep 4 17:33:09.038237 containerd[1436]: time="2024-09-04T17:33:09.038140567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:09.038237 containerd[1436]: time="2024-09-04T17:33:09.038190667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:09.038237 containerd[1436]: time="2024-09-04T17:33:09.038207473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:09.038237 containerd[1436]: time="2024-09-04T17:33:09.038217157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:09.057307 systemd[1]: Started cri-containerd-765a5c2c500ebcb98347da1ac20bd022ac8acdf5e3b2de6ea64c1ad34a0563e6.scope - libcontainer container 765a5c2c500ebcb98347da1ac20bd022ac8acdf5e3b2de6ea64c1ad34a0563e6. 
Sep 4 17:33:09.062638 containerd[1436]: time="2024-09-04T17:33:09.062586278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-kg592,Uid:528010a4-addb-44be-bf51-b67a691243f3,Namespace:tigera-operator,Attempt:0,}" Sep 4 17:33:09.082020 containerd[1436]: time="2024-09-04T17:33:09.081952348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hpdwb,Uid:c8e2b64c-44cb-4a67-b41f-72d47fd64705,Namespace:kube-system,Attempt:0,} returns sandbox id \"765a5c2c500ebcb98347da1ac20bd022ac8acdf5e3b2de6ea64c1ad34a0563e6\"" Sep 4 17:33:09.083060 kubelet[2521]: E0904 17:33:09.083022 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:09.086707 containerd[1436]: time="2024-09-04T17:33:09.086665765Z" level=info msg="CreateContainer within sandbox \"765a5c2c500ebcb98347da1ac20bd022ac8acdf5e3b2de6ea64c1ad34a0563e6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:33:09.089981 containerd[1436]: time="2024-09-04T17:33:09.089906202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:09.089981 containerd[1436]: time="2024-09-04T17:33:09.089955941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:09.089981 containerd[1436]: time="2024-09-04T17:33:09.089970587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:09.089981 containerd[1436]: time="2024-09-04T17:33:09.089980031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:09.102436 containerd[1436]: time="2024-09-04T17:33:09.102017774Z" level=info msg="CreateContainer within sandbox \"765a5c2c500ebcb98347da1ac20bd022ac8acdf5e3b2de6ea64c1ad34a0563e6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"76706dc73d973eeff052849175fe29a3721c03691ab7e22b1372c4f7b1436bb2\"" Sep 4 17:33:09.102784 containerd[1436]: time="2024-09-04T17:33:09.102759706Z" level=info msg="StartContainer for \"76706dc73d973eeff052849175fe29a3721c03691ab7e22b1372c4f7b1436bb2\"" Sep 4 17:33:09.104737 systemd[1]: Started cri-containerd-ba8bc3c0c55f69df8e8349a6e2728d19e8a58d636504a02d464803c1042a578a.scope - libcontainer container ba8bc3c0c55f69df8e8349a6e2728d19e8a58d636504a02d464803c1042a578a. Sep 4 17:33:09.134786 systemd[1]: Started cri-containerd-76706dc73d973eeff052849175fe29a3721c03691ab7e22b1372c4f7b1436bb2.scope - libcontainer container 76706dc73d973eeff052849175fe29a3721c03691ab7e22b1372c4f7b1436bb2. 
Sep 4 17:33:09.145993 containerd[1436]: time="2024-09-04T17:33:09.145952323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-kg592,Uid:528010a4-addb-44be-bf51-b67a691243f3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ba8bc3c0c55f69df8e8349a6e2728d19e8a58d636504a02d464803c1042a578a\"" Sep 4 17:33:09.149250 containerd[1436]: time="2024-09-04T17:33:09.149205045Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 17:33:09.166999 containerd[1436]: time="2024-09-04T17:33:09.166954638Z" level=info msg="StartContainer for \"76706dc73d973eeff052849175fe29a3721c03691ab7e22b1372c4f7b1436bb2\" returns successfully" Sep 4 17:33:09.749332 kubelet[2521]: E0904 17:33:09.749251 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:10.220772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3868639938.mount: Deactivated successfully. 
Sep 4 17:33:11.046584 containerd[1436]: time="2024-09-04T17:33:11.046505981Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:11.047273 containerd[1436]: time="2024-09-04T17:33:11.047156896Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485903" Sep 4 17:33:11.048193 containerd[1436]: time="2024-09-04T17:33:11.048156056Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:11.050308 containerd[1436]: time="2024-09-04T17:33:11.050263456Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:11.051747 containerd[1436]: time="2024-09-04T17:33:11.051669123Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 1.902412058s" Sep 4 17:33:11.051976 containerd[1436]: time="2024-09-04T17:33:11.051863033Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\"" Sep 4 17:33:11.058071 containerd[1436]: time="2024-09-04T17:33:11.058030537Z" level=info msg="CreateContainer within sandbox \"ba8bc3c0c55f69df8e8349a6e2728d19e8a58d636504a02d464803c1042a578a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 17:33:11.069200 containerd[1436]: time="2024-09-04T17:33:11.069143583Z" level=info msg="CreateContainer within sandbox 
\"ba8bc3c0c55f69df8e8349a6e2728d19e8a58d636504a02d464803c1042a578a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bb38117cff23ba9d670d7163e056e8fcba7bb61c9d33a9765ade7fbac2510e21\"" Sep 4 17:33:11.070496 containerd[1436]: time="2024-09-04T17:33:11.070178196Z" level=info msg="StartContainer for \"bb38117cff23ba9d670d7163e056e8fcba7bb61c9d33a9765ade7fbac2510e21\"" Sep 4 17:33:11.100811 systemd[1]: Started cri-containerd-bb38117cff23ba9d670d7163e056e8fcba7bb61c9d33a9765ade7fbac2510e21.scope - libcontainer container bb38117cff23ba9d670d7163e056e8fcba7bb61c9d33a9765ade7fbac2510e21. Sep 4 17:33:11.124051 containerd[1436]: time="2024-09-04T17:33:11.123995800Z" level=info msg="StartContainer for \"bb38117cff23ba9d670d7163e056e8fcba7bb61c9d33a9765ade7fbac2510e21\" returns successfully" Sep 4 17:33:11.764074 kubelet[2521]: I0904 17:33:11.763987 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-77f994b5bb-kg592" podStartSLOduration=1.855971005 podStartE2EDuration="3.76396982s" podCreationTimestamp="2024-09-04 17:33:08 +0000 UTC" firstStartedPulling="2024-09-04 17:33:09.148568514 +0000 UTC m=+17.553689578" lastFinishedPulling="2024-09-04 17:33:11.056567289 +0000 UTC m=+19.461688393" observedRunningTime="2024-09-04 17:33:11.763935687 +0000 UTC m=+20.169056791" watchObservedRunningTime="2024-09-04 17:33:11.76396982 +0000 UTC m=+20.169090924" Sep 4 17:33:11.764469 kubelet[2521]: I0904 17:33:11.764231 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hpdwb" podStartSLOduration=3.764224551 podStartE2EDuration="3.764224551s" podCreationTimestamp="2024-09-04 17:33:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:33:09.760242907 +0000 UTC m=+18.165364011" watchObservedRunningTime="2024-09-04 17:33:11.764224551 +0000 UTC m=+20.169345615" Sep 
4 17:33:15.564610 kubelet[2521]: I0904 17:33:15.564512 2521 topology_manager.go:215] "Topology Admit Handler" podUID="95ef8fcd-b73a-46cc-ac22-4fef7200c664" podNamespace="calico-system" podName="calico-typha-5cfc4698d8-49w6j" Sep 4 17:33:15.587490 systemd[1]: Created slice kubepods-besteffort-pod95ef8fcd_b73a_46cc_ac22_4fef7200c664.slice - libcontainer container kubepods-besteffort-pod95ef8fcd_b73a_46cc_ac22_4fef7200c664.slice. Sep 4 17:33:15.628173 kubelet[2521]: I0904 17:33:15.628103 2521 topology_manager.go:215] "Topology Admit Handler" podUID="6f1c16c6-32e3-42ae-8822-d547eac641d4" podNamespace="calico-system" podName="calico-node-l6ntn" Sep 4 17:33:15.635698 kubelet[2521]: I0904 17:33:15.633550 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95ef8fcd-b73a-46cc-ac22-4fef7200c664-tigera-ca-bundle\") pod \"calico-typha-5cfc4698d8-49w6j\" (UID: \"95ef8fcd-b73a-46cc-ac22-4fef7200c664\") " pod="calico-system/calico-typha-5cfc4698d8-49w6j" Sep 4 17:33:15.637249 kubelet[2521]: I0904 17:33:15.637199 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/95ef8fcd-b73a-46cc-ac22-4fef7200c664-typha-certs\") pod \"calico-typha-5cfc4698d8-49w6j\" (UID: \"95ef8fcd-b73a-46cc-ac22-4fef7200c664\") " pod="calico-system/calico-typha-5cfc4698d8-49w6j" Sep 4 17:33:15.637677 kubelet[2521]: I0904 17:33:15.637368 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbr9x\" (UniqueName: \"kubernetes.io/projected/95ef8fcd-b73a-46cc-ac22-4fef7200c664-kube-api-access-tbr9x\") pod \"calico-typha-5cfc4698d8-49w6j\" (UID: \"95ef8fcd-b73a-46cc-ac22-4fef7200c664\") " pod="calico-system/calico-typha-5cfc4698d8-49w6j" Sep 4 17:33:15.645777 systemd[1]: Created slice 
kubepods-besteffort-pod6f1c16c6_32e3_42ae_8822_d547eac641d4.slice - libcontainer container kubepods-besteffort-pod6f1c16c6_32e3_42ae_8822_d547eac641d4.slice. Sep 4 17:33:15.738297 kubelet[2521]: I0904 17:33:15.738166 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6f1c16c6-32e3-42ae-8822-d547eac641d4-var-run-calico\") pod \"calico-node-l6ntn\" (UID: \"6f1c16c6-32e3-42ae-8822-d547eac641d4\") " pod="calico-system/calico-node-l6ntn" Sep 4 17:33:15.738297 kubelet[2521]: I0904 17:33:15.738212 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6f1c16c6-32e3-42ae-8822-d547eac641d4-var-lib-calico\") pod \"calico-node-l6ntn\" (UID: \"6f1c16c6-32e3-42ae-8822-d547eac641d4\") " pod="calico-system/calico-node-l6ntn" Sep 4 17:33:15.738297 kubelet[2521]: I0904 17:33:15.738232 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6f1c16c6-32e3-42ae-8822-d547eac641d4-cni-net-dir\") pod \"calico-node-l6ntn\" (UID: \"6f1c16c6-32e3-42ae-8822-d547eac641d4\") " pod="calico-system/calico-node-l6ntn" Sep 4 17:33:15.739057 kubelet[2521]: I0904 17:33:15.738888 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f1c16c6-32e3-42ae-8822-d547eac641d4-xtables-lock\") pod \"calico-node-l6ntn\" (UID: \"6f1c16c6-32e3-42ae-8822-d547eac641d4\") " pod="calico-system/calico-node-l6ntn" Sep 4 17:33:15.739197 kubelet[2521]: I0904 17:33:15.739133 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f1c16c6-32e3-42ae-8822-d547eac641d4-tigera-ca-bundle\") pod \"calico-node-l6ntn\" (UID: 
\"6f1c16c6-32e3-42ae-8822-d547eac641d4\") " pod="calico-system/calico-node-l6ntn" Sep 4 17:33:15.740614 kubelet[2521]: I0904 17:33:15.739715 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f1c16c6-32e3-42ae-8822-d547eac641d4-lib-modules\") pod \"calico-node-l6ntn\" (UID: \"6f1c16c6-32e3-42ae-8822-d547eac641d4\") " pod="calico-system/calico-node-l6ntn" Sep 4 17:33:15.741100 kubelet[2521]: I0904 17:33:15.740716 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6f1c16c6-32e3-42ae-8822-d547eac641d4-policysync\") pod \"calico-node-l6ntn\" (UID: \"6f1c16c6-32e3-42ae-8822-d547eac641d4\") " pod="calico-system/calico-node-l6ntn" Sep 4 17:33:15.741100 kubelet[2521]: I0904 17:33:15.740751 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5twnx\" (UniqueName: \"kubernetes.io/projected/6f1c16c6-32e3-42ae-8822-d547eac641d4-kube-api-access-5twnx\") pod \"calico-node-l6ntn\" (UID: \"6f1c16c6-32e3-42ae-8822-d547eac641d4\") " pod="calico-system/calico-node-l6ntn" Sep 4 17:33:15.741100 kubelet[2521]: I0904 17:33:15.740775 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6f1c16c6-32e3-42ae-8822-d547eac641d4-cni-bin-dir\") pod \"calico-node-l6ntn\" (UID: \"6f1c16c6-32e3-42ae-8822-d547eac641d4\") " pod="calico-system/calico-node-l6ntn" Sep 4 17:33:15.741100 kubelet[2521]: I0904 17:33:15.740790 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6f1c16c6-32e3-42ae-8822-d547eac641d4-flexvol-driver-host\") pod \"calico-node-l6ntn\" (UID: \"6f1c16c6-32e3-42ae-8822-d547eac641d4\") " 
pod="calico-system/calico-node-l6ntn" Sep 4 17:33:15.741100 kubelet[2521]: I0904 17:33:15.740811 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6f1c16c6-32e3-42ae-8822-d547eac641d4-node-certs\") pod \"calico-node-l6ntn\" (UID: \"6f1c16c6-32e3-42ae-8822-d547eac641d4\") " pod="calico-system/calico-node-l6ntn" Sep 4 17:33:15.741232 kubelet[2521]: I0904 17:33:15.740826 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6f1c16c6-32e3-42ae-8822-d547eac641d4-cni-log-dir\") pod \"calico-node-l6ntn\" (UID: \"6f1c16c6-32e3-42ae-8822-d547eac641d4\") " pod="calico-system/calico-node-l6ntn" Sep 4 17:33:15.750890 kubelet[2521]: I0904 17:33:15.750840 2521 topology_manager.go:215] "Topology Admit Handler" podUID="c22bd57a-c877-45b8-9da9-64ec19c9aeb3" podNamespace="calico-system" podName="csi-node-driver-dwclf" Sep 4 17:33:15.753312 kubelet[2521]: E0904 17:33:15.753111 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dwclf" podUID="c22bd57a-c877-45b8-9da9-64ec19c9aeb3" Sep 4 17:33:15.843261 kubelet[2521]: I0904 17:33:15.841600 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c22bd57a-c877-45b8-9da9-64ec19c9aeb3-kubelet-dir\") pod \"csi-node-driver-dwclf\" (UID: \"c22bd57a-c877-45b8-9da9-64ec19c9aeb3\") " pod="calico-system/csi-node-driver-dwclf" Sep 4 17:33:15.843261 kubelet[2521]: I0904 17:33:15.841648 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8xpm\" (UniqueName: 
\"kubernetes.io/projected/c22bd57a-c877-45b8-9da9-64ec19c9aeb3-kube-api-access-w8xpm\") pod \"csi-node-driver-dwclf\" (UID: \"c22bd57a-c877-45b8-9da9-64ec19c9aeb3\") " pod="calico-system/csi-node-driver-dwclf" Sep 4 17:33:15.843261 kubelet[2521]: I0904 17:33:15.841677 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c22bd57a-c877-45b8-9da9-64ec19c9aeb3-socket-dir\") pod \"csi-node-driver-dwclf\" (UID: \"c22bd57a-c877-45b8-9da9-64ec19c9aeb3\") " pod="calico-system/csi-node-driver-dwclf" Sep 4 17:33:15.843261 kubelet[2521]: I0904 17:33:15.841706 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c22bd57a-c877-45b8-9da9-64ec19c9aeb3-varrun\") pod \"csi-node-driver-dwclf\" (UID: \"c22bd57a-c877-45b8-9da9-64ec19c9aeb3\") " pod="calico-system/csi-node-driver-dwclf" Sep 4 17:33:15.843261 kubelet[2521]: I0904 17:33:15.841747 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c22bd57a-c877-45b8-9da9-64ec19c9aeb3-registration-dir\") pod \"csi-node-driver-dwclf\" (UID: \"c22bd57a-c877-45b8-9da9-64ec19c9aeb3\") " pod="calico-system/csi-node-driver-dwclf" Sep 4 17:33:15.844462 kubelet[2521]: E0904 17:33:15.844305 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.844650 kubelet[2521]: W0904 17:33:15.844629 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.844844 kubelet[2521]: E0904 17:33:15.844826 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:15.850937 kubelet[2521]: E0904 17:33:15.850820 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.851162 kubelet[2521]: W0904 17:33:15.851139 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.851263 kubelet[2521]: E0904 17:33:15.851233 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:15.856755 kubelet[2521]: E0904 17:33:15.856723 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.856898 kubelet[2521]: W0904 17:33:15.856879 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.856967 kubelet[2521]: E0904 17:33:15.856954 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:15.894361 kubelet[2521]: E0904 17:33:15.894318 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:15.895221 containerd[1436]: time="2024-09-04T17:33:15.895158383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cfc4698d8-49w6j,Uid:95ef8fcd-b73a-46cc-ac22-4fef7200c664,Namespace:calico-system,Attempt:0,}" Sep 4 17:33:15.926612 containerd[1436]: time="2024-09-04T17:33:15.926494350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:15.926612 containerd[1436]: time="2024-09-04T17:33:15.926551928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:15.926612 containerd[1436]: time="2024-09-04T17:33:15.926569253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:15.926612 containerd[1436]: time="2024-09-04T17:33:15.926604424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:15.942791 kubelet[2521]: E0904 17:33:15.942710 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.942791 kubelet[2521]: W0904 17:33:15.942734 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.942791 kubelet[2521]: E0904 17:33:15.942754 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:15.943038 kubelet[2521]: E0904 17:33:15.942934 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.943038 kubelet[2521]: W0904 17:33:15.942943 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.943038 kubelet[2521]: E0904 17:33:15.942953 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:15.943599 kubelet[2521]: E0904 17:33:15.943114 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.943599 kubelet[2521]: W0904 17:33:15.943133 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.943599 kubelet[2521]: E0904 17:33:15.943142 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:15.944352 kubelet[2521]: E0904 17:33:15.944319 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.944352 kubelet[2521]: W0904 17:33:15.944344 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.944433 kubelet[2521]: E0904 17:33:15.944364 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:15.944653 kubelet[2521]: E0904 17:33:15.944633 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.944653 kubelet[2521]: W0904 17:33:15.944646 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.944807 kubelet[2521]: E0904 17:33:15.944717 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:15.944841 kubelet[2521]: E0904 17:33:15.944810 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.944841 kubelet[2521]: W0904 17:33:15.944819 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.944899 kubelet[2521]: E0904 17:33:15.944886 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:15.945002 kubelet[2521]: E0904 17:33:15.944992 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.945024 kubelet[2521]: W0904 17:33:15.945003 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.945054 kubelet[2521]: E0904 17:33:15.945042 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:15.945216 kubelet[2521]: E0904 17:33:15.945197 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.945216 kubelet[2521]: W0904 17:33:15.945209 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.945270 kubelet[2521]: E0904 17:33:15.945248 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:15.945402 kubelet[2521]: E0904 17:33:15.945390 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.945431 kubelet[2521]: W0904 17:33:15.945402 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.945431 kubelet[2521]: E0904 17:33:15.945411 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:15.945642 kubelet[2521]: E0904 17:33:15.945626 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.945642 kubelet[2521]: W0904 17:33:15.945641 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.945816 kubelet[2521]: E0904 17:33:15.945799 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:15.945900 kubelet[2521]: E0904 17:33:15.945885 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.945952 kubelet[2521]: W0904 17:33:15.945898 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.945952 kubelet[2521]: E0904 17:33:15.945947 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:15.946202 kubelet[2521]: E0904 17:33:15.946187 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.946236 kubelet[2521]: W0904 17:33:15.946202 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.946264 kubelet[2521]: E0904 17:33:15.946239 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:15.946380 kubelet[2521]: E0904 17:33:15.946369 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.946380 kubelet[2521]: W0904 17:33:15.946379 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.946471 kubelet[2521]: E0904 17:33:15.946454 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:15.946528 kubelet[2521]: E0904 17:33:15.946517 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.946528 kubelet[2521]: W0904 17:33:15.946526 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.946615 kubelet[2521]: E0904 17:33:15.946603 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:15.946677 kubelet[2521]: E0904 17:33:15.946667 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.946677 kubelet[2521]: W0904 17:33:15.946676 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.946760 kubelet[2521]: E0904 17:33:15.946748 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:15.946824 kubelet[2521]: E0904 17:33:15.946809 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.946824 kubelet[2521]: W0904 17:33:15.946818 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.946878 kubelet[2521]: E0904 17:33:15.946826 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:15.947006 kubelet[2521]: E0904 17:33:15.946986 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.947006 kubelet[2521]: W0904 17:33:15.946998 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.947057 kubelet[2521]: E0904 17:33:15.947007 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:15.947303 kubelet[2521]: E0904 17:33:15.947284 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.947345 kubelet[2521]: W0904 17:33:15.947311 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.947345 kubelet[2521]: E0904 17:33:15.947329 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:15.948241 kubelet[2521]: E0904 17:33:15.947618 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.948241 kubelet[2521]: W0904 17:33:15.947636 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.948241 kubelet[2521]: E0904 17:33:15.947672 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:15.948241 kubelet[2521]: E0904 17:33:15.947877 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.948241 kubelet[2521]: W0904 17:33:15.947986 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.948241 kubelet[2521]: E0904 17:33:15.948039 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:15.948603 kubelet[2521]: E0904 17:33:15.948273 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.948603 kubelet[2521]: W0904 17:33:15.948286 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.948603 kubelet[2521]: E0904 17:33:15.948312 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:15.948684 kubelet[2521]: E0904 17:33:15.948662 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.948684 kubelet[2521]: W0904 17:33:15.948675 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.948984 kubelet[2521]: E0904 17:33:15.948810 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:15.949033 kubelet[2521]: E0904 17:33:15.949023 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.949033 kubelet[2521]: W0904 17:33:15.949032 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.949129 kubelet[2521]: E0904 17:33:15.949044 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:15.949801 kubelet[2521]: E0904 17:33:15.949771 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.949801 kubelet[2521]: W0904 17:33:15.949794 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.949904 kubelet[2521]: E0904 17:33:15.949814 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:15.949905 systemd[1]: Started cri-containerd-f2df0481c0fcd0cd6c910786979c03b5d1eb42a983a8dba435e4d86860c05173.scope - libcontainer container f2df0481c0fcd0cd6c910786979c03b5d1eb42a983a8dba435e4d86860c05173. Sep 4 17:33:15.950590 kubelet[2521]: E0904 17:33:15.950552 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:15.951653 containerd[1436]: time="2024-09-04T17:33:15.951618021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l6ntn,Uid:6f1c16c6-32e3-42ae-8822-d547eac641d4,Namespace:calico-system,Attempt:0,}" Sep 4 17:33:15.952549 kubelet[2521]: E0904 17:33:15.952522 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.952549 kubelet[2521]: W0904 17:33:15.952542 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.952799 kubelet[2521]: E0904 17:33:15.952563 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin 
from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:15.970246 kubelet[2521]: E0904 17:33:15.970142 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:15.970246 kubelet[2521]: W0904 17:33:15.970182 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:15.970246 kubelet[2521]: E0904 17:33:15.970201 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:16.007832 containerd[1436]: time="2024-09-04T17:33:16.007765851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cfc4698d8-49w6j,Uid:95ef8fcd-b73a-46cc-ac22-4fef7200c664,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2df0481c0fcd0cd6c910786979c03b5d1eb42a983a8dba435e4d86860c05173\"" Sep 4 17:33:16.008807 kubelet[2521]: E0904 17:33:16.008540 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:16.018507 containerd[1436]: time="2024-09-04T17:33:16.018454829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:33:16.036297 containerd[1436]: time="2024-09-04T17:33:16.036047154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:16.036297 containerd[1436]: time="2024-09-04T17:33:16.036118295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:16.036297 containerd[1436]: time="2024-09-04T17:33:16.036132899Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:16.036297 containerd[1436]: time="2024-09-04T17:33:16.036142662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:16.064800 systemd[1]: Started cri-containerd-3386d9082754501e2f35f137961ed4af4109aa77a6fc6cb6e823326dbddd5086.scope - libcontainer container 3386d9082754501e2f35f137961ed4af4109aa77a6fc6cb6e823326dbddd5086. Sep 4 17:33:16.100037 containerd[1436]: time="2024-09-04T17:33:16.099907583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l6ntn,Uid:6f1c16c6-32e3-42ae-8822-d547eac641d4,Namespace:calico-system,Attempt:0,} returns sandbox id \"3386d9082754501e2f35f137961ed4af4109aa77a6fc6cb6e823326dbddd5086\"" Sep 4 17:33:16.101721 kubelet[2521]: E0904 17:33:16.100650 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:17.445708 containerd[1436]: time="2024-09-04T17:33:17.445645828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:17.446292 containerd[1436]: time="2024-09-04T17:33:17.446237075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Sep 4 17:33:17.447310 containerd[1436]: time="2024-09-04T17:33:17.447253082Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:17.449722 containerd[1436]: time="2024-09-04T17:33:17.449674047Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:17.450395 containerd[1436]: time="2024-09-04T17:33:17.450353479Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 1.431843794s" Sep 4 17:33:17.450395 containerd[1436]: time="2024-09-04T17:33:17.450390729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Sep 4 17:33:17.454923 containerd[1436]: time="2024-09-04T17:33:17.454879718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:33:17.471270 containerd[1436]: time="2024-09-04T17:33:17.471224137Z" level=info msg="CreateContainer within sandbox \"f2df0481c0fcd0cd6c910786979c03b5d1eb42a983a8dba435e4d86860c05173\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:33:17.484967 containerd[1436]: time="2024-09-04T17:33:17.484886638Z" level=info msg="CreateContainer within sandbox \"f2df0481c0fcd0cd6c910786979c03b5d1eb42a983a8dba435e4d86860c05173\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a193c81804a99ea37cb92d6a02cb87dea6048987c288f44ca3dc48eb951499ec\"" Sep 4 17:33:17.490704 containerd[1436]: time="2024-09-04T17:33:17.490054658Z" level=info msg="StartContainer for \"a193c81804a99ea37cb92d6a02cb87dea6048987c288f44ca3dc48eb951499ec\"" Sep 4 17:33:17.523831 systemd[1]: Started cri-containerd-a193c81804a99ea37cb92d6a02cb87dea6048987c288f44ca3dc48eb951499ec.scope - libcontainer container 
a193c81804a99ea37cb92d6a02cb87dea6048987c288f44ca3dc48eb951499ec. Sep 4 17:33:17.569942 containerd[1436]: time="2024-09-04T17:33:17.569888459Z" level=info msg="StartContainer for \"a193c81804a99ea37cb92d6a02cb87dea6048987c288f44ca3dc48eb951499ec\" returns successfully" Sep 4 17:33:17.707767 kubelet[2521]: E0904 17:33:17.706847 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dwclf" podUID="c22bd57a-c877-45b8-9da9-64ec19c9aeb3" Sep 4 17:33:17.777411 kubelet[2521]: E0904 17:33:17.776845 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:17.839797 kubelet[2521]: E0904 17:33:17.839760 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.839797 kubelet[2521]: W0904 17:33:17.839784 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.839797 kubelet[2521]: E0904 17:33:17.839807 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:17.840344 kubelet[2521]: E0904 17:33:17.840052 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.840344 kubelet[2521]: W0904 17:33:17.840061 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.840344 kubelet[2521]: E0904 17:33:17.840071 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:17.840344 kubelet[2521]: E0904 17:33:17.840266 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.840344 kubelet[2521]: W0904 17:33:17.840275 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.840344 kubelet[2521]: E0904 17:33:17.840285 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:17.840508 kubelet[2521]: E0904 17:33:17.840446 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.840508 kubelet[2521]: W0904 17:33:17.840454 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.840508 kubelet[2521]: E0904 17:33:17.840463 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:17.840647 kubelet[2521]: E0904 17:33:17.840634 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.841088 kubelet[2521]: W0904 17:33:17.840649 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.841088 kubelet[2521]: E0904 17:33:17.840661 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:17.841088 kubelet[2521]: E0904 17:33:17.840824 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.841088 kubelet[2521]: W0904 17:33:17.840833 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.841088 kubelet[2521]: E0904 17:33:17.840841 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:17.841088 kubelet[2521]: E0904 17:33:17.841038 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.841088 kubelet[2521]: W0904 17:33:17.841045 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.841088 kubelet[2521]: E0904 17:33:17.841053 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:17.841281 kubelet[2521]: E0904 17:33:17.841191 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.841281 kubelet[2521]: W0904 17:33:17.841200 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.841281 kubelet[2521]: E0904 17:33:17.841209 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:17.841827 kubelet[2521]: E0904 17:33:17.841358 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.841827 kubelet[2521]: W0904 17:33:17.841369 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.841827 kubelet[2521]: E0904 17:33:17.841376 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:17.841827 kubelet[2521]: E0904 17:33:17.841549 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.841827 kubelet[2521]: W0904 17:33:17.841555 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.841827 kubelet[2521]: E0904 17:33:17.841562 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:17.841827 kubelet[2521]: E0904 17:33:17.841711 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.841827 kubelet[2521]: W0904 17:33:17.841717 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.841827 kubelet[2521]: E0904 17:33:17.841725 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:17.842098 kubelet[2521]: E0904 17:33:17.841912 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.842098 kubelet[2521]: W0904 17:33:17.841933 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.842098 kubelet[2521]: E0904 17:33:17.841944 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:17.842167 kubelet[2521]: E0904 17:33:17.842129 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.842167 kubelet[2521]: W0904 17:33:17.842138 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.842167 kubelet[2521]: E0904 17:33:17.842147 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:17.842308 kubelet[2521]: E0904 17:33:17.842295 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.842308 kubelet[2521]: W0904 17:33:17.842306 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.842358 kubelet[2521]: E0904 17:33:17.842314 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:17.842453 kubelet[2521]: E0904 17:33:17.842431 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.842453 kubelet[2521]: W0904 17:33:17.842441 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.842453 kubelet[2521]: E0904 17:33:17.842448 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:17.860955 kubelet[2521]: E0904 17:33:17.860912 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.860955 kubelet[2521]: W0904 17:33:17.860942 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.860955 kubelet[2521]: E0904 17:33:17.860962 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:17.861228 kubelet[2521]: E0904 17:33:17.861190 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.861228 kubelet[2521]: W0904 17:33:17.861206 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.861284 kubelet[2521]: E0904 17:33:17.861232 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:17.861780 kubelet[2521]: E0904 17:33:17.861763 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.861829 kubelet[2521]: W0904 17:33:17.861780 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.861909 kubelet[2521]: E0904 17:33:17.861801 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:17.862022 kubelet[2521]: E0904 17:33:17.862011 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.862022 kubelet[2521]: W0904 17:33:17.862022 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.862081 kubelet[2521]: E0904 17:33:17.862032 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:17.862211 kubelet[2521]: E0904 17:33:17.862200 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.862244 kubelet[2521]: W0904 17:33:17.862211 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.862244 kubelet[2521]: E0904 17:33:17.862220 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:17.862384 kubelet[2521]: E0904 17:33:17.862373 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.862384 kubelet[2521]: W0904 17:33:17.862384 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.862439 kubelet[2521]: E0904 17:33:17.862396 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:17.862580 kubelet[2521]: E0904 17:33:17.862551 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.862580 kubelet[2521]: W0904 17:33:17.862579 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.862661 kubelet[2521]: E0904 17:33:17.862648 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:17.863247 kubelet[2521]: E0904 17:33:17.863231 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.863284 kubelet[2521]: W0904 17:33:17.863246 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.863327 kubelet[2521]: E0904 17:33:17.863305 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:17.863432 kubelet[2521]: E0904 17:33:17.863421 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.863464 kubelet[2521]: W0904 17:33:17.863433 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.863489 kubelet[2521]: E0904 17:33:17.863476 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:17.863633 kubelet[2521]: E0904 17:33:17.863620 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.863633 kubelet[2521]: W0904 17:33:17.863629 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.863699 kubelet[2521]: E0904 17:33:17.863648 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:17.863840 kubelet[2521]: E0904 17:33:17.863826 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.863840 kubelet[2521]: W0904 17:33:17.863837 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.864075 kubelet[2521]: E0904 17:33:17.863849 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:17.864075 kubelet[2521]: E0904 17:33:17.864018 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.864075 kubelet[2521]: W0904 17:33:17.864026 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.864153 kubelet[2521]: E0904 17:33:17.864104 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:17.864153 kubelet[2521]: E0904 17:33:17.864193 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.864279 kubelet[2521]: W0904 17:33:17.864203 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.864279 kubelet[2521]: E0904 17:33:17.864214 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:17.864363 kubelet[2521]: E0904 17:33:17.864347 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.864391 kubelet[2521]: W0904 17:33:17.864368 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.864391 kubelet[2521]: E0904 17:33:17.864379 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:17.864562 kubelet[2521]: E0904 17:33:17.864550 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.864614 kubelet[2521]: W0904 17:33:17.864561 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.864614 kubelet[2521]: E0904 17:33:17.864587 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:17.864919 kubelet[2521]: E0904 17:33:17.864905 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.864919 kubelet[2521]: W0904 17:33:17.864918 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.864980 kubelet[2521]: E0904 17:33:17.864939 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:17.865253 kubelet[2521]: E0904 17:33:17.865222 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.865253 kubelet[2521]: W0904 17:33:17.865235 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.865253 kubelet[2521]: E0904 17:33:17.865249 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:33:17.865430 kubelet[2521]: E0904 17:33:17.865417 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:33:17.865457 kubelet[2521]: W0904 17:33:17.865427 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:33:17.865457 kubelet[2521]: E0904 17:33:17.865441 2521 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:33:18.637423 containerd[1436]: time="2024-09-04T17:33:18.637295271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:18.638137 containerd[1436]: time="2024-09-04T17:33:18.638105852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Sep 4 17:33:18.639214 containerd[1436]: time="2024-09-04T17:33:18.639118968Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:18.642160 containerd[1436]: time="2024-09-04T17:33:18.641982307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:18.644383 containerd[1436]: time="2024-09-04T17:33:18.644304220Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.189372887s" Sep 4 17:33:18.644383 containerd[1436]: time="2024-09-04T17:33:18.644377920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Sep 4 17:33:18.647306 containerd[1436]: time="2024-09-04T17:33:18.647258744Z" level=info msg="CreateContainer within sandbox \"3386d9082754501e2f35f137961ed4af4109aa77a6fc6cb6e823326dbddd5086\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:33:18.672717 containerd[1436]: time="2024-09-04T17:33:18.672654659Z" level=info msg="CreateContainer within sandbox \"3386d9082754501e2f35f137961ed4af4109aa77a6fc6cb6e823326dbddd5086\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ab29e1591b4e7a69daa5a68d07b4d6f276fb4509a498e5cec2c0d87bb64bf876\"" Sep 4 17:33:18.673345 containerd[1436]: time="2024-09-04T17:33:18.673217333Z" level=info msg="StartContainer for \"ab29e1591b4e7a69daa5a68d07b4d6f276fb4509a498e5cec2c0d87bb64bf876\"" Sep 4 17:33:18.706808 systemd[1]: Started cri-containerd-ab29e1591b4e7a69daa5a68d07b4d6f276fb4509a498e5cec2c0d87bb64bf876.scope - libcontainer container ab29e1591b4e7a69daa5a68d07b4d6f276fb4509a498e5cec2c0d87bb64bf876. Sep 4 17:33:18.748167 containerd[1436]: time="2024-09-04T17:33:18.748107165Z" level=info msg="StartContainer for \"ab29e1591b4e7a69daa5a68d07b4d6f276fb4509a498e5cec2c0d87bb64bf876\" returns successfully" Sep 4 17:33:18.788558 kubelet[2521]: E0904 17:33:18.788392 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:18.789458 systemd[1]: cri-containerd-ab29e1591b4e7a69daa5a68d07b4d6f276fb4509a498e5cec2c0d87bb64bf876.scope: Deactivated successfully. 
Sep 4 17:33:18.792760 kubelet[2521]: I0904 17:33:18.790793 2521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:33:18.792932 kubelet[2521]: E0904 17:33:18.792343 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:18.805776 kubelet[2521]: I0904 17:33:18.805686 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5cfc4698d8-49w6j" podStartSLOduration=2.360525428 podStartE2EDuration="3.805669839s" podCreationTimestamp="2024-09-04 17:33:15 +0000 UTC" firstStartedPulling="2024-09-04 17:33:16.009468951 +0000 UTC m=+24.414590055" lastFinishedPulling="2024-09-04 17:33:17.454613362 +0000 UTC m=+25.859734466" observedRunningTime="2024-09-04 17:33:17.787605267 +0000 UTC m=+26.192726411" watchObservedRunningTime="2024-09-04 17:33:18.805669839 +0000 UTC m=+27.210790943" Sep 4 17:33:18.834972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab29e1591b4e7a69daa5a68d07b4d6f276fb4509a498e5cec2c0d87bb64bf876-rootfs.mount: Deactivated successfully. 
Sep 4 17:33:18.930656 containerd[1436]: time="2024-09-04T17:33:18.930255564Z" level=info msg="shim disconnected" id=ab29e1591b4e7a69daa5a68d07b4d6f276fb4509a498e5cec2c0d87bb64bf876 namespace=k8s.io Sep 4 17:33:18.930656 containerd[1436]: time="2024-09-04T17:33:18.930329184Z" level=warning msg="cleaning up after shim disconnected" id=ab29e1591b4e7a69daa5a68d07b4d6f276fb4509a498e5cec2c0d87bb64bf876 namespace=k8s.io Sep 4 17:33:18.930656 containerd[1436]: time="2024-09-04T17:33:18.930339067Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:33:19.706521 kubelet[2521]: E0904 17:33:19.706461 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dwclf" podUID="c22bd57a-c877-45b8-9da9-64ec19c9aeb3" Sep 4 17:33:19.784436 kubelet[2521]: E0904 17:33:19.784036 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:19.785809 containerd[1436]: time="2024-09-04T17:33:19.785762584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:33:21.706588 kubelet[2521]: E0904 17:33:21.706153 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dwclf" podUID="c22bd57a-c877-45b8-9da9-64ec19c9aeb3" Sep 4 17:33:22.004138 containerd[1436]: time="2024-09-04T17:33:22.003290619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:22.004138 containerd[1436]: time="2024-09-04T17:33:22.003802140Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887"
Sep 4 17:33:22.004961 containerd[1436]: time="2024-09-04T17:33:22.004902241Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:33:22.008421 containerd[1436]: time="2024-09-04T17:33:22.008365702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:33:22.009257 containerd[1436]: time="2024-09-04T17:33:22.009218304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 2.223408909s"
Sep 4 17:33:22.009316 containerd[1436]: time="2024-09-04T17:33:22.009269396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\""
Sep 4 17:33:22.012241 containerd[1436]: time="2024-09-04T17:33:22.012202092Z" level=info msg="CreateContainer within sandbox \"3386d9082754501e2f35f137961ed4af4109aa77a6fc6cb6e823326dbddd5086\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 4 17:33:22.031013 containerd[1436]: time="2024-09-04T17:33:22.030948897Z" level=info msg="CreateContainer within sandbox \"3386d9082754501e2f35f137961ed4af4109aa77a6fc6cb6e823326dbddd5086\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"39bb4519d986b32ccde87bf1b037a21afba19c3b5475e0cd655d4cfc552aff8b\""
Sep 4 17:33:22.032278 containerd[1436]: time="2024-09-04T17:33:22.031815743Z" level=info msg="StartContainer for \"39bb4519d986b32ccde87bf1b037a21afba19c3b5475e0cd655d4cfc552aff8b\""
Sep 4 17:33:22.071789 systemd[1]: Started cri-containerd-39bb4519d986b32ccde87bf1b037a21afba19c3b5475e0cd655d4cfc552aff8b.scope - libcontainer container 39bb4519d986b32ccde87bf1b037a21afba19c3b5475e0cd655d4cfc552aff8b.
Sep 4 17:33:22.172498 containerd[1436]: time="2024-09-04T17:33:22.172437490Z" level=info msg="StartContainer for \"39bb4519d986b32ccde87bf1b037a21afba19c3b5475e0cd655d4cfc552aff8b\" returns successfully"
Sep 4 17:33:22.669532 systemd[1]: cri-containerd-39bb4519d986b32ccde87bf1b037a21afba19c3b5475e0cd655d4cfc552aff8b.scope: Deactivated successfully.
Sep 4 17:33:22.689010 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-39bb4519d986b32ccde87bf1b037a21afba19c3b5475e0cd655d4cfc552aff8b-rootfs.mount: Deactivated successfully.
Sep 4 17:33:22.708804 kubelet[2521]: I0904 17:33:22.708505 2521 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Sep 4 17:33:22.733941 containerd[1436]: time="2024-09-04T17:33:22.733840780Z" level=info msg="shim disconnected" id=39bb4519d986b32ccde87bf1b037a21afba19c3b5475e0cd655d4cfc552aff8b namespace=k8s.io
Sep 4 17:33:22.733941 containerd[1436]: time="2024-09-04T17:33:22.733932001Z" level=warning msg="cleaning up after shim disconnected" id=39bb4519d986b32ccde87bf1b037a21afba19c3b5475e0cd655d4cfc552aff8b namespace=k8s.io
Sep 4 17:33:22.733941 containerd[1436]: time="2024-09-04T17:33:22.733942444Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:33:22.748225 kubelet[2521]: I0904 17:33:22.747884 2521 topology_manager.go:215] "Topology Admit Handler" podUID="8d8af27a-0105-4c20-9900-7445cf2f532f" podNamespace="calico-system" podName="calico-kube-controllers-545cd5b56d-dcn5l"
Sep 4 17:33:22.751655 kubelet[2521]: I0904 17:33:22.750713 2521 topology_manager.go:215] "Topology Admit Handler" podUID="ea3fe696-b784-45a8-b851-82c8d5cb101a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rvpjc"
Sep 4 17:33:22.752928 kubelet[2521]: I0904 17:33:22.752864 2521 topology_manager.go:215] "Topology Admit Handler" podUID="a2630a53-8a70-4d90-bf19-8204dcfa5313" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nkntq"
Sep 4 17:33:22.762148 systemd[1]: Created slice kubepods-besteffort-pod8d8af27a_0105_4c20_9900_7445cf2f532f.slice - libcontainer container kubepods-besteffort-pod8d8af27a_0105_4c20_9900_7445cf2f532f.slice.
Sep 4 17:33:22.772714 systemd[1]: Created slice kubepods-burstable-podea3fe696_b784_45a8_b851_82c8d5cb101a.slice - libcontainer container kubepods-burstable-podea3fe696_b784_45a8_b851_82c8d5cb101a.slice.
Sep 4 17:33:22.780416 systemd[1]: Created slice kubepods-burstable-poda2630a53_8a70_4d90_bf19_8204dcfa5313.slice - libcontainer container kubepods-burstable-poda2630a53_8a70_4d90_bf19_8204dcfa5313.slice.
Sep 4 17:33:22.798264 kubelet[2521]: I0904 17:33:22.798211 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea3fe696-b784-45a8-b851-82c8d5cb101a-config-volume\") pod \"coredns-7db6d8ff4d-rvpjc\" (UID: \"ea3fe696-b784-45a8-b851-82c8d5cb101a\") " pod="kube-system/coredns-7db6d8ff4d-rvpjc"
Sep 4 17:33:22.798264 kubelet[2521]: I0904 17:33:22.798262 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2630a53-8a70-4d90-bf19-8204dcfa5313-config-volume\") pod \"coredns-7db6d8ff4d-nkntq\" (UID: \"a2630a53-8a70-4d90-bf19-8204dcfa5313\") " pod="kube-system/coredns-7db6d8ff4d-nkntq"
Sep 4 17:33:22.798427 kubelet[2521]: I0904 17:33:22.798283 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d8af27a-0105-4c20-9900-7445cf2f532f-tigera-ca-bundle\") pod \"calico-kube-controllers-545cd5b56d-dcn5l\" (UID: \"8d8af27a-0105-4c20-9900-7445cf2f532f\") " pod="calico-system/calico-kube-controllers-545cd5b56d-dcn5l"
Sep 4 17:33:22.798427 kubelet[2521]: I0904 17:33:22.798306 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4d9c\" (UniqueName: \"kubernetes.io/projected/a2630a53-8a70-4d90-bf19-8204dcfa5313-kube-api-access-h4d9c\") pod \"coredns-7db6d8ff4d-nkntq\" (UID: \"a2630a53-8a70-4d90-bf19-8204dcfa5313\") " pod="kube-system/coredns-7db6d8ff4d-nkntq"
Sep 4 17:33:22.798427 kubelet[2521]: I0904 17:33:22.798327 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52frp\" (UniqueName: \"kubernetes.io/projected/ea3fe696-b784-45a8-b851-82c8d5cb101a-kube-api-access-52frp\") pod \"coredns-7db6d8ff4d-rvpjc\" (UID: \"ea3fe696-b784-45a8-b851-82c8d5cb101a\") " pod="kube-system/coredns-7db6d8ff4d-rvpjc"
Sep 4 17:33:22.798427 kubelet[2521]: I0904 17:33:22.798343 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fktcn\" (UniqueName: \"kubernetes.io/projected/8d8af27a-0105-4c20-9900-7445cf2f532f-kube-api-access-fktcn\") pod \"calico-kube-controllers-545cd5b56d-dcn5l\" (UID: \"8d8af27a-0105-4c20-9900-7445cf2f532f\") " pod="calico-system/calico-kube-controllers-545cd5b56d-dcn5l"
Sep 4 17:33:22.807770 kubelet[2521]: E0904 17:33:22.807740 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:33:22.808517 containerd[1436]: time="2024-09-04T17:33:22.808482320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\""
Sep 4 17:33:23.068835 containerd[1436]: time="2024-09-04T17:33:23.068722127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-545cd5b56d-dcn5l,Uid:8d8af27a-0105-4c20-9900-7445cf2f532f,Namespace:calico-system,Attempt:0,}"
Sep 4 17:33:23.081274 kubelet[2521]: E0904 17:33:23.078032 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:33:23.081405 containerd[1436]: time="2024-09-04T17:33:23.078724905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rvpjc,Uid:ea3fe696-b784-45a8-b851-82c8d5cb101a,Namespace:kube-system,Attempt:0,}"
Sep 4 17:33:23.088721 kubelet[2521]: E0904 17:33:23.088424 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:33:23.088956 containerd[1436]: time="2024-09-04T17:33:23.088917726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nkntq,Uid:a2630a53-8a70-4d90-bf19-8204dcfa5313,Namespace:kube-system,Attempt:0,}"
Sep 4 17:33:23.413326 containerd[1436]: time="2024-09-04T17:33:23.413233373Z" level=error msg="Failed to destroy network for sandbox \"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:33:23.417887 containerd[1436]: time="2024-09-04T17:33:23.417770816Z" level=error msg="Failed to destroy network for sandbox \"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:33:23.419298 containerd[1436]: time="2024-09-04T17:33:23.418288814Z" level=error msg="encountered an error cleaning up failed sandbox \"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:33:23.419298 containerd[1436]: time="2024-09-04T17:33:23.418357030Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nkntq,Uid:a2630a53-8a70-4d90-bf19-8204dcfa5313,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:33:23.427588 kubelet[2521]: E0904 17:33:23.425769 2521 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:33:23.427588 kubelet[2521]: E0904 17:33:23.426922 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nkntq"
Sep 4 17:33:23.427588 kubelet[2521]: E0904 17:33:23.426954 2521 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nkntq"
Sep 4 17:33:23.427778 containerd[1436]: time="2024-09-04T17:33:23.426100769Z" level=error msg="encountered an error cleaning up failed sandbox \"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:33:23.427814 kubelet[2521]: E0904 17:33:23.427006 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nkntq_kube-system(a2630a53-8a70-4d90-bf19-8204dcfa5313)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nkntq_kube-system(a2630a53-8a70-4d90-bf19-8204dcfa5313)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nkntq" podUID="a2630a53-8a70-4d90-bf19-8204dcfa5313"
Sep 4 17:33:23.428725 containerd[1436]: time="2024-09-04T17:33:23.428652795Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rvpjc,Uid:ea3fe696-b784-45a8-b851-82c8d5cb101a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:33:23.429127 kubelet[2521]: E0904 17:33:23.428929 2521 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:33:23.429287 kubelet[2521]: E0904 17:33:23.429265 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rvpjc"
Sep 4 17:33:23.429474 kubelet[2521]: E0904 17:33:23.429357 2521 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rvpjc"
Sep 4 17:33:23.429474 kubelet[2521]: E0904 17:33:23.429441 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-rvpjc_kube-system(ea3fe696-b784-45a8-b851-82c8d5cb101a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-rvpjc_kube-system(ea3fe696-b784-45a8-b851-82c8d5cb101a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rvpjc" podUID="ea3fe696-b784-45a8-b851-82c8d5cb101a"
Sep 4 17:33:23.433984 containerd[1436]: time="2024-09-04T17:33:23.433912683Z" level=error msg="Failed to destroy network for sandbox \"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:33:23.434545 containerd[1436]: time="2024-09-04T17:33:23.434292850Z" level=error msg="encountered an error cleaning up failed sandbox \"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:33:23.434545 containerd[1436]: time="2024-09-04T17:33:23.434351144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-545cd5b56d-dcn5l,Uid:8d8af27a-0105-4c20-9900-7445cf2f532f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:33:23.434674 kubelet[2521]: E0904 17:33:23.434552 2521 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:33:23.434674 kubelet[2521]: E0904 17:33:23.434661 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-545cd5b56d-dcn5l"
Sep 4 17:33:23.434742 kubelet[2521]: E0904 17:33:23.434680 2521 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-545cd5b56d-dcn5l"
Sep 4 17:33:23.434767 kubelet[2521]: E0904 17:33:23.434730 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-545cd5b56d-dcn5l_calico-system(8d8af27a-0105-4c20-9900-7445cf2f532f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-545cd5b56d-dcn5l_calico-system(8d8af27a-0105-4c20-9900-7445cf2f532f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-545cd5b56d-dcn5l" podUID="8d8af27a-0105-4c20-9900-7445cf2f532f"
Sep 4 17:33:23.712795 systemd[1]: Created slice kubepods-besteffort-podc22bd57a_c877_45b8_9da9_64ec19c9aeb3.slice - libcontainer container kubepods-besteffort-podc22bd57a_c877_45b8_9da9_64ec19c9aeb3.slice.
Sep 4 17:33:23.725788 containerd[1436]: time="2024-09-04T17:33:23.725745910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dwclf,Uid:c22bd57a-c877-45b8-9da9-64ec19c9aeb3,Namespace:calico-system,Attempt:0,}"
Sep 4 17:33:23.813289 kubelet[2521]: I0904 17:33:23.813253 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953"
Sep 4 17:33:23.815246 containerd[1436]: time="2024-09-04T17:33:23.814083839Z" level=info msg="StopPodSandbox for \"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\""
Sep 4 17:33:23.815246 containerd[1436]: time="2024-09-04T17:33:23.814291007Z" level=info msg="Ensure that sandbox 92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953 in task-service has been cleanup successfully"
Sep 4 17:33:23.826654 kubelet[2521]: I0904 17:33:23.824663 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374"
Sep 4 17:33:23.826783 containerd[1436]: time="2024-09-04T17:33:23.825418883Z" level=info msg="StopPodSandbox for \"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\""
Sep 4 17:33:23.826783 containerd[1436]: time="2024-09-04T17:33:23.825644655Z" level=info msg="Ensure that sandbox 486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374 in task-service has been cleanup successfully"
Sep 4 17:33:23.831744 kubelet[2521]: I0904 17:33:23.828666 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68"
Sep 4 17:33:23.831868 containerd[1436]: time="2024-09-04T17:33:23.831194609Z" level=info msg="StopPodSandbox for \"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\""
Sep 4 17:33:23.831868 containerd[1436]: time="2024-09-04T17:33:23.831405418Z" level=info msg="Ensure that sandbox 527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68 in task-service has been cleanup successfully"
Sep 4 17:33:23.866609 containerd[1436]: time="2024-09-04T17:33:23.866527124Z" level=error msg="StopPodSandbox for \"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\" failed" error="failed to destroy network for sandbox \"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:33:23.873071 kubelet[2521]: E0904 17:33:23.872993 2521 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953"
Sep 4 17:33:23.873498 kubelet[2521]: E0904 17:33:23.873252 2521 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953"}
Sep 4 17:33:23.873498 kubelet[2521]: E0904 17:33:23.873401 2521 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a2630a53-8a70-4d90-bf19-8204dcfa5313\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:33:23.873498 kubelet[2521]: E0904 17:33:23.873434 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a2630a53-8a70-4d90-bf19-8204dcfa5313\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nkntq" podUID="a2630a53-8a70-4d90-bf19-8204dcfa5313"
Sep 4 17:33:23.883914 containerd[1436]: time="2024-09-04T17:33:23.883857425Z" level=error msg="StopPodSandbox for \"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\" failed" error="failed to destroy network for sandbox \"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:33:23.884131 kubelet[2521]: E0904 17:33:23.884091 2521 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374"
Sep 4 17:33:23.884184 kubelet[2521]: E0904 17:33:23.884143 2521 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374"}
Sep 4 17:33:23.884184 kubelet[2521]: E0904 17:33:23.884176 2521 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ea3fe696-b784-45a8-b851-82c8d5cb101a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:33:23.884300 kubelet[2521]: E0904 17:33:23.884200 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ea3fe696-b784-45a8-b851-82c8d5cb101a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rvpjc" podUID="ea3fe696-b784-45a8-b851-82c8d5cb101a"
Sep 4 17:33:23.897735 containerd[1436]: time="2024-09-04T17:33:23.896812840Z" level=error msg="StopPodSandbox for \"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\" failed" error="failed to destroy network for sandbox \"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:33:23.897847 kubelet[2521]: E0904 17:33:23.897020 2521 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68"
Sep 4 17:33:23.897847 kubelet[2521]: E0904 17:33:23.897067 2521 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68"}
Sep 4 17:33:23.897847 kubelet[2521]: E0904 17:33:23.897099 2521 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8d8af27a-0105-4c20-9900-7445cf2f532f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:33:23.897847 kubelet[2521]: E0904 17:33:23.897131 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8d8af27a-0105-4c20-9900-7445cf2f532f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-545cd5b56d-dcn5l" podUID="8d8af27a-0105-4c20-9900-7445cf2f532f"
Sep 4 17:33:23.898897 containerd[1436]: time="2024-09-04T17:33:23.898846587Z" level=error msg="Failed to destroy network for sandbox \"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:33:23.899205 containerd[1436]: time="2024-09-04T17:33:23.899176663Z" level=error msg="encountered an error cleaning up failed sandbox \"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:33:23.899252 containerd[1436]: time="2024-09-04T17:33:23.899231996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dwclf,Uid:c22bd57a-c877-45b8-9da9-64ec19c9aeb3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:33:23.899513 kubelet[2521]: E0904 17:33:23.899451 2521 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:33:23.899560 kubelet[2521]: E0904 17:33:23.899519 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dwclf"
Sep 4 17:33:23.899560 kubelet[2521]: E0904 17:33:23.899541 2521 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dwclf"
Sep 4 17:33:23.899655 kubelet[2521]: E0904 17:33:23.899599 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dwclf_calico-system(c22bd57a-c877-45b8-9da9-64ec19c9aeb3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dwclf_calico-system(c22bd57a-c877-45b8-9da9-64ec19c9aeb3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dwclf" podUID="c22bd57a-c877-45b8-9da9-64ec19c9aeb3"
Sep 4 17:33:24.030569 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374-shm.mount: Deactivated successfully.
Sep 4 17:33:24.031613 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68-shm.mount: Deactivated successfully.
Sep 4 17:33:24.831406 kubelet[2521]: I0904 17:33:24.831242 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7"
Sep 4 17:33:24.832041 containerd[1436]: time="2024-09-04T17:33:24.831923965Z" level=info msg="StopPodSandbox for \"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\""
Sep 4 17:33:24.832266 containerd[1436]: time="2024-09-04T17:33:24.832138412Z" level=info msg="Ensure that sandbox 033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7 in task-service has been cleanup successfully"
Sep 4 17:33:24.866453 containerd[1436]: time="2024-09-04T17:33:24.866190995Z" level=error msg="StopPodSandbox for \"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\" failed" error="failed to destroy network for sandbox \"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:33:24.866797 kubelet[2521]: E0904 17:33:24.866669 2521 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7"
Sep 4 17:33:24.866855 kubelet[2521]: E0904 17:33:24.866809 2521 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7"}
Sep 4 17:33:24.866881 kubelet[2521]: E0904 17:33:24.866857 2521 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c22bd57a-c877-45b8-9da9-64ec19c9aeb3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:33:24.866934 kubelet[2521]: E0904 17:33:24.866879 2521 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c22bd57a-c877-45b8-9da9-64ec19c9aeb3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dwclf" podUID="c22bd57a-c877-45b8-9da9-64ec19c9aeb3"
Sep 4 17:33:25.410004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount985022550.mount: Deactivated successfully.
Sep 4 17:33:25.672225 containerd[1436]: time="2024-09-04T17:33:25.671761219Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:33:25.673499 containerd[1436]: time="2024-09-04T17:33:25.673448944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300"
Sep 4 17:33:25.674475 containerd[1436]: time="2024-09-04T17:33:25.674437117Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:33:25.696494 containerd[1436]: time="2024-09-04T17:33:25.696421709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:33:25.697255 containerd[1436]: time="2024-09-04T17:33:25.697206958Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 2.888677067s"
Sep 4 17:33:25.697255 containerd[1436]: time="2024-09-04T17:33:25.697250488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\""
Sep 4 17:33:25.709799 containerd[1436]: time="2024-09-04T17:33:25.709743708Z" level=info msg="CreateContainer within sandbox \"3386d9082754501e2f35f137961ed4af4109aa77a6fc6cb6e823326dbddd5086\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Sep 4 17:33:25.732862 containerd[1436]: time="2024-09-04T17:33:25.732781647Z" level=info msg="CreateContainer within sandbox \"3386d9082754501e2f35f137961ed4af4109aa77a6fc6cb6e823326dbddd5086\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a0af011d705d81ba9cd8ad70976c9c773376b7057407840d66b60800ac9dd9b8\""
Sep 4 17:33:25.735260 containerd[1436]: time="2024-09-04T17:33:25.735195049Z" level=info msg="StartContainer for \"a0af011d705d81ba9cd8ad70976c9c773376b7057407840d66b60800ac9dd9b8\""
Sep 4 17:33:25.805802 systemd[1]: Started cri-containerd-a0af011d705d81ba9cd8ad70976c9c773376b7057407840d66b60800ac9dd9b8.scope - libcontainer container a0af011d705d81ba9cd8ad70976c9c773376b7057407840d66b60800ac9dd9b8.
Sep 4 17:33:25.901863 containerd[1436]: time="2024-09-04T17:33:25.901789373Z" level=info msg="StartContainer for \"a0af011d705d81ba9cd8ad70976c9c773376b7057407840d66b60800ac9dd9b8\" returns successfully"
Sep 4 17:33:26.049972 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Sep 4 17:33:26.050169 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Sep 4 17:33:26.840844 kubelet[2521]: E0904 17:33:26.840532 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:33:26.857386 kubelet[2521]: I0904 17:33:26.857319 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l6ntn" podStartSLOduration=2.260328426 podStartE2EDuration="11.857300334s" podCreationTimestamp="2024-09-04 17:33:15 +0000 UTC" firstStartedPulling="2024-09-04 17:33:16.101414545 +0000 UTC m=+24.506535649" lastFinishedPulling="2024-09-04 17:33:25.698386453 +0000 UTC m=+34.103507557" observedRunningTime="2024-09-04 17:33:26.856254714 +0000 UTC m=+35.261375818" watchObservedRunningTime="2024-09-04 17:33:26.857300334 +0000 UTC m=+35.262421438"
Sep 4 17:33:27.850296 kubelet[2521]: E0904 17:33:27.850248 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:33:29.111753 systemd[1]: Started sshd@7-10.0.0.119:22-10.0.0.1:45588.service - OpenSSH per-connection server daemon (10.0.0.1:45588).
Sep 4 17:33:29.157861 sshd[3745]: Accepted publickey for core from 10.0.0.1 port 45588 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep 4 17:33:29.159627 sshd[3745]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:33:29.163844 systemd-logind[1418]: New session 8 of user core.
Sep 4 17:33:29.173786 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 17:33:29.389802 sshd[3745]: pam_unix(sshd:session): session closed for user core
Sep 4 17:33:29.394348 systemd[1]: sshd@7-10.0.0.119:22-10.0.0.1:45588.service: Deactivated successfully.
Sep 4 17:33:29.396112 systemd[1]: session-8.scope: Deactivated successfully.
Sep 4 17:33:29.396675 systemd-logind[1418]: Session 8 logged out. Waiting for processes to exit.
Sep 4 17:33:29.397504 systemd-logind[1418]: Removed session 8.
Sep 4 17:33:34.405225 systemd[1]: Started sshd@8-10.0.0.119:22-10.0.0.1:45100.service - OpenSSH per-connection server daemon (10.0.0.1:45100).
Sep 4 17:33:34.449441 sshd[3890]: Accepted publickey for core from 10.0.0.1 port 45100 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA
Sep 4 17:33:34.451111 sshd[3890]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:33:34.456141 systemd-logind[1418]: New session 9 of user core.
Sep 4 17:33:34.468778 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 4 17:33:34.670985 sshd[3890]: pam_unix(sshd:session): session closed for user core
Sep 4 17:33:34.674546 systemd[1]: sshd@8-10.0.0.119:22-10.0.0.1:45100.service: Deactivated successfully.
Sep 4 17:33:34.676257 systemd[1]: session-9.scope: Deactivated successfully.
Sep 4 17:33:34.677262 systemd-logind[1418]: Session 9 logged out. Waiting for processes to exit.
Sep 4 17:33:34.678192 systemd-logind[1418]: Removed session 9.
Sep 4 17:33:35.707285 containerd[1436]: time="2024-09-04T17:33:35.707231665Z" level=info msg="StopPodSandbox for \"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\""
Sep 4 17:33:35.871049 containerd[1436]: 2024-09-04 17:33:35.777 [INFO][3947] k8s.go 608: Cleaning up netns ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374"
Sep 4 17:33:35.871049 containerd[1436]: 2024-09-04 17:33:35.778 [INFO][3947] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" iface="eth0" netns="/var/run/netns/cni-eab63835-a2c1-f74f-7967-f5779b8c9e72"
Sep 4 17:33:35.871049 containerd[1436]: 2024-09-04 17:33:35.778 [INFO][3947] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" iface="eth0" netns="/var/run/netns/cni-eab63835-a2c1-f74f-7967-f5779b8c9e72"
Sep 4 17:33:35.871049 containerd[1436]: 2024-09-04 17:33:35.779 [INFO][3947] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" iface="eth0" netns="/var/run/netns/cni-eab63835-a2c1-f74f-7967-f5779b8c9e72"
Sep 4 17:33:35.871049 containerd[1436]: 2024-09-04 17:33:35.779 [INFO][3947] k8s.go 615: Releasing IP address(es) ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374"
Sep 4 17:33:35.871049 containerd[1436]: 2024-09-04 17:33:35.779 [INFO][3947] utils.go 188: Calico CNI releasing IP address ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374"
Sep 4 17:33:35.871049 containerd[1436]: 2024-09-04 17:33:35.849 [INFO][3954] ipam_plugin.go 417: Releasing address using handleID ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" HandleID="k8s-pod-network.486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" Workload="localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0"
Sep 4 17:33:35.871049 containerd[1436]: 2024-09-04 17:33:35.849 [INFO][3954] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:33:35.871049 containerd[1436]: 2024-09-04 17:33:35.849 [INFO][3954] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:33:35.871049 containerd[1436]: 2024-09-04 17:33:35.863 [WARNING][3954] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" HandleID="k8s-pod-network.486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" Workload="localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0"
Sep 4 17:33:35.871049 containerd[1436]: 2024-09-04 17:33:35.863 [INFO][3954] ipam_plugin.go 445: Releasing address using workloadID ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" HandleID="k8s-pod-network.486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" Workload="localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0"
Sep 4 17:33:35.871049 containerd[1436]: 2024-09-04 17:33:35.865 [INFO][3954] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:33:35.871049 containerd[1436]: 2024-09-04 17:33:35.867 [INFO][3947] k8s.go 621: Teardown processing complete. ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374"
Sep 4 17:33:35.871589 containerd[1436]: time="2024-09-04T17:33:35.871476999Z" level=info msg="TearDown network for sandbox \"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\" successfully"
Sep 4 17:33:35.871589 containerd[1436]: time="2024-09-04T17:33:35.871525287Z" level=info msg="StopPodSandbox for \"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\" returns successfully"
Sep 4 17:33:35.875105 kubelet[2521]: E0904 17:33:35.874898 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:33:35.875505 containerd[1436]: time="2024-09-04T17:33:35.875322930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rvpjc,Uid:ea3fe696-b784-45a8-b851-82c8d5cb101a,Namespace:kube-system,Attempt:1,}"
Sep 4 17:33:35.877056 systemd[1]: run-netns-cni\x2deab63835\x2da2c1\x2df74f\x2d7967\x2df5779b8c9e72.mount: Deactivated successfully.
Sep 4 17:33:36.051951 systemd-networkd[1375]: cali230228c82a3: Link UP
Sep 4 17:33:36.055985 systemd-networkd[1375]: cali230228c82a3: Gained carrier
Sep 4 17:33:36.077237 containerd[1436]: 2024-09-04 17:33:35.927 [INFO][3974] utils.go 100: File /var/lib/calico/mtu does not exist
Sep 4 17:33:36.077237 containerd[1436]: 2024-09-04 17:33:35.941 [INFO][3974] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0 coredns-7db6d8ff4d- kube-system ea3fe696-b784-45a8-b851-82c8d5cb101a 758 0 2024-09-04 17:33:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-rvpjc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali230228c82a3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rvpjc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rvpjc-"
Sep 4 17:33:36.077237 containerd[1436]: 2024-09-04 17:33:35.941 [INFO][3974] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rvpjc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0"
Sep 4 17:33:36.077237 containerd[1436]: 2024-09-04 17:33:35.989 [INFO][3997] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab" HandleID="k8s-pod-network.e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab" Workload="localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0"
Sep 4 17:33:36.077237 containerd[1436]: 2024-09-04 17:33:36.001 [INFO][3997] ipam_plugin.go 270: Auto assigning IP ContainerID="e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab" HandleID="k8s-pod-network.e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab" Workload="localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000300730), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-rvpjc", "timestamp":"2024-09-04 17:33:35.989301111 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 4 17:33:36.077237 containerd[1436]: 2024-09-04 17:33:36.001 [INFO][3997] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:33:36.077237 containerd[1436]: 2024-09-04 17:33:36.001 [INFO][3997] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:33:36.077237 containerd[1436]: 2024-09-04 17:33:36.002 [INFO][3997] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 4 17:33:36.077237 containerd[1436]: 2024-09-04 17:33:36.005 [INFO][3997] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab" host="localhost"
Sep 4 17:33:36.077237 containerd[1436]: 2024-09-04 17:33:36.011 [INFO][3997] ipam.go 372: Looking up existing affinities for host host="localhost"
Sep 4 17:33:36.077237 containerd[1436]: 2024-09-04 17:33:36.015 [INFO][3997] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 4 17:33:36.077237 containerd[1436]: 2024-09-04 17:33:36.017 [INFO][3997] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 4 17:33:36.077237 containerd[1436]: 2024-09-04 17:33:36.019 [INFO][3997] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 4 17:33:36.077237 containerd[1436]: 2024-09-04 17:33:36.019 [INFO][3997] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab" host="localhost"
Sep 4 17:33:36.077237 containerd[1436]: 2024-09-04 17:33:36.021 [INFO][3997] ipam.go 1685: Creating new handle: k8s-pod-network.e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab
Sep 4 17:33:36.077237 containerd[1436]: 2024-09-04 17:33:36.024 [INFO][3997] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab" host="localhost"
Sep 4 17:33:36.077237 containerd[1436]: 2024-09-04 17:33:36.033 [INFO][3997] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab" host="localhost"
Sep 4 17:33:36.077237 containerd[1436]: 2024-09-04 17:33:36.033 [INFO][3997] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab" host="localhost"
Sep 4 17:33:36.077237 containerd[1436]: 2024-09-04 17:33:36.034 [INFO][3997] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:33:36.077237 containerd[1436]: 2024-09-04 17:33:36.034 [INFO][3997] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab" HandleID="k8s-pod-network.e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab" Workload="localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0"
Sep 4 17:33:36.077981 containerd[1436]: 2024-09-04 17:33:36.038 [INFO][3974] k8s.go 386: Populated endpoint ContainerID="e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rvpjc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ea3fe696-b784-45a8-b851-82c8d5cb101a", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-rvpjc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali230228c82a3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:33:36.077981 containerd[1436]: 2024-09-04 17:33:36.039 [INFO][3974] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rvpjc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0"
Sep 4 17:33:36.077981 containerd[1436]: 2024-09-04 17:33:36.039 [INFO][3974] dataplane_linux.go 68: Setting the host side veth name to cali230228c82a3 ContainerID="e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rvpjc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0"
Sep 4 17:33:36.077981 containerd[1436]: 2024-09-04 17:33:36.055 [INFO][3974] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rvpjc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0"
Sep 4 17:33:36.077981 containerd[1436]: 2024-09-04 17:33:36.056 [INFO][3974] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rvpjc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ea3fe696-b784-45a8-b851-82c8d5cb101a", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab", Pod:"coredns-7db6d8ff4d-rvpjc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali230228c82a3", MAC:"da:06:58:e4:90:3b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:33:36.077981 containerd[1436]: 2024-09-04 17:33:36.071 [INFO][3974] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rvpjc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0"
Sep 4 17:33:36.094162 containerd[1436]: time="2024-09-04T17:33:36.093460770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:33:36.094162 containerd[1436]: time="2024-09-04T17:33:36.094015982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:33:36.094162 containerd[1436]: time="2024-09-04T17:33:36.094032905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:33:36.094162 containerd[1436]: time="2024-09-04T17:33:36.094049708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:33:36.115753 systemd[1]: Started cri-containerd-e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab.scope - libcontainer container e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab.
Sep 4 17:33:36.125633 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 4 17:33:36.142498 containerd[1436]: time="2024-09-04T17:33:36.142456149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rvpjc,Uid:ea3fe696-b784-45a8-b851-82c8d5cb101a,Namespace:kube-system,Attempt:1,} returns sandbox id \"e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab\""
Sep 4 17:33:36.143498 kubelet[2521]: E0904 17:33:36.143247 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:33:36.146487 containerd[1436]: time="2024-09-04T17:33:36.146452973Z" level=info msg="CreateContainer within sandbox \"e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 4 17:33:36.164847 containerd[1436]: time="2024-09-04T17:33:36.164796140Z" level=info msg="CreateContainer within sandbox \"e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6d4fbb96c3e6ff3e6791fdfc1ddf1e3fc540c11692244d06953eea5cfe21441\""
Sep 4 17:33:36.165335 containerd[1436]: time="2024-09-04T17:33:36.165301864Z" level=info msg="StartContainer for \"b6d4fbb96c3e6ff3e6791fdfc1ddf1e3fc540c11692244d06953eea5cfe21441\""
Sep 4 17:33:36.197089 systemd[1]: Started cri-containerd-b6d4fbb96c3e6ff3e6791fdfc1ddf1e3fc540c11692244d06953eea5cfe21441.scope - libcontainer container b6d4fbb96c3e6ff3e6791fdfc1ddf1e3fc540c11692244d06953eea5cfe21441.
Sep 4 17:33:36.219933 containerd[1436]: time="2024-09-04T17:33:36.219887692Z" level=info msg="StartContainer for \"b6d4fbb96c3e6ff3e6791fdfc1ddf1e3fc540c11692244d06953eea5cfe21441\" returns successfully"
Sep 4 17:33:36.706368 containerd[1436]: time="2024-09-04T17:33:36.706291091Z" level=info msg="StopPodSandbox for \"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\""
Sep 4 17:33:36.706368 containerd[1436]: time="2024-09-04T17:33:36.706336859Z" level=info msg="StopPodSandbox for \"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\""
Sep 4 17:33:36.795005 containerd[1436]: 2024-09-04 17:33:36.755 [INFO][4133] k8s.go 608: Cleaning up netns ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68"
Sep 4 17:33:36.795005 containerd[1436]: 2024-09-04 17:33:36.756 [INFO][4133] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" iface="eth0" netns="/var/run/netns/cni-746c560a-3b70-793a-9493-25893c91e8f1"
Sep 4 17:33:36.795005 containerd[1436]: 2024-09-04 17:33:36.757 [INFO][4133] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" iface="eth0" netns="/var/run/netns/cni-746c560a-3b70-793a-9493-25893c91e8f1"
Sep 4 17:33:36.795005 containerd[1436]: 2024-09-04 17:33:36.757 [INFO][4133] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" iface="eth0" netns="/var/run/netns/cni-746c560a-3b70-793a-9493-25893c91e8f1"
Sep 4 17:33:36.795005 containerd[1436]: 2024-09-04 17:33:36.758 [INFO][4133] k8s.go 615: Releasing IP address(es) ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68"
Sep 4 17:33:36.795005 containerd[1436]: 2024-09-04 17:33:36.758 [INFO][4133] utils.go 188: Calico CNI releasing IP address ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68"
Sep 4 17:33:36.795005 containerd[1436]: 2024-09-04 17:33:36.779 [INFO][4147] ipam_plugin.go 417: Releasing address using handleID ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" HandleID="k8s-pod-network.527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" Workload="localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0"
Sep 4 17:33:36.795005 containerd[1436]: 2024-09-04 17:33:36.779 [INFO][4147] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:33:36.795005 containerd[1436]: 2024-09-04 17:33:36.779 [INFO][4147] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:33:36.795005 containerd[1436]: 2024-09-04 17:33:36.788 [WARNING][4147] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" HandleID="k8s-pod-network.527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" Workload="localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0"
Sep 4 17:33:36.795005 containerd[1436]: 2024-09-04 17:33:36.788 [INFO][4147] ipam_plugin.go 445: Releasing address using workloadID ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" HandleID="k8s-pod-network.527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" Workload="localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0"
Sep 4 17:33:36.795005 containerd[1436]: 2024-09-04 17:33:36.790 [INFO][4147] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:33:36.795005 containerd[1436]: 2024-09-04 17:33:36.791 [INFO][4133] k8s.go 621: Teardown processing complete. ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68"
Sep 4 17:33:36.795856 containerd[1436]: time="2024-09-04T17:33:36.795141490Z" level=info msg="TearDown network for sandbox \"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\" successfully"
Sep 4 17:33:36.795856 containerd[1436]: time="2024-09-04T17:33:36.795169175Z" level=info msg="StopPodSandbox for \"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\" returns successfully"
Sep 4 17:33:36.796569 containerd[1436]: time="2024-09-04T17:33:36.796188464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-545cd5b56d-dcn5l,Uid:8d8af27a-0105-4c20-9900-7445cf2f532f,Namespace:calico-system,Attempt:1,}"
Sep 4 17:33:36.807047 containerd[1436]: 2024-09-04 17:33:36.757 [INFO][4132] k8s.go 608: Cleaning up netns ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953"
Sep 4 17:33:36.807047 containerd[1436]: 2024-09-04 17:33:36.757 [INFO][4132] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" iface="eth0" netns="/var/run/netns/cni-13e4bf57-e9fc-ac3a-23cc-4523018c96c5"
Sep 4 17:33:36.807047 containerd[1436]: 2024-09-04 17:33:36.757 [INFO][4132] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" iface="eth0" netns="/var/run/netns/cni-13e4bf57-e9fc-ac3a-23cc-4523018c96c5"
Sep 4 17:33:36.807047 containerd[1436]: 2024-09-04 17:33:36.758 [INFO][4132] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" iface="eth0" netns="/var/run/netns/cni-13e4bf57-e9fc-ac3a-23cc-4523018c96c5"
Sep 4 17:33:36.807047 containerd[1436]: 2024-09-04 17:33:36.758 [INFO][4132] k8s.go 615: Releasing IP address(es) ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953"
Sep 4 17:33:36.807047 containerd[1436]: 2024-09-04 17:33:36.758 [INFO][4132] utils.go 188: Calico CNI releasing IP address ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953"
Sep 4 17:33:36.807047 containerd[1436]: 2024-09-04 17:33:36.780 [INFO][4148] ipam_plugin.go 417: Releasing address using handleID ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" HandleID="k8s-pod-network.92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" Workload="localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0"
Sep 4 17:33:36.807047 containerd[1436]: 2024-09-04 17:33:36.780 [INFO][4148] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:33:36.807047 containerd[1436]: 2024-09-04 17:33:36.790 [INFO][4148] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:33:36.807047 containerd[1436]: 2024-09-04 17:33:36.800 [WARNING][4148] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" HandleID="k8s-pod-network.92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" Workload="localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0"
Sep 4 17:33:36.807047 containerd[1436]: 2024-09-04 17:33:36.800 [INFO][4148] ipam_plugin.go 445: Releasing address using workloadID ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" HandleID="k8s-pod-network.92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" Workload="localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0"
Sep 4 17:33:36.807047 containerd[1436]: 2024-09-04 17:33:36.802 [INFO][4148] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:33:36.807047 containerd[1436]: 2024-09-04 17:33:36.804 [INFO][4132] k8s.go 621: Teardown processing complete. ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953"
Sep 4 17:33:36.807941 containerd[1436]: time="2024-09-04T17:33:36.807171009Z" level=info msg="TearDown network for sandbox \"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\" successfully"
Sep 4 17:33:36.807941 containerd[1436]: time="2024-09-04T17:33:36.807195773Z" level=info msg="StopPodSandbox for \"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\" returns successfully"
Sep 4 17:33:36.808292 kubelet[2521]: E0904 17:33:36.808051 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 4 17:33:36.808743 containerd[1436]: time="2024-09-04T17:33:36.808423497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nkntq,Uid:a2630a53-8a70-4d90-bf19-8204dcfa5313,Namespace:kube-system,Attempt:1,}"
Sep 4 17:33:36.879313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3187086700.mount: Deactivated successfully.
Sep 4 17:33:36.879415 systemd[1]: run-netns-cni\x2d13e4bf57\x2de9fc\x2dac3a\x2d23cc\x2d4523018c96c5.mount: Deactivated successfully. Sep 4 17:33:36.879472 systemd[1]: run-netns-cni\x2d746c560a\x2d3b70\x2d793a\x2d9493\x2d25893c91e8f1.mount: Deactivated successfully. Sep 4 17:33:36.884753 kubelet[2521]: E0904 17:33:36.884331 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:36.896833 kubelet[2521]: I0904 17:33:36.896770 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rvpjc" podStartSLOduration=28.89675261 podStartE2EDuration="28.89675261s" podCreationTimestamp="2024-09-04 17:33:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:33:36.895045046 +0000 UTC m=+45.300166150" watchObservedRunningTime="2024-09-04 17:33:36.89675261 +0000 UTC m=+45.301873714" Sep 4 17:33:36.966210 systemd-networkd[1375]: calic1900dbc9f0: Link UP Sep 4 17:33:36.966714 systemd-networkd[1375]: calic1900dbc9f0: Gained carrier Sep 4 17:33:36.983498 containerd[1436]: 2024-09-04 17:33:36.830 [INFO][4165] utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 17:33:36.983498 containerd[1436]: 2024-09-04 17:33:36.846 [INFO][4165] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0 calico-kube-controllers-545cd5b56d- calico-system 8d8af27a-0105-4c20-9900-7445cf2f532f 771 0 2024-09-04 17:33:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:545cd5b56d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost 
calico-kube-controllers-545cd5b56d-dcn5l eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic1900dbc9f0 [] []}} ContainerID="3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9" Namespace="calico-system" Pod="calico-kube-controllers-545cd5b56d-dcn5l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-" Sep 4 17:33:36.983498 containerd[1436]: 2024-09-04 17:33:36.846 [INFO][4165] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9" Namespace="calico-system" Pod="calico-kube-controllers-545cd5b56d-dcn5l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0" Sep 4 17:33:36.983498 containerd[1436]: 2024-09-04 17:33:36.890 [INFO][4191] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9" HandleID="k8s-pod-network.3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9" Workload="localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0" Sep 4 17:33:36.983498 containerd[1436]: 2024-09-04 17:33:36.905 [INFO][4191] ipam_plugin.go 270: Auto assigning IP ContainerID="3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9" HandleID="k8s-pod-network.3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9" Workload="localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003637c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-545cd5b56d-dcn5l", "timestamp":"2024-09-04 17:33:36.89037607 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 
17:33:36.983498 containerd[1436]: 2024-09-04 17:33:36.905 [INFO][4191] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:36.983498 containerd[1436]: 2024-09-04 17:33:36.905 [INFO][4191] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:36.983498 containerd[1436]: 2024-09-04 17:33:36.905 [INFO][4191] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:33:36.983498 containerd[1436]: 2024-09-04 17:33:36.912 [INFO][4191] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9" host="localhost" Sep 4 17:33:36.983498 containerd[1436]: 2024-09-04 17:33:36.926 [INFO][4191] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:33:36.983498 containerd[1436]: 2024-09-04 17:33:36.934 [INFO][4191] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:33:36.983498 containerd[1436]: 2024-09-04 17:33:36.940 [INFO][4191] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:33:36.983498 containerd[1436]: 2024-09-04 17:33:36.943 [INFO][4191] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:33:36.983498 containerd[1436]: 2024-09-04 17:33:36.943 [INFO][4191] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9" host="localhost" Sep 4 17:33:36.983498 containerd[1436]: 2024-09-04 17:33:36.946 [INFO][4191] ipam.go 1685: Creating new handle: k8s-pod-network.3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9 Sep 4 17:33:36.983498 containerd[1436]: 2024-09-04 17:33:36.953 [INFO][4191] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9" host="localhost" Sep 4 
17:33:36.983498 containerd[1436]: 2024-09-04 17:33:36.958 [INFO][4191] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9" host="localhost" Sep 4 17:33:36.983498 containerd[1436]: 2024-09-04 17:33:36.959 [INFO][4191] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9" host="localhost" Sep 4 17:33:36.983498 containerd[1436]: 2024-09-04 17:33:36.959 [INFO][4191] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:36.983498 containerd[1436]: 2024-09-04 17:33:36.959 [INFO][4191] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9" HandleID="k8s-pod-network.3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9" Workload="localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0" Sep 4 17:33:36.984198 containerd[1436]: 2024-09-04 17:33:36.963 [INFO][4165] k8s.go 386: Populated endpoint ContainerID="3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9" Namespace="calico-system" Pod="calico-kube-controllers-545cd5b56d-dcn5l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0", GenerateName:"calico-kube-controllers-545cd5b56d-", Namespace:"calico-system", SelfLink:"", UID:"8d8af27a-0105-4c20-9900-7445cf2f532f", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"545cd5b56d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-545cd5b56d-dcn5l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1900dbc9f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:36.984198 containerd[1436]: 2024-09-04 17:33:36.963 [INFO][4165] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9" Namespace="calico-system" Pod="calico-kube-controllers-545cd5b56d-dcn5l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0" Sep 4 17:33:36.984198 containerd[1436]: 2024-09-04 17:33:36.963 [INFO][4165] dataplane_linux.go 68: Setting the host side veth name to calic1900dbc9f0 ContainerID="3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9" Namespace="calico-system" Pod="calico-kube-controllers-545cd5b56d-dcn5l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0" Sep 4 17:33:36.984198 containerd[1436]: 2024-09-04 17:33:36.966 [INFO][4165] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9" Namespace="calico-system" Pod="calico-kube-controllers-545cd5b56d-dcn5l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0" Sep 
4 17:33:36.984198 containerd[1436]: 2024-09-04 17:33:36.967 [INFO][4165] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9" Namespace="calico-system" Pod="calico-kube-controllers-545cd5b56d-dcn5l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0", GenerateName:"calico-kube-controllers-545cd5b56d-", Namespace:"calico-system", SelfLink:"", UID:"8d8af27a-0105-4c20-9900-7445cf2f532f", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"545cd5b56d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9", Pod:"calico-kube-controllers-545cd5b56d-dcn5l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1900dbc9f0", MAC:"1a:aa:fe:59:e2:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:36.984198 containerd[1436]: 2024-09-04 
17:33:36.977 [INFO][4165] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9" Namespace="calico-system" Pod="calico-kube-controllers-545cd5b56d-dcn5l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0" Sep 4 17:33:37.005718 systemd-networkd[1375]: calid079d1c71b2: Link UP Sep 4 17:33:37.006459 systemd-networkd[1375]: calid079d1c71b2: Gained carrier Sep 4 17:33:37.029964 containerd[1436]: 2024-09-04 17:33:36.843 [INFO][4176] utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 17:33:37.029964 containerd[1436]: 2024-09-04 17:33:36.858 [INFO][4176] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0 coredns-7db6d8ff4d- kube-system a2630a53-8a70-4d90-bf19-8204dcfa5313 772 0 2024-09-04 17:33:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-nkntq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid079d1c71b2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nkntq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nkntq-" Sep 4 17:33:37.029964 containerd[1436]: 2024-09-04 17:33:36.858 [INFO][4176] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nkntq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0" Sep 4 17:33:37.029964 containerd[1436]: 2024-09-04 17:33:36.903 [INFO][4196] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8" HandleID="k8s-pod-network.256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8" Workload="localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0" Sep 4 17:33:37.029964 containerd[1436]: 2024-09-04 17:33:36.930 [INFO][4196] ipam_plugin.go 270: Auto assigning IP ContainerID="256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8" HandleID="k8s-pod-network.256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8" Workload="localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dd60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-nkntq", "timestamp":"2024-09-04 17:33:36.903652076 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:33:37.029964 containerd[1436]: 2024-09-04 17:33:36.931 [INFO][4196] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:37.029964 containerd[1436]: 2024-09-04 17:33:36.959 [INFO][4196] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:33:37.029964 containerd[1436]: 2024-09-04 17:33:36.959 [INFO][4196] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:33:37.029964 containerd[1436]: 2024-09-04 17:33:36.961 [INFO][4196] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8" host="localhost" Sep 4 17:33:37.029964 containerd[1436]: 2024-09-04 17:33:36.969 [INFO][4196] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:33:37.029964 containerd[1436]: 2024-09-04 17:33:36.974 [INFO][4196] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:33:37.029964 containerd[1436]: 2024-09-04 17:33:36.979 [INFO][4196] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:33:37.029964 containerd[1436]: 2024-09-04 17:33:36.984 [INFO][4196] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:33:37.029964 containerd[1436]: 2024-09-04 17:33:36.984 [INFO][4196] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8" host="localhost" Sep 4 17:33:37.029964 containerd[1436]: 2024-09-04 17:33:36.986 [INFO][4196] ipam.go 1685: Creating new handle: k8s-pod-network.256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8 Sep 4 17:33:37.029964 containerd[1436]: 2024-09-04 17:33:36.991 [INFO][4196] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8" host="localhost" Sep 4 17:33:37.029964 containerd[1436]: 2024-09-04 17:33:36.998 [INFO][4196] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8" host="localhost" Sep 4 
17:33:37.029964 containerd[1436]: 2024-09-04 17:33:36.998 [INFO][4196] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8" host="localhost" Sep 4 17:33:37.029964 containerd[1436]: 2024-09-04 17:33:36.998 [INFO][4196] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:37.029964 containerd[1436]: 2024-09-04 17:33:36.998 [INFO][4196] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8" HandleID="k8s-pod-network.256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8" Workload="localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0" Sep 4 17:33:37.030696 containerd[1436]: 2024-09-04 17:33:37.000 [INFO][4176] k8s.go 386: Populated endpoint ContainerID="256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nkntq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a2630a53-8a70-4d90-bf19-8204dcfa5313", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-7db6d8ff4d-nkntq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid079d1c71b2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:37.030696 containerd[1436]: 2024-09-04 17:33:37.000 [INFO][4176] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nkntq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0" Sep 4 17:33:37.030696 containerd[1436]: 2024-09-04 17:33:37.000 [INFO][4176] dataplane_linux.go 68: Setting the host side veth name to calid079d1c71b2 ContainerID="256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nkntq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0" Sep 4 17:33:37.030696 containerd[1436]: 2024-09-04 17:33:37.006 [INFO][4176] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nkntq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0" Sep 4 17:33:37.030696 containerd[1436]: 2024-09-04 17:33:37.007 [INFO][4176] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nkntq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a2630a53-8a70-4d90-bf19-8204dcfa5313", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8", Pod:"coredns-7db6d8ff4d-nkntq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid079d1c71b2", MAC:"0e:de:ba:cf:73:e4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:37.030696 containerd[1436]: 2024-09-04 17:33:37.024 [INFO][4176] k8s.go 500: Wrote updated endpoint to datastore ContainerID="256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nkntq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0" Sep 4 17:33:37.034227 containerd[1436]: time="2024-09-04T17:33:37.034006069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:37.034227 containerd[1436]: time="2024-09-04T17:33:37.034067199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:37.034227 containerd[1436]: time="2024-09-04T17:33:37.034097603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:37.034227 containerd[1436]: time="2024-09-04T17:33:37.034114126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:37.069846 systemd[1]: Started cri-containerd-3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9.scope - libcontainer container 3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9. Sep 4 17:33:37.083466 containerd[1436]: time="2024-09-04T17:33:37.083326992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:37.083466 containerd[1436]: time="2024-09-04T17:33:37.083389523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:37.083466 containerd[1436]: time="2024-09-04T17:33:37.083404925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:37.083466 containerd[1436]: time="2024-09-04T17:33:37.083415527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:37.089202 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:33:37.109799 systemd[1]: Started cri-containerd-256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8.scope - libcontainer container 256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8. Sep 4 17:33:37.120039 containerd[1436]: time="2024-09-04T17:33:37.119944885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-545cd5b56d-dcn5l,Uid:8d8af27a-0105-4c20-9900-7445cf2f532f,Namespace:calico-system,Attempt:1,} returns sandbox id \"3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9\"" Sep 4 17:33:37.122442 containerd[1436]: time="2024-09-04T17:33:37.122369000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:33:37.130375 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:33:37.138704 systemd-networkd[1375]: cali230228c82a3: Gained IPv6LL Sep 4 17:33:37.147481 containerd[1436]: time="2024-09-04T17:33:37.147430007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nkntq,Uid:a2630a53-8a70-4d90-bf19-8204dcfa5313,Namespace:kube-system,Attempt:1,} returns sandbox id \"256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8\"" Sep 4 17:33:37.148428 kubelet[2521]: E0904 17:33:37.148406 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:37.151839 containerd[1436]: time="2024-09-04T17:33:37.151797239Z" level=info 
msg="CreateContainer within sandbox \"256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:33:37.162418 containerd[1436]: time="2024-09-04T17:33:37.162281509Z" level=info msg="CreateContainer within sandbox \"256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d314e9b47e4f6b117e30dca7a0896dbc5f888be3ed34bf37ceed05d28a0fa009\"" Sep 4 17:33:37.163889 containerd[1436]: time="2024-09-04T17:33:37.163041273Z" level=info msg="StartContainer for \"d314e9b47e4f6b117e30dca7a0896dbc5f888be3ed34bf37ceed05d28a0fa009\"" Sep 4 17:33:37.188113 systemd[1]: Started cri-containerd-d314e9b47e4f6b117e30dca7a0896dbc5f888be3ed34bf37ceed05d28a0fa009.scope - libcontainer container d314e9b47e4f6b117e30dca7a0896dbc5f888be3ed34bf37ceed05d28a0fa009. Sep 4 17:33:37.216943 containerd[1436]: time="2024-09-04T17:33:37.216824805Z" level=info msg="StartContainer for \"d314e9b47e4f6b117e30dca7a0896dbc5f888be3ed34bf37ceed05d28a0fa009\" returns successfully" Sep 4 17:33:37.890675 kubelet[2521]: E0904 17:33:37.888258 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:37.890675 kubelet[2521]: E0904 17:33:37.888352 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:37.914612 kubelet[2521]: I0904 17:33:37.914392 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nkntq" podStartSLOduration=29.914373050000002 podStartE2EDuration="29.91437305s" podCreationTimestamp="2024-09-04 17:33:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-09-04 17:33:37.898242699 +0000 UTC m=+46.303363803" watchObservedRunningTime="2024-09-04 17:33:37.91437305 +0000 UTC m=+46.319494154" Sep 4 17:33:38.369622 containerd[1436]: time="2024-09-04T17:33:38.369181544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:38.369622 containerd[1436]: time="2024-09-04T17:33:38.369591290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Sep 4 17:33:38.370686 containerd[1436]: time="2024-09-04T17:33:38.370654780Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:38.372838 containerd[1436]: time="2024-09-04T17:33:38.372776520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:38.373696 containerd[1436]: time="2024-09-04T17:33:38.373659422Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 1.251250935s" Sep 4 17:33:38.373768 containerd[1436]: time="2024-09-04T17:33:38.373696587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Sep 4 17:33:38.383620 containerd[1436]: time="2024-09-04T17:33:38.383474114Z" level=info msg="CreateContainer within sandbox 
\"3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:33:38.397215 containerd[1436]: time="2024-09-04T17:33:38.397158067Z" level=info msg="CreateContainer within sandbox \"3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"eb2745df546045db2c48ac07bf6bed426a564bfb1c8b0315aa19185a8ac6efa3\"" Sep 4 17:33:38.399775 containerd[1436]: time="2024-09-04T17:33:38.397833816Z" level=info msg="StartContainer for \"eb2745df546045db2c48ac07bf6bed426a564bfb1c8b0315aa19185a8ac6efa3\"" Sep 4 17:33:38.434840 systemd[1]: Started cri-containerd-eb2745df546045db2c48ac07bf6bed426a564bfb1c8b0315aa19185a8ac6efa3.scope - libcontainer container eb2745df546045db2c48ac07bf6bed426a564bfb1c8b0315aa19185a8ac6efa3. Sep 4 17:33:38.526822 containerd[1436]: time="2024-09-04T17:33:38.526735473Z" level=info msg="StartContainer for \"eb2745df546045db2c48ac07bf6bed426a564bfb1c8b0315aa19185a8ac6efa3\" returns successfully" Sep 4 17:33:38.707420 containerd[1436]: time="2024-09-04T17:33:38.707378423Z" level=info msg="StopPodSandbox for \"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\"" Sep 4 17:33:38.802722 systemd-networkd[1375]: calic1900dbc9f0: Gained IPv6LL Sep 4 17:33:38.807108 containerd[1436]: 2024-09-04 17:33:38.762 [INFO][4468] k8s.go 608: Cleaning up netns ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" Sep 4 17:33:38.807108 containerd[1436]: 2024-09-04 17:33:38.762 [INFO][4468] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" iface="eth0" netns="/var/run/netns/cni-7dae0dc4-5149-1e64-7f7d-59d8aea0bbed" Sep 4 17:33:38.807108 containerd[1436]: 2024-09-04 17:33:38.762 [INFO][4468] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" iface="eth0" netns="/var/run/netns/cni-7dae0dc4-5149-1e64-7f7d-59d8aea0bbed" Sep 4 17:33:38.807108 containerd[1436]: 2024-09-04 17:33:38.763 [INFO][4468] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" iface="eth0" netns="/var/run/netns/cni-7dae0dc4-5149-1e64-7f7d-59d8aea0bbed" Sep 4 17:33:38.807108 containerd[1436]: 2024-09-04 17:33:38.763 [INFO][4468] k8s.go 615: Releasing IP address(es) ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" Sep 4 17:33:38.807108 containerd[1436]: 2024-09-04 17:33:38.763 [INFO][4468] utils.go 188: Calico CNI releasing IP address ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" Sep 4 17:33:38.807108 containerd[1436]: 2024-09-04 17:33:38.791 [INFO][4476] ipam_plugin.go 417: Releasing address using handleID ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" HandleID="k8s-pod-network.033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" Workload="localhost-k8s-csi--node--driver--dwclf-eth0" Sep 4 17:33:38.807108 containerd[1436]: 2024-09-04 17:33:38.791 [INFO][4476] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:38.807108 containerd[1436]: 2024-09-04 17:33:38.791 [INFO][4476] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:38.807108 containerd[1436]: 2024-09-04 17:33:38.799 [WARNING][4476] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" HandleID="k8s-pod-network.033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" Workload="localhost-k8s-csi--node--driver--dwclf-eth0" Sep 4 17:33:38.807108 containerd[1436]: 2024-09-04 17:33:38.799 [INFO][4476] ipam_plugin.go 445: Releasing address using workloadID ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" HandleID="k8s-pod-network.033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" Workload="localhost-k8s-csi--node--driver--dwclf-eth0" Sep 4 17:33:38.807108 containerd[1436]: 2024-09-04 17:33:38.800 [INFO][4476] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:38.807108 containerd[1436]: 2024-09-04 17:33:38.804 [INFO][4468] k8s.go 621: Teardown processing complete. ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" Sep 4 17:33:38.808216 containerd[1436]: time="2024-09-04T17:33:38.807252628Z" level=info msg="TearDown network for sandbox \"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\" successfully" Sep 4 17:33:38.808216 containerd[1436]: time="2024-09-04T17:33:38.807333201Z" level=info msg="StopPodSandbox for \"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\" returns successfully" Sep 4 17:33:38.808216 containerd[1436]: time="2024-09-04T17:33:38.807975144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dwclf,Uid:c22bd57a-c877-45b8-9da9-64ec19c9aeb3,Namespace:calico-system,Attempt:1,}" Sep 4 17:33:38.866713 systemd-networkd[1375]: calid079d1c71b2: Gained IPv6LL Sep 4 17:33:38.880756 systemd[1]: run-netns-cni\x2d7dae0dc4\x2d5149\x2d1e64\x2d7f7d\x2d59d8aea0bbed.mount: Deactivated successfully. 
Sep 4 17:33:38.894775 kubelet[2521]: E0904 17:33:38.894738 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:38.970722 kubelet[2521]: I0904 17:33:38.968787 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-545cd5b56d-dcn5l" podStartSLOduration=22.716090226 podStartE2EDuration="23.968767472s" podCreationTimestamp="2024-09-04 17:33:15 +0000 UTC" firstStartedPulling="2024-09-04 17:33:37.121703371 +0000 UTC m=+45.526824475" lastFinishedPulling="2024-09-04 17:33:38.374380617 +0000 UTC m=+46.779501721" observedRunningTime="2024-09-04 17:33:38.907862912 +0000 UTC m=+47.312984016" watchObservedRunningTime="2024-09-04 17:33:38.968767472 +0000 UTC m=+47.373888576" Sep 4 17:33:38.971234 systemd-networkd[1375]: calie7a526c7dd2: Link UP Sep 4 17:33:38.971542 systemd-networkd[1375]: calie7a526c7dd2: Gained carrier Sep 4 17:33:38.989699 containerd[1436]: 2024-09-04 17:33:38.841 [INFO][4484] utils.go 100: File /var/lib/calico/mtu does not exist Sep 4 17:33:38.989699 containerd[1436]: 2024-09-04 17:33:38.854 [INFO][4484] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dwclf-eth0 csi-node-driver- calico-system c22bd57a-c877-45b8-9da9-64ec19c9aeb3 829 0 2024-09-04 17:33:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65cb9bb8f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-dwclf eth0 default [] [] [kns.calico-system ksa.calico-system.default] calie7a526c7dd2 [] []}} ContainerID="cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09" Namespace="calico-system" 
Pod="csi-node-driver-dwclf" WorkloadEndpoint="localhost-k8s-csi--node--driver--dwclf-" Sep 4 17:33:38.989699 containerd[1436]: 2024-09-04 17:33:38.854 [INFO][4484] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09" Namespace="calico-system" Pod="csi-node-driver-dwclf" WorkloadEndpoint="localhost-k8s-csi--node--driver--dwclf-eth0" Sep 4 17:33:38.989699 containerd[1436]: 2024-09-04 17:33:38.905 [INFO][4497] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09" HandleID="k8s-pod-network.cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09" Workload="localhost-k8s-csi--node--driver--dwclf-eth0" Sep 4 17:33:38.989699 containerd[1436]: 2024-09-04 17:33:38.927 [INFO][4497] ipam_plugin.go 270: Auto assigning IP ContainerID="cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09" HandleID="k8s-pod-network.cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09" Workload="localhost-k8s-csi--node--driver--dwclf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f2e60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dwclf", "timestamp":"2024-09-04 17:33:38.904087667 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:33:38.989699 containerd[1436]: 2024-09-04 17:33:38.927 [INFO][4497] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:38.989699 containerd[1436]: 2024-09-04 17:33:38.927 [INFO][4497] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:33:38.989699 containerd[1436]: 2024-09-04 17:33:38.927 [INFO][4497] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:33:38.989699 containerd[1436]: 2024-09-04 17:33:38.929 [INFO][4497] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09" host="localhost" Sep 4 17:33:38.989699 containerd[1436]: 2024-09-04 17:33:38.934 [INFO][4497] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:33:38.989699 containerd[1436]: 2024-09-04 17:33:38.941 [INFO][4497] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:33:38.989699 containerd[1436]: 2024-09-04 17:33:38.944 [INFO][4497] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:33:38.989699 containerd[1436]: 2024-09-04 17:33:38.946 [INFO][4497] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:33:38.989699 containerd[1436]: 2024-09-04 17:33:38.946 [INFO][4497] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09" host="localhost" Sep 4 17:33:38.989699 containerd[1436]: 2024-09-04 17:33:38.948 [INFO][4497] ipam.go 1685: Creating new handle: k8s-pod-network.cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09 Sep 4 17:33:38.989699 containerd[1436]: 2024-09-04 17:33:38.955 [INFO][4497] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09" host="localhost" Sep 4 17:33:38.989699 containerd[1436]: 2024-09-04 17:33:38.960 [INFO][4497] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09" host="localhost" Sep 4 
17:33:38.989699 containerd[1436]: 2024-09-04 17:33:38.960 [INFO][4497] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09" host="localhost" Sep 4 17:33:38.989699 containerd[1436]: 2024-09-04 17:33:38.961 [INFO][4497] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:38.989699 containerd[1436]: 2024-09-04 17:33:38.961 [INFO][4497] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09" HandleID="k8s-pod-network.cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09" Workload="localhost-k8s-csi--node--driver--dwclf-eth0" Sep 4 17:33:38.990279 containerd[1436]: 2024-09-04 17:33:38.963 [INFO][4484] k8s.go 386: Populated endpoint ContainerID="cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09" Namespace="calico-system" Pod="csi-node-driver-dwclf" WorkloadEndpoint="localhost-k8s-csi--node--driver--dwclf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dwclf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c22bd57a-c877-45b8-9da9-64ec19c9aeb3", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dwclf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie7a526c7dd2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:38.990279 containerd[1436]: 2024-09-04 17:33:38.963 [INFO][4484] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09" Namespace="calico-system" Pod="csi-node-driver-dwclf" WorkloadEndpoint="localhost-k8s-csi--node--driver--dwclf-eth0" Sep 4 17:33:38.990279 containerd[1436]: 2024-09-04 17:33:38.963 [INFO][4484] dataplane_linux.go 68: Setting the host side veth name to calie7a526c7dd2 ContainerID="cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09" Namespace="calico-system" Pod="csi-node-driver-dwclf" WorkloadEndpoint="localhost-k8s-csi--node--driver--dwclf-eth0" Sep 4 17:33:38.990279 containerd[1436]: 2024-09-04 17:33:38.971 [INFO][4484] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09" Namespace="calico-system" Pod="csi-node-driver-dwclf" WorkloadEndpoint="localhost-k8s-csi--node--driver--dwclf-eth0" Sep 4 17:33:38.990279 containerd[1436]: 2024-09-04 17:33:38.972 [INFO][4484] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09" Namespace="calico-system" Pod="csi-node-driver-dwclf" WorkloadEndpoint="localhost-k8s-csi--node--driver--dwclf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dwclf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c22bd57a-c877-45b8-9da9-64ec19c9aeb3", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09", Pod:"csi-node-driver-dwclf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie7a526c7dd2", MAC:"2a:1e:77:c7:82:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:38.990279 containerd[1436]: 2024-09-04 17:33:38.984 [INFO][4484] k8s.go 500: Wrote updated endpoint to datastore ContainerID="cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09" Namespace="calico-system" Pod="csi-node-driver-dwclf" WorkloadEndpoint="localhost-k8s-csi--node--driver--dwclf-eth0" Sep 4 17:33:39.022331 containerd[1436]: time="2024-09-04T17:33:39.010955969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:33:39.022331 containerd[1436]: time="2024-09-04T17:33:39.022245308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:39.022959 containerd[1436]: time="2024-09-04T17:33:39.022738946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:33:39.022959 containerd[1436]: time="2024-09-04T17:33:39.022765750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:33:39.063780 systemd[1]: Started cri-containerd-cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09.scope - libcontainer container cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09. Sep 4 17:33:39.075860 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:33:39.091270 containerd[1436]: time="2024-09-04T17:33:39.091226699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dwclf,Uid:c22bd57a-c877-45b8-9da9-64ec19c9aeb3,Namespace:calico-system,Attempt:1,} returns sandbox id \"cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09\"" Sep 4 17:33:39.094530 containerd[1436]: time="2024-09-04T17:33:39.093429967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:33:39.682350 systemd[1]: Started sshd@9-10.0.0.119:22-10.0.0.1:45214.service - OpenSSH per-connection server daemon (10.0.0.1:45214). Sep 4 17:33:39.737013 sshd[4607]: Accepted publickey for core from 10.0.0.1 port 45214 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:33:39.738785 sshd[4607]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:39.743180 systemd-logind[1418]: New session 10 of user core. 
Sep 4 17:33:39.757820 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:33:39.897077 kubelet[2521]: E0904 17:33:39.897047 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:40.039989 sshd[4607]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:40.048306 containerd[1436]: time="2024-09-04T17:33:40.048226047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:40.048945 containerd[1436]: time="2024-09-04T17:33:40.048913954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Sep 4 17:33:40.049539 systemd[1]: sshd@9-10.0.0.119:22-10.0.0.1:45214.service: Deactivated successfully. Sep 4 17:33:40.050130 containerd[1436]: time="2024-09-04T17:33:40.050103178Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:40.051903 systemd[1]: session-10.scope: Deactivated successfully. 
Sep 4 17:33:40.053050 containerd[1436]: time="2024-09-04T17:33:40.052972863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:40.054458 containerd[1436]: time="2024-09-04T17:33:40.053694815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 959.033295ms" Sep 4 17:33:40.054458 containerd[1436]: time="2024-09-04T17:33:40.053731101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Sep 4 17:33:40.054669 systemd-logind[1418]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:33:40.057444 containerd[1436]: time="2024-09-04T17:33:40.057384908Z" level=info msg="CreateContainer within sandbox \"cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:33:40.062180 systemd[1]: Started sshd@10-10.0.0.119:22-10.0.0.1:45216.service - OpenSSH per-connection server daemon (10.0.0.1:45216). Sep 4 17:33:40.063526 systemd-logind[1418]: Removed session 10. 
Sep 4 17:33:40.079183 containerd[1436]: time="2024-09-04T17:33:40.079053909Z" level=info msg="CreateContainer within sandbox \"cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"05422b7bfe837e76952542e04a2d8f2afa8e0550aea9ca26987cdbb14b988268\"" Sep 4 17:33:40.079653 containerd[1436]: time="2024-09-04T17:33:40.079628358Z" level=info msg="StartContainer for \"05422b7bfe837e76952542e04a2d8f2afa8e0550aea9ca26987cdbb14b988268\"" Sep 4 17:33:40.103981 sshd[4626]: Accepted publickey for core from 10.0.0.1 port 45216 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:33:40.105598 sshd[4626]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:40.112390 systemd-logind[1418]: New session 11 of user core. Sep 4 17:33:40.123787 systemd[1]: Started cri-containerd-05422b7bfe837e76952542e04a2d8f2afa8e0550aea9ca26987cdbb14b988268.scope - libcontainer container 05422b7bfe837e76952542e04a2d8f2afa8e0550aea9ca26987cdbb14b988268. Sep 4 17:33:40.124961 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:33:40.156801 containerd[1436]: time="2024-09-04T17:33:40.156757562Z" level=info msg="StartContainer for \"05422b7bfe837e76952542e04a2d8f2afa8e0550aea9ca26987cdbb14b988268\" returns successfully" Sep 4 17:33:40.158667 containerd[1436]: time="2024-09-04T17:33:40.158418979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:33:40.352789 sshd[4626]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:40.364616 systemd[1]: sshd@10-10.0.0.119:22-10.0.0.1:45216.service: Deactivated successfully. Sep 4 17:33:40.367970 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:33:40.370111 systemd-logind[1418]: Session 11 logged out. Waiting for processes to exit. 
Sep 4 17:33:40.378918 systemd[1]: Started sshd@11-10.0.0.119:22-10.0.0.1:45232.service - OpenSSH per-connection server daemon (10.0.0.1:45232). Sep 4 17:33:40.379885 systemd-logind[1418]: Removed session 11. Sep 4 17:33:40.418952 sshd[4696]: Accepted publickey for core from 10.0.0.1 port 45232 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:33:40.420282 sshd[4696]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:40.425611 systemd-logind[1418]: New session 12 of user core. Sep 4 17:33:40.432755 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:33:40.605402 sshd[4696]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:40.609099 systemd[1]: sshd@11-10.0.0.119:22-10.0.0.1:45232.service: Deactivated successfully. Sep 4 17:33:40.610836 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:33:40.612542 systemd-logind[1418]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:33:40.613335 systemd-logind[1418]: Removed session 12. 
Sep 4 17:33:40.914818 systemd-networkd[1375]: calie7a526c7dd2: Gained IPv6LL Sep 4 17:33:41.372463 containerd[1436]: time="2024-09-04T17:33:41.371804280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:41.372463 containerd[1436]: time="2024-09-04T17:33:41.372248628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Sep 4 17:33:41.373407 containerd[1436]: time="2024-09-04T17:33:41.373379321Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:41.376200 containerd[1436]: time="2024-09-04T17:33:41.376125180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:33:41.377269 containerd[1436]: time="2024-09-04T17:33:41.376792322Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 1.218334377s" Sep 4 17:33:41.377269 containerd[1436]: time="2024-09-04T17:33:41.376830448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Sep 4 17:33:41.379123 containerd[1436]: time="2024-09-04T17:33:41.379093874Z" level=info msg="CreateContainer within sandbox 
\"cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:33:41.394014 containerd[1436]: time="2024-09-04T17:33:41.393970667Z" level=info msg="CreateContainer within sandbox \"cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"70c560f2b4d16c3bd0c008e9a83f3b14ca6780c89bea01b6b3c6118741777d32\"" Sep 4 17:33:41.394493 containerd[1436]: time="2024-09-04T17:33:41.394474064Z" level=info msg="StartContainer for \"70c560f2b4d16c3bd0c008e9a83f3b14ca6780c89bea01b6b3c6118741777d32\"" Sep 4 17:33:41.429729 systemd[1]: Started cri-containerd-70c560f2b4d16c3bd0c008e9a83f3b14ca6780c89bea01b6b3c6118741777d32.scope - libcontainer container 70c560f2b4d16c3bd0c008e9a83f3b14ca6780c89bea01b6b3c6118741777d32. Sep 4 17:33:41.453909 containerd[1436]: time="2024-09-04T17:33:41.453869218Z" level=info msg="StartContainer for \"70c560f2b4d16c3bd0c008e9a83f3b14ca6780c89bea01b6b3c6118741777d32\" returns successfully" Sep 4 17:33:41.798834 kubelet[2521]: I0904 17:33:41.798367 2521 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:33:41.801249 kubelet[2521]: I0904 17:33:41.801209 2521 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:33:41.915433 kubelet[2521]: I0904 17:33:41.915363 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dwclf" podStartSLOduration=24.630799062 podStartE2EDuration="26.91534676s" podCreationTimestamp="2024-09-04 17:33:15 +0000 UTC" firstStartedPulling="2024-09-04 17:33:39.093145362 +0000 UTC m=+47.498266426" lastFinishedPulling="2024-09-04 17:33:41.37769302 +0000 UTC m=+49.782814124" 
observedRunningTime="2024-09-04 17:33:41.914149697 +0000 UTC m=+50.319270761" watchObservedRunningTime="2024-09-04 17:33:41.91534676 +0000 UTC m=+50.320467864" Sep 4 17:33:43.464937 kubelet[2521]: I0904 17:33:43.464894 2521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:33:43.466319 kubelet[2521]: E0904 17:33:43.466099 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:43.881610 kernel: bpftool[4851]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:33:43.908787 kubelet[2521]: E0904 17:33:43.908654 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:44.051201 systemd-networkd[1375]: vxlan.calico: Link UP Sep 4 17:33:44.051214 systemd-networkd[1375]: vxlan.calico: Gained carrier Sep 4 17:33:45.617882 systemd[1]: Started sshd@12-10.0.0.119:22-10.0.0.1:52164.service - OpenSSH per-connection server daemon (10.0.0.1:52164). Sep 4 17:33:45.661970 sshd[4963]: Accepted publickey for core from 10.0.0.1 port 52164 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:33:45.663676 sshd[4963]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:45.667645 systemd-logind[1418]: New session 13 of user core. Sep 4 17:33:45.675727 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:33:45.922838 sshd[4963]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:45.926057 systemd[1]: sshd@12-10.0.0.119:22-10.0.0.1:52164.service: Deactivated successfully. Sep 4 17:33:45.928010 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:33:45.929738 systemd-logind[1418]: Session 13 logged out. Waiting for processes to exit. 
Sep 4 17:33:45.930500 systemd-logind[1418]: Removed session 13. Sep 4 17:33:45.971034 systemd-networkd[1375]: vxlan.calico: Gained IPv6LL Sep 4 17:33:50.940049 systemd[1]: Started sshd@13-10.0.0.119:22-10.0.0.1:52192.service - OpenSSH per-connection server daemon (10.0.0.1:52192). Sep 4 17:33:50.995986 sshd[4987]: Accepted publickey for core from 10.0.0.1 port 52192 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:33:50.999265 sshd[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:51.009876 systemd-logind[1418]: New session 14 of user core. Sep 4 17:33:51.028828 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:33:51.236343 sshd[4987]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:51.241726 systemd[1]: sshd@13-10.0.0.119:22-10.0.0.1:52192.service: Deactivated successfully. Sep 4 17:33:51.244499 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:33:51.245840 systemd-logind[1418]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:33:51.247336 systemd-logind[1418]: Removed session 14. Sep 4 17:33:51.436685 kubelet[2521]: E0904 17:33:51.436649 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:33:51.699830 containerd[1436]: time="2024-09-04T17:33:51.699737385Z" level=info msg="StopPodSandbox for \"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\"" Sep 4 17:33:51.809910 containerd[1436]: 2024-09-04 17:33:51.762 [WARNING][5039] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ea3fe696-b784-45a8-b851-82c8d5cb101a", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab", Pod:"coredns-7db6d8ff4d-rvpjc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali230228c82a3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:51.809910 containerd[1436]: 2024-09-04 17:33:51.762 [INFO][5039] k8s.go 608: Cleaning up netns 
ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" Sep 4 17:33:51.809910 containerd[1436]: 2024-09-04 17:33:51.762 [INFO][5039] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" iface="eth0" netns="" Sep 4 17:33:51.809910 containerd[1436]: 2024-09-04 17:33:51.762 [INFO][5039] k8s.go 615: Releasing IP address(es) ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" Sep 4 17:33:51.809910 containerd[1436]: 2024-09-04 17:33:51.762 [INFO][5039] utils.go 188: Calico CNI releasing IP address ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" Sep 4 17:33:51.809910 containerd[1436]: 2024-09-04 17:33:51.791 [INFO][5049] ipam_plugin.go 417: Releasing address using handleID ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" HandleID="k8s-pod-network.486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" Workload="localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0" Sep 4 17:33:51.809910 containerd[1436]: 2024-09-04 17:33:51.791 [INFO][5049] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:51.809910 containerd[1436]: 2024-09-04 17:33:51.791 [INFO][5049] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:51.809910 containerd[1436]: 2024-09-04 17:33:51.804 [WARNING][5049] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" HandleID="k8s-pod-network.486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" Workload="localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0" Sep 4 17:33:51.809910 containerd[1436]: 2024-09-04 17:33:51.804 [INFO][5049] ipam_plugin.go 445: Releasing address using workloadID ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" HandleID="k8s-pod-network.486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" Workload="localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0" Sep 4 17:33:51.809910 containerd[1436]: 2024-09-04 17:33:51.806 [INFO][5049] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:51.809910 containerd[1436]: 2024-09-04 17:33:51.807 [INFO][5039] k8s.go 621: Teardown processing complete. ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" Sep 4 17:33:51.810737 containerd[1436]: time="2024-09-04T17:33:51.810442974Z" level=info msg="TearDown network for sandbox \"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\" successfully" Sep 4 17:33:51.810737 containerd[1436]: time="2024-09-04T17:33:51.810474938Z" level=info msg="StopPodSandbox for \"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\" returns successfully" Sep 4 17:33:51.811210 containerd[1436]: time="2024-09-04T17:33:51.811183715Z" level=info msg="RemovePodSandbox for \"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\"" Sep 4 17:33:51.816659 containerd[1436]: time="2024-09-04T17:33:51.816517321Z" level=info msg="Forcibly stopping sandbox \"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\"" Sep 4 17:33:51.898878 containerd[1436]: 2024-09-04 17:33:51.855 [WARNING][5072] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ea3fe696-b784-45a8-b851-82c8d5cb101a", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e16e97ce181be9605b4bb9af2dc47f7b92301934ebcefec63e0dde82fc7cc2ab", Pod:"coredns-7db6d8ff4d-rvpjc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali230228c82a3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:51.898878 containerd[1436]: 2024-09-04 17:33:51.856 [INFO][5072] k8s.go 608: Cleaning up netns 
ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" Sep 4 17:33:51.898878 containerd[1436]: 2024-09-04 17:33:51.856 [INFO][5072] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" iface="eth0" netns="" Sep 4 17:33:51.898878 containerd[1436]: 2024-09-04 17:33:51.857 [INFO][5072] k8s.go 615: Releasing IP address(es) ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" Sep 4 17:33:51.898878 containerd[1436]: 2024-09-04 17:33:51.857 [INFO][5072] utils.go 188: Calico CNI releasing IP address ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" Sep 4 17:33:51.898878 containerd[1436]: 2024-09-04 17:33:51.883 [INFO][5079] ipam_plugin.go 417: Releasing address using handleID ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" HandleID="k8s-pod-network.486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" Workload="localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0" Sep 4 17:33:51.898878 containerd[1436]: 2024-09-04 17:33:51.883 [INFO][5079] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:51.898878 containerd[1436]: 2024-09-04 17:33:51.883 [INFO][5079] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:51.898878 containerd[1436]: 2024-09-04 17:33:51.894 [WARNING][5079] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" HandleID="k8s-pod-network.486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" Workload="localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0" Sep 4 17:33:51.898878 containerd[1436]: 2024-09-04 17:33:51.894 [INFO][5079] ipam_plugin.go 445: Releasing address using workloadID ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" HandleID="k8s-pod-network.486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" Workload="localhost-k8s-coredns--7db6d8ff4d--rvpjc-eth0" Sep 4 17:33:51.898878 containerd[1436]: 2024-09-04 17:33:51.895 [INFO][5079] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:51.898878 containerd[1436]: 2024-09-04 17:33:51.897 [INFO][5072] k8s.go 621: Teardown processing complete. ContainerID="486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374" Sep 4 17:33:51.899676 containerd[1436]: time="2024-09-04T17:33:51.898918217Z" level=info msg="TearDown network for sandbox \"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\" successfully" Sep 4 17:33:51.904958 containerd[1436]: time="2024-09-04T17:33:51.904898991Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:33:51.906814 containerd[1436]: time="2024-09-04T17:33:51.905026048Z" level=info msg="RemovePodSandbox \"486983729b6979042e2560087542c4bcdb4a321e9e4d61e613617fa1296d0374\" returns successfully" Sep 4 17:33:51.907411 containerd[1436]: time="2024-09-04T17:33:51.907380688Z" level=info msg="StopPodSandbox for \"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\"" Sep 4 17:33:51.986425 containerd[1436]: 2024-09-04 17:33:51.948 [WARNING][5102] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a2630a53-8a70-4d90-bf19-8204dcfa5313", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8", Pod:"coredns-7db6d8ff4d-nkntq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid079d1c71b2", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:51.986425 containerd[1436]: 2024-09-04 17:33:51.948 [INFO][5102] k8s.go 608: Cleaning up netns ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" Sep 4 17:33:51.986425 containerd[1436]: 2024-09-04 17:33:51.948 [INFO][5102] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" iface="eth0" netns="" Sep 4 17:33:51.986425 containerd[1436]: 2024-09-04 17:33:51.948 [INFO][5102] k8s.go 615: Releasing IP address(es) ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" Sep 4 17:33:51.986425 containerd[1436]: 2024-09-04 17:33:51.948 [INFO][5102] utils.go 188: Calico CNI releasing IP address ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" Sep 4 17:33:51.986425 containerd[1436]: 2024-09-04 17:33:51.971 [INFO][5110] ipam_plugin.go 417: Releasing address using handleID ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" HandleID="k8s-pod-network.92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" Workload="localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0" Sep 4 17:33:51.986425 containerd[1436]: 2024-09-04 17:33:51.971 [INFO][5110] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:51.986425 containerd[1436]: 2024-09-04 17:33:51.971 [INFO][5110] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:33:51.986425 containerd[1436]: 2024-09-04 17:33:51.980 [WARNING][5110] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" HandleID="k8s-pod-network.92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" Workload="localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0" Sep 4 17:33:51.986425 containerd[1436]: 2024-09-04 17:33:51.980 [INFO][5110] ipam_plugin.go 445: Releasing address using workloadID ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" HandleID="k8s-pod-network.92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" Workload="localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0" Sep 4 17:33:51.986425 containerd[1436]: 2024-09-04 17:33:51.983 [INFO][5110] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:51.986425 containerd[1436]: 2024-09-04 17:33:51.984 [INFO][5102] k8s.go 621: Teardown processing complete. ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" Sep 4 17:33:51.986425 containerd[1436]: time="2024-09-04T17:33:51.986356398Z" level=info msg="TearDown network for sandbox \"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\" successfully" Sep 4 17:33:51.986425 containerd[1436]: time="2024-09-04T17:33:51.986383762Z" level=info msg="StopPodSandbox for \"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\" returns successfully" Sep 4 17:33:51.986999 containerd[1436]: time="2024-09-04T17:33:51.986832863Z" level=info msg="RemovePodSandbox for \"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\"" Sep 4 17:33:51.986999 containerd[1436]: time="2024-09-04T17:33:51.986862827Z" level=info msg="Forcibly stopping sandbox \"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\"" Sep 4 17:33:52.064844 containerd[1436]: 2024-09-04 17:33:52.026 [WARNING][5133] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a2630a53-8a70-4d90-bf19-8204dcfa5313", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"256e6a832a6805009398392f74fa84bb0c28dc8222690c6905dd5d2accf3c4c8", Pod:"coredns-7db6d8ff4d-nkntq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid079d1c71b2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:52.064844 containerd[1436]: 2024-09-04 17:33:52.026 [INFO][5133] k8s.go 608: 
Cleaning up netns ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" Sep 4 17:33:52.064844 containerd[1436]: 2024-09-04 17:33:52.027 [INFO][5133] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" iface="eth0" netns="" Sep 4 17:33:52.064844 containerd[1436]: 2024-09-04 17:33:52.027 [INFO][5133] k8s.go 615: Releasing IP address(es) ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" Sep 4 17:33:52.064844 containerd[1436]: 2024-09-04 17:33:52.027 [INFO][5133] utils.go 188: Calico CNI releasing IP address ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" Sep 4 17:33:52.064844 containerd[1436]: 2024-09-04 17:33:52.049 [INFO][5142] ipam_plugin.go 417: Releasing address using handleID ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" HandleID="k8s-pod-network.92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" Workload="localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0" Sep 4 17:33:52.064844 containerd[1436]: 2024-09-04 17:33:52.049 [INFO][5142] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:52.064844 containerd[1436]: 2024-09-04 17:33:52.049 [INFO][5142] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:52.064844 containerd[1436]: 2024-09-04 17:33:52.058 [WARNING][5142] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" HandleID="k8s-pod-network.92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" Workload="localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0" Sep 4 17:33:52.064844 containerd[1436]: 2024-09-04 17:33:52.058 [INFO][5142] ipam_plugin.go 445: Releasing address using workloadID ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" HandleID="k8s-pod-network.92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" Workload="localhost-k8s-coredns--7db6d8ff4d--nkntq-eth0" Sep 4 17:33:52.064844 containerd[1436]: 2024-09-04 17:33:52.060 [INFO][5142] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:52.064844 containerd[1436]: 2024-09-04 17:33:52.062 [INFO][5133] k8s.go 621: Teardown processing complete. ContainerID="92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953" Sep 4 17:33:52.065344 containerd[1436]: time="2024-09-04T17:33:52.064895455Z" level=info msg="TearDown network for sandbox \"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\" successfully" Sep 4 17:33:52.068865 containerd[1436]: time="2024-09-04T17:33:52.068811144Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:33:52.068991 containerd[1436]: time="2024-09-04T17:33:52.068898275Z" level=info msg="RemovePodSandbox \"92518523b0782589f36457fe45a474be91a5ac193649dbdcaa91e173fec70953\" returns successfully" Sep 4 17:33:52.069545 containerd[1436]: time="2024-09-04T17:33:52.069511518Z" level=info msg="StopPodSandbox for \"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\"" Sep 4 17:33:52.152796 containerd[1436]: 2024-09-04 17:33:52.114 [WARNING][5165] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0", GenerateName:"calico-kube-controllers-545cd5b56d-", Namespace:"calico-system", SelfLink:"", UID:"8d8af27a-0105-4c20-9900-7445cf2f532f", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"545cd5b56d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9", Pod:"calico-kube-controllers-545cd5b56d-dcn5l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1900dbc9f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:52.152796 containerd[1436]: 2024-09-04 17:33:52.115 [INFO][5165] k8s.go 608: Cleaning up netns ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" Sep 4 17:33:52.152796 containerd[1436]: 2024-09-04 17:33:52.115 [INFO][5165] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" iface="eth0" netns="" Sep 4 17:33:52.152796 containerd[1436]: 2024-09-04 17:33:52.115 [INFO][5165] k8s.go 615: Releasing IP address(es) ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" Sep 4 17:33:52.152796 containerd[1436]: 2024-09-04 17:33:52.115 [INFO][5165] utils.go 188: Calico CNI releasing IP address ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" Sep 4 17:33:52.152796 containerd[1436]: 2024-09-04 17:33:52.133 [INFO][5173] ipam_plugin.go 417: Releasing address using handleID ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" HandleID="k8s-pod-network.527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" Workload="localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0" Sep 4 17:33:52.152796 containerd[1436]: 2024-09-04 17:33:52.133 [INFO][5173] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:52.152796 containerd[1436]: 2024-09-04 17:33:52.134 [INFO][5173] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:52.152796 containerd[1436]: 2024-09-04 17:33:52.146 [WARNING][5173] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" HandleID="k8s-pod-network.527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" Workload="localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0" Sep 4 17:33:52.152796 containerd[1436]: 2024-09-04 17:33:52.146 [INFO][5173] ipam_plugin.go 445: Releasing address using workloadID ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" HandleID="k8s-pod-network.527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" Workload="localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0" Sep 4 17:33:52.152796 containerd[1436]: 2024-09-04 17:33:52.148 [INFO][5173] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:52.152796 containerd[1436]: 2024-09-04 17:33:52.150 [INFO][5165] k8s.go 621: Teardown processing complete. ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" Sep 4 17:33:52.152796 containerd[1436]: time="2024-09-04T17:33:52.152832724Z" level=info msg="TearDown network for sandbox \"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\" successfully" Sep 4 17:33:52.152796 containerd[1436]: time="2024-09-04T17:33:52.152913095Z" level=info msg="StopPodSandbox for \"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\" returns successfully" Sep 4 17:33:52.153828 containerd[1436]: time="2024-09-04T17:33:52.153382758Z" level=info msg="RemovePodSandbox for \"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\"" Sep 4 17:33:52.153828 containerd[1436]: time="2024-09-04T17:33:52.153412282Z" level=info msg="Forcibly stopping sandbox \"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\"" Sep 4 17:33:52.232417 containerd[1436]: 2024-09-04 17:33:52.191 [WARNING][5196] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0", GenerateName:"calico-kube-controllers-545cd5b56d-", Namespace:"calico-system", SelfLink:"", UID:"8d8af27a-0105-4c20-9900-7445cf2f532f", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"545cd5b56d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3935517439a5149f69840d4829f19be87f0fd9ee34f534d19c62912658c491c9", Pod:"calico-kube-controllers-545cd5b56d-dcn5l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic1900dbc9f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:52.232417 containerd[1436]: 2024-09-04 17:33:52.191 [INFO][5196] k8s.go 608: Cleaning up netns ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" Sep 4 17:33:52.232417 containerd[1436]: 2024-09-04 17:33:52.191 [INFO][5196] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" iface="eth0" netns="" Sep 4 17:33:52.232417 containerd[1436]: 2024-09-04 17:33:52.191 [INFO][5196] k8s.go 615: Releasing IP address(es) ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" Sep 4 17:33:52.232417 containerd[1436]: 2024-09-04 17:33:52.191 [INFO][5196] utils.go 188: Calico CNI releasing IP address ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" Sep 4 17:33:52.232417 containerd[1436]: 2024-09-04 17:33:52.215 [INFO][5203] ipam_plugin.go 417: Releasing address using handleID ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" HandleID="k8s-pod-network.527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" Workload="localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0" Sep 4 17:33:52.232417 containerd[1436]: 2024-09-04 17:33:52.216 [INFO][5203] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:52.232417 containerd[1436]: 2024-09-04 17:33:52.216 [INFO][5203] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:52.232417 containerd[1436]: 2024-09-04 17:33:52.225 [WARNING][5203] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" HandleID="k8s-pod-network.527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" Workload="localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0" Sep 4 17:33:52.232417 containerd[1436]: 2024-09-04 17:33:52.225 [INFO][5203] ipam_plugin.go 445: Releasing address using workloadID ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" HandleID="k8s-pod-network.527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" Workload="localhost-k8s-calico--kube--controllers--545cd5b56d--dcn5l-eth0" Sep 4 17:33:52.232417 containerd[1436]: 2024-09-04 17:33:52.228 [INFO][5203] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:52.232417 containerd[1436]: 2024-09-04 17:33:52.230 [INFO][5196] k8s.go 621: Teardown processing complete. ContainerID="527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68" Sep 4 17:33:52.233484 containerd[1436]: time="2024-09-04T17:33:52.232921933Z" level=info msg="TearDown network for sandbox \"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\" successfully" Sep 4 17:33:52.236851 containerd[1436]: time="2024-09-04T17:33:52.236649996Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:33:52.236851 containerd[1436]: time="2024-09-04T17:33:52.236759371Z" level=info msg="RemovePodSandbox \"527eef077ac54640302148989ba004065fd8acc15053a137e6206e48f121ff68\" returns successfully" Sep 4 17:33:52.239661 containerd[1436]: time="2024-09-04T17:33:52.239446974Z" level=info msg="StopPodSandbox for \"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\"" Sep 4 17:33:52.342719 containerd[1436]: 2024-09-04 17:33:52.286 [WARNING][5226] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dwclf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c22bd57a-c877-45b8-9da9-64ec19c9aeb3", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09", Pod:"csi-node-driver-dwclf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"calie7a526c7dd2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:52.342719 containerd[1436]: 2024-09-04 17:33:52.286 [INFO][5226] k8s.go 608: Cleaning up netns ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" Sep 4 17:33:52.342719 containerd[1436]: 2024-09-04 17:33:52.286 [INFO][5226] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" iface="eth0" netns="" Sep 4 17:33:52.342719 containerd[1436]: 2024-09-04 17:33:52.286 [INFO][5226] k8s.go 615: Releasing IP address(es) ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" Sep 4 17:33:52.342719 containerd[1436]: 2024-09-04 17:33:52.286 [INFO][5226] utils.go 188: Calico CNI releasing IP address ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" Sep 4 17:33:52.342719 containerd[1436]: 2024-09-04 17:33:52.320 [INFO][5234] ipam_plugin.go 417: Releasing address using handleID ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" HandleID="k8s-pod-network.033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" Workload="localhost-k8s-csi--node--driver--dwclf-eth0" Sep 4 17:33:52.342719 containerd[1436]: 2024-09-04 17:33:52.320 [INFO][5234] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:52.342719 containerd[1436]: 2024-09-04 17:33:52.320 [INFO][5234] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:52.342719 containerd[1436]: 2024-09-04 17:33:52.336 [WARNING][5234] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" HandleID="k8s-pod-network.033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" Workload="localhost-k8s-csi--node--driver--dwclf-eth0" Sep 4 17:33:52.342719 containerd[1436]: 2024-09-04 17:33:52.336 [INFO][5234] ipam_plugin.go 445: Releasing address using workloadID ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" HandleID="k8s-pod-network.033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" Workload="localhost-k8s-csi--node--driver--dwclf-eth0" Sep 4 17:33:52.342719 containerd[1436]: 2024-09-04 17:33:52.338 [INFO][5234] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:33:52.342719 containerd[1436]: 2024-09-04 17:33:52.340 [INFO][5226] k8s.go 621: Teardown processing complete. ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" Sep 4 17:33:52.343228 containerd[1436]: time="2024-09-04T17:33:52.342762158Z" level=info msg="TearDown network for sandbox \"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\" successfully" Sep 4 17:33:52.343228 containerd[1436]: time="2024-09-04T17:33:52.342789922Z" level=info msg="StopPodSandbox for \"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\" returns successfully" Sep 4 17:33:52.343387 containerd[1436]: time="2024-09-04T17:33:52.343337715Z" level=info msg="RemovePodSandbox for \"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\"" Sep 4 17:33:52.343431 containerd[1436]: time="2024-09-04T17:33:52.343373040Z" level=info msg="Forcibly stopping sandbox \"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\"" Sep 4 17:33:52.417174 containerd[1436]: 2024-09-04 17:33:52.381 [WARNING][5257] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dwclf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c22bd57a-c877-45b8-9da9-64ec19c9aeb3", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 33, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc4f8513846a3a3a0af093a80c404259160a84a9e4db732790f938b7a1644e09", Pod:"csi-node-driver-dwclf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie7a526c7dd2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:33:52.417174 containerd[1436]: 2024-09-04 17:33:52.381 [INFO][5257] k8s.go 608: Cleaning up netns ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" Sep 4 17:33:52.417174 containerd[1436]: 2024-09-04 17:33:52.381 [INFO][5257] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" iface="eth0" netns="" Sep 4 17:33:52.417174 containerd[1436]: 2024-09-04 17:33:52.381 [INFO][5257] k8s.go 615: Releasing IP address(es) ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" Sep 4 17:33:52.417174 containerd[1436]: 2024-09-04 17:33:52.381 [INFO][5257] utils.go 188: Calico CNI releasing IP address ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" Sep 4 17:33:52.417174 containerd[1436]: 2024-09-04 17:33:52.402 [INFO][5264] ipam_plugin.go 417: Releasing address using handleID ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" HandleID="k8s-pod-network.033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" Workload="localhost-k8s-csi--node--driver--dwclf-eth0" Sep 4 17:33:52.417174 containerd[1436]: 2024-09-04 17:33:52.402 [INFO][5264] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:33:52.417174 containerd[1436]: 2024-09-04 17:33:52.402 [INFO][5264] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:33:52.417174 containerd[1436]: 2024-09-04 17:33:52.411 [WARNING][5264] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" HandleID="k8s-pod-network.033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" Workload="localhost-k8s-csi--node--driver--dwclf-eth0" Sep 4 17:33:52.417174 containerd[1436]: 2024-09-04 17:33:52.411 [INFO][5264] ipam_plugin.go 445: Releasing address using workloadID ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" HandleID="k8s-pod-network.033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" Workload="localhost-k8s-csi--node--driver--dwclf-eth0" Sep 4 17:33:52.417174 containerd[1436]: 2024-09-04 17:33:52.413 [INFO][5264] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:33:52.417174 containerd[1436]: 2024-09-04 17:33:52.414 [INFO][5257] k8s.go 621: Teardown processing complete. ContainerID="033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7" Sep 4 17:33:52.417174 containerd[1436]: time="2024-09-04T17:33:52.417083789Z" level=info msg="TearDown network for sandbox \"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\" successfully" Sep 4 17:33:52.419902 containerd[1436]: time="2024-09-04T17:33:52.419861404Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:33:52.419902 containerd[1436]: time="2024-09-04T17:33:52.419930093Z" level=info msg="RemovePodSandbox \"033c71adc4e1ad9ea75ddbd6693a8c5a5ace8f6ffe34b2f72999ffec903344f7\" returns successfully" Sep 4 17:33:56.252258 systemd[1]: Started sshd@14-10.0.0.119:22-10.0.0.1:56424.service - OpenSSH per-connection server daemon (10.0.0.1:56424). Sep 4 17:33:56.299639 sshd[5305]: Accepted publickey for core from 10.0.0.1 port 56424 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:33:56.301137 sshd[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:56.308044 systemd-logind[1418]: New session 15 of user core. Sep 4 17:33:56.319764 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:33:56.471297 sshd[5305]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:56.479179 systemd[1]: sshd@14-10.0.0.119:22-10.0.0.1:56424.service: Deactivated successfully. Sep 4 17:33:56.481537 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:33:56.483046 systemd-logind[1418]: Session 15 logged out. Waiting for processes to exit. 
Sep 4 17:33:56.488960 systemd[1]: Started sshd@15-10.0.0.119:22-10.0.0.1:56440.service - OpenSSH per-connection server daemon (10.0.0.1:56440). Sep 4 17:33:56.491107 systemd-logind[1418]: Removed session 15. Sep 4 17:33:56.528442 sshd[5320]: Accepted publickey for core from 10.0.0.1 port 56440 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:33:56.529620 sshd[5320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:56.535047 systemd-logind[1418]: New session 16 of user core. Sep 4 17:33:56.544096 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:33:56.821210 sshd[5320]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:56.828342 systemd[1]: sshd@15-10.0.0.119:22-10.0.0.1:56440.service: Deactivated successfully. Sep 4 17:33:56.831329 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:33:56.833088 systemd-logind[1418]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:33:56.841275 systemd[1]: Started sshd@16-10.0.0.119:22-10.0.0.1:56450.service - OpenSSH per-connection server daemon (10.0.0.1:56450). Sep 4 17:33:56.844269 systemd-logind[1418]: Removed session 16. Sep 4 17:33:56.882813 sshd[5332]: Accepted publickey for core from 10.0.0.1 port 56450 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:33:56.884470 sshd[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:56.890566 systemd-logind[1418]: New session 17 of user core. Sep 4 17:33:56.901794 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:33:58.291632 sshd[5332]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:58.304791 systemd[1]: sshd@16-10.0.0.119:22-10.0.0.1:56450.service: Deactivated successfully. Sep 4 17:33:58.307412 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:33:58.311618 systemd-logind[1418]: Session 17 logged out. Waiting for processes to exit. 
Sep 4 17:33:58.318967 systemd[1]: Started sshd@17-10.0.0.119:22-10.0.0.1:56462.service - OpenSSH per-connection server daemon (10.0.0.1:56462). Sep 4 17:33:58.320820 systemd-logind[1418]: Removed session 17. Sep 4 17:33:58.364183 sshd[5353]: Accepted publickey for core from 10.0.0.1 port 56462 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:33:58.364720 sshd[5353]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:58.369057 systemd-logind[1418]: New session 18 of user core. Sep 4 17:33:58.377794 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:33:58.713690 sshd[5353]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:58.723269 systemd[1]: sshd@17-10.0.0.119:22-10.0.0.1:56462.service: Deactivated successfully. Sep 4 17:33:58.726977 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:33:58.729020 systemd-logind[1418]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:33:58.740241 systemd[1]: Started sshd@18-10.0.0.119:22-10.0.0.1:56472.service - OpenSSH per-connection server daemon (10.0.0.1:56472). Sep 4 17:33:58.741232 systemd-logind[1418]: Removed session 18. Sep 4 17:33:58.774276 sshd[5366]: Accepted publickey for core from 10.0.0.1 port 56472 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:33:58.775698 sshd[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:33:58.779649 systemd-logind[1418]: New session 19 of user core. Sep 4 17:33:58.792740 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:33:58.921432 sshd[5366]: pam_unix(sshd:session): session closed for user core Sep 4 17:33:58.924668 systemd[1]: sshd@18-10.0.0.119:22-10.0.0.1:56472.service: Deactivated successfully. Sep 4 17:33:58.926722 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:33:58.927498 systemd-logind[1418]: Session 19 logged out. Waiting for processes to exit. 
Sep 4 17:33:58.928366 systemd-logind[1418]: Removed session 19. Sep 4 17:33:59.707009 kubelet[2521]: E0904 17:33:59.706615 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:34:03.934404 systemd[1]: Started sshd@19-10.0.0.119:22-10.0.0.1:32922.service - OpenSSH per-connection server daemon (10.0.0.1:32922). Sep 4 17:34:03.986441 sshd[5384]: Accepted publickey for core from 10.0.0.1 port 32922 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:34:03.988117 sshd[5384]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:03.992836 systemd-logind[1418]: New session 20 of user core. Sep 4 17:34:03.998942 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:34:04.195336 sshd[5384]: pam_unix(sshd:session): session closed for user core Sep 4 17:34:04.203260 systemd[1]: sshd@19-10.0.0.119:22-10.0.0.1:32922.service: Deactivated successfully. Sep 4 17:34:04.206819 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:34:04.209385 systemd-logind[1418]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:34:04.210409 systemd-logind[1418]: Removed session 20. Sep 4 17:34:04.468412 kubelet[2521]: I0904 17:34:04.466955 2521 topology_manager.go:215] "Topology Admit Handler" podUID="f2b67990-36ca-4ec8-880d-f1719fe3938c" podNamespace="calico-apiserver" podName="calico-apiserver-76fdb8c5fd-kqkc4" Sep 4 17:34:04.479124 systemd[1]: Created slice kubepods-besteffort-podf2b67990_36ca_4ec8_880d_f1719fe3938c.slice - libcontainer container kubepods-besteffort-podf2b67990_36ca_4ec8_880d_f1719fe3938c.slice. 
Sep 4 17:34:04.596714 kubelet[2521]: I0904 17:34:04.596674 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f2b67990-36ca-4ec8-880d-f1719fe3938c-calico-apiserver-certs\") pod \"calico-apiserver-76fdb8c5fd-kqkc4\" (UID: \"f2b67990-36ca-4ec8-880d-f1719fe3938c\") " pod="calico-apiserver/calico-apiserver-76fdb8c5fd-kqkc4" Sep 4 17:34:04.597484 kubelet[2521]: I0904 17:34:04.597457 2521 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5ss6\" (UniqueName: \"kubernetes.io/projected/f2b67990-36ca-4ec8-880d-f1719fe3938c-kube-api-access-c5ss6\") pod \"calico-apiserver-76fdb8c5fd-kqkc4\" (UID: \"f2b67990-36ca-4ec8-880d-f1719fe3938c\") " pod="calico-apiserver/calico-apiserver-76fdb8c5fd-kqkc4" Sep 4 17:34:04.784664 containerd[1436]: time="2024-09-04T17:34:04.784534988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76fdb8c5fd-kqkc4,Uid:f2b67990-36ca-4ec8-880d-f1719fe3938c,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:34:04.923665 systemd-networkd[1375]: calif01a1fad188: Link UP Sep 4 17:34:04.923908 systemd-networkd[1375]: calif01a1fad188: Gained carrier Sep 4 17:34:04.938705 containerd[1436]: 2024-09-04 17:34:04.838 [INFO][5411] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76fdb8c5fd--kqkc4-eth0 calico-apiserver-76fdb8c5fd- calico-apiserver f2b67990-36ca-4ec8-880d-f1719fe3938c 1070 0 2024-09-04 17:34:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76fdb8c5fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76fdb8c5fd-kqkc4 eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] calif01a1fad188 [] []}} ContainerID="a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3" Namespace="calico-apiserver" Pod="calico-apiserver-76fdb8c5fd-kqkc4" WorkloadEndpoint="localhost-k8s-calico--apiserver--76fdb8c5fd--kqkc4-" Sep 4 17:34:04.938705 containerd[1436]: 2024-09-04 17:34:04.838 [INFO][5411] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3" Namespace="calico-apiserver" Pod="calico-apiserver-76fdb8c5fd-kqkc4" WorkloadEndpoint="localhost-k8s-calico--apiserver--76fdb8c5fd--kqkc4-eth0" Sep 4 17:34:04.938705 containerd[1436]: 2024-09-04 17:34:04.873 [INFO][5425] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3" HandleID="k8s-pod-network.a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3" Workload="localhost-k8s-calico--apiserver--76fdb8c5fd--kqkc4-eth0" Sep 4 17:34:04.938705 containerd[1436]: 2024-09-04 17:34:04.886 [INFO][5425] ipam_plugin.go 270: Auto assigning IP ContainerID="a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3" HandleID="k8s-pod-network.a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3" Workload="localhost-k8s-calico--apiserver--76fdb8c5fd--kqkc4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003063f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76fdb8c5fd-kqkc4", "timestamp":"2024-09-04 17:34:04.873965113 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:34:04.938705 containerd[1436]: 2024-09-04 17:34:04.886 [INFO][5425] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 17:34:04.938705 containerd[1436]: 2024-09-04 17:34:04.886 [INFO][5425] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:34:04.938705 containerd[1436]: 2024-09-04 17:34:04.886 [INFO][5425] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 4 17:34:04.938705 containerd[1436]: 2024-09-04 17:34:04.892 [INFO][5425] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3" host="localhost" Sep 4 17:34:04.938705 containerd[1436]: 2024-09-04 17:34:04.896 [INFO][5425] ipam.go 372: Looking up existing affinities for host host="localhost" Sep 4 17:34:04.938705 containerd[1436]: 2024-09-04 17:34:04.902 [INFO][5425] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Sep 4 17:34:04.938705 containerd[1436]: 2024-09-04 17:34:04.904 [INFO][5425] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 4 17:34:04.938705 containerd[1436]: 2024-09-04 17:34:04.907 [INFO][5425] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 4 17:34:04.938705 containerd[1436]: 2024-09-04 17:34:04.907 [INFO][5425] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3" host="localhost" Sep 4 17:34:04.938705 containerd[1436]: 2024-09-04 17:34:04.908 [INFO][5425] ipam.go 1685: Creating new handle: k8s-pod-network.a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3 Sep 4 17:34:04.938705 containerd[1436]: 2024-09-04 17:34:04.912 [INFO][5425] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3" host="localhost" Sep 4 17:34:04.938705 containerd[1436]: 2024-09-04 17:34:04.918 [INFO][5425] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] 
block=192.168.88.128/26 handle="k8s-pod-network.a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3" host="localhost" Sep 4 17:34:04.938705 containerd[1436]: 2024-09-04 17:34:04.919 [INFO][5425] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3" host="localhost" Sep 4 17:34:04.938705 containerd[1436]: 2024-09-04 17:34:04.919 [INFO][5425] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:34:04.938705 containerd[1436]: 2024-09-04 17:34:04.919 [INFO][5425] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3" HandleID="k8s-pod-network.a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3" Workload="localhost-k8s-calico--apiserver--76fdb8c5fd--kqkc4-eth0" Sep 4 17:34:04.939524 containerd[1436]: 2024-09-04 17:34:04.922 [INFO][5411] k8s.go 386: Populated endpoint ContainerID="a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3" Namespace="calico-apiserver" Pod="calico-apiserver-76fdb8c5fd-kqkc4" WorkloadEndpoint="localhost-k8s-calico--apiserver--76fdb8c5fd--kqkc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76fdb8c5fd--kqkc4-eth0", GenerateName:"calico-apiserver-76fdb8c5fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2b67990-36ca-4ec8-880d-f1719fe3938c", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 34, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76fdb8c5fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76fdb8c5fd-kqkc4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif01a1fad188", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:34:04.939524 containerd[1436]: 2024-09-04 17:34:04.922 [INFO][5411] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3" Namespace="calico-apiserver" Pod="calico-apiserver-76fdb8c5fd-kqkc4" WorkloadEndpoint="localhost-k8s-calico--apiserver--76fdb8c5fd--kqkc4-eth0" Sep 4 17:34:04.939524 containerd[1436]: 2024-09-04 17:34:04.922 [INFO][5411] dataplane_linux.go 68: Setting the host side veth name to calif01a1fad188 ContainerID="a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3" Namespace="calico-apiserver" Pod="calico-apiserver-76fdb8c5fd-kqkc4" WorkloadEndpoint="localhost-k8s-calico--apiserver--76fdb8c5fd--kqkc4-eth0" Sep 4 17:34:04.939524 containerd[1436]: 2024-09-04 17:34:04.925 [INFO][5411] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3" Namespace="calico-apiserver" Pod="calico-apiserver-76fdb8c5fd-kqkc4" WorkloadEndpoint="localhost-k8s-calico--apiserver--76fdb8c5fd--kqkc4-eth0" Sep 4 17:34:04.939524 containerd[1436]: 2024-09-04 17:34:04.925 [INFO][5411] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3" Namespace="calico-apiserver" Pod="calico-apiserver-76fdb8c5fd-kqkc4" WorkloadEndpoint="localhost-k8s-calico--apiserver--76fdb8c5fd--kqkc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76fdb8c5fd--kqkc4-eth0", GenerateName:"calico-apiserver-76fdb8c5fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2b67990-36ca-4ec8-880d-f1719fe3938c", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 34, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76fdb8c5fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3", Pod:"calico-apiserver-76fdb8c5fd-kqkc4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif01a1fad188", MAC:"9e:ab:e1:e7:4b:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:34:04.939524 containerd[1436]: 2024-09-04 17:34:04.935 [INFO][5411] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3" Namespace="calico-apiserver" 
Pod="calico-apiserver-76fdb8c5fd-kqkc4" WorkloadEndpoint="localhost-k8s-calico--apiserver--76fdb8c5fd--kqkc4-eth0" Sep 4 17:34:04.966348 containerd[1436]: time="2024-09-04T17:34:04.966208912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:34:04.966348 containerd[1436]: time="2024-09-04T17:34:04.966286305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:34:04.966348 containerd[1436]: time="2024-09-04T17:34:04.966308143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:34:04.966348 containerd[1436]: time="2024-09-04T17:34:04.966319302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:34:04.998820 systemd[1]: Started cri-containerd-a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3.scope - libcontainer container a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3. 
Sep 4 17:34:05.011341 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 4 17:34:05.032887 containerd[1436]: time="2024-09-04T17:34:05.032810448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76fdb8c5fd-kqkc4,Uid:f2b67990-36ca-4ec8-880d-f1719fe3938c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3\"" Sep 4 17:34:05.035308 containerd[1436]: time="2024-09-04T17:34:05.034862720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:34:05.707277 kubelet[2521]: E0904 17:34:05.706923 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 4 17:34:06.506471 containerd[1436]: time="2024-09-04T17:34:06.506404746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:06.509743 containerd[1436]: time="2024-09-04T17:34:06.509649177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=37849884" Sep 4 17:34:06.510588 containerd[1436]: time="2024-09-04T17:34:06.510506951Z" level=info msg="ImageCreate event name:\"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:06.515996 containerd[1436]: time="2024-09-04T17:34:06.515928575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:34:06.517386 containerd[1436]: time="2024-09-04T17:34:06.517236794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id 
\"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 1.482281482s" Sep 4 17:34:06.517386 containerd[1436]: time="2024-09-04T17:34:06.517273591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\"" Sep 4 17:34:06.520663 containerd[1436]: time="2024-09-04T17:34:06.520629373Z" level=info msg="CreateContainer within sandbox \"a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:34:06.558076 containerd[1436]: time="2024-09-04T17:34:06.557949626Z" level=info msg="CreateContainer within sandbox \"a5f9bee6db8907f776dfca44db7c0ea891e18159d06cc6018fc459ece1d67bf3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bb50afeeea88d68050ea9f95d8ecd1825b15e6b6120586a1b25231995cfa0396\"" Sep 4 17:34:06.559608 containerd[1436]: time="2024-09-04T17:34:06.558642133Z" level=info msg="StartContainer for \"bb50afeeea88d68050ea9f95d8ecd1825b15e6b6120586a1b25231995cfa0396\"" Sep 4 17:34:06.596776 systemd[1]: Started cri-containerd-bb50afeeea88d68050ea9f95d8ecd1825b15e6b6120586a1b25231995cfa0396.scope - libcontainer container bb50afeeea88d68050ea9f95d8ecd1825b15e6b6120586a1b25231995cfa0396. 
Sep 4 17:34:06.637132 containerd[1436]: time="2024-09-04T17:34:06.637086147Z" level=info msg="StartContainer for \"bb50afeeea88d68050ea9f95d8ecd1825b15e6b6120586a1b25231995cfa0396\" returns successfully" Sep 4 17:34:06.899156 systemd-networkd[1375]: calif01a1fad188: Gained IPv6LL Sep 4 17:34:06.995924 kubelet[2521]: I0904 17:34:06.995605 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76fdb8c5fd-kqkc4" podStartSLOduration=1.511048299 podStartE2EDuration="2.995568167s" podCreationTimestamp="2024-09-04 17:34:04 +0000 UTC" firstStartedPulling="2024-09-04 17:34:05.034557905 +0000 UTC m=+73.439679009" lastFinishedPulling="2024-09-04 17:34:06.519077773 +0000 UTC m=+74.924198877" observedRunningTime="2024-09-04 17:34:06.995343184 +0000 UTC m=+75.400464288" watchObservedRunningTime="2024-09-04 17:34:06.995568167 +0000 UTC m=+75.400689271" Sep 4 17:34:09.212470 systemd[1]: Started sshd@20-10.0.0.119:22-10.0.0.1:32926.service - OpenSSH per-connection server daemon (10.0.0.1:32926). Sep 4 17:34:09.285622 sshd[5546]: Accepted publickey for core from 10.0.0.1 port 32926 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:34:09.288935 sshd[5546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:09.293313 systemd-logind[1418]: New session 21 of user core. Sep 4 17:34:09.306831 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:34:09.506145 sshd[5546]: pam_unix(sshd:session): session closed for user core Sep 4 17:34:09.510006 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:34:09.511739 systemd[1]: sshd@20-10.0.0.119:22-10.0.0.1:32926.service: Deactivated successfully. Sep 4 17:34:09.515384 systemd-logind[1418]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:34:09.516541 systemd-logind[1418]: Removed session 21. 
Sep 4 17:34:14.516402 systemd[1]: Started sshd@21-10.0.0.119:22-10.0.0.1:41264.service - OpenSSH per-connection server daemon (10.0.0.1:41264). Sep 4 17:34:14.555203 sshd[5567]: Accepted publickey for core from 10.0.0.1 port 41264 ssh2: RSA SHA256:TcdII3DD+/vh6fGiZDuqtLwdsO9LHnvXRMQO7IdpdiA Sep 4 17:34:14.556351 sshd[5567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:34:14.560352 systemd-logind[1418]: New session 22 of user core. Sep 4 17:34:14.569215 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:34:14.759156 sshd[5567]: pam_unix(sshd:session): session closed for user core Sep 4 17:34:14.762917 systemd[1]: sshd@21-10.0.0.119:22-10.0.0.1:41264.service: Deactivated successfully. Sep 4 17:34:14.764656 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:34:14.766067 systemd-logind[1418]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:34:14.768062 systemd-logind[1418]: Removed session 22.