Jul 2 08:25:09.937645 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 2 08:25:09.937666 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024
Jul 2 08:25:09.937676 kernel: KASLR enabled
Jul 2 08:25:09.937682 kernel: efi: EFI v2.7 by EDK II
Jul 2 08:25:09.937687 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb900018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 2 08:25:09.937693 kernel: random: crng init done
Jul 2 08:25:09.937700 kernel: ACPI: Early table checksum verification disabled
Jul 2 08:25:09.937719 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 2 08:25:09.937726 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 2 08:25:09.937734 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:25:09.937740 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:25:09.937746 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:25:09.937752 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:25:09.937758 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:25:09.937765 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:25:09.937773 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:25:09.937779 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:25:09.937785 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 2 08:25:09.937791 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 2 08:25:09.937798 kernel: NUMA: Failed to initialise from firmware
Jul 2 08:25:09.937804 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 08:25:09.937810 kernel: NUMA: NODE_DATA [mem 0xdc95b800-0xdc960fff]
Jul 2 08:25:09.937817 kernel: Zone ranges:
Jul 2 08:25:09.937823 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 08:25:09.937829 kernel: DMA32 empty
Jul 2 08:25:09.937837 kernel: Normal empty
Jul 2 08:25:09.937843 kernel: Movable zone start for each node
Jul 2 08:25:09.937849 kernel: Early memory node ranges
Jul 2 08:25:09.937855 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 2 08:25:09.937862 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 2 08:25:09.937868 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 2 08:25:09.937874 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 2 08:25:09.937881 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 2 08:25:09.937887 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 2 08:25:09.937893 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 2 08:25:09.937900 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 2 08:25:09.937906 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 2 08:25:09.937914 kernel: psci: probing for conduit method from ACPI.
Jul 2 08:25:09.937920 kernel: psci: PSCIv1.1 detected in firmware.
Jul 2 08:25:09.937926 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 08:25:09.937936 kernel: psci: Trusted OS migration not required
Jul 2 08:25:09.937942 kernel: psci: SMC Calling Convention v1.1
Jul 2 08:25:09.937949 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 2 08:25:09.937957 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Jul 2 08:25:09.937964 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Jul 2 08:25:09.937971 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 2 08:25:09.937977 kernel: Detected PIPT I-cache on CPU0
Jul 2 08:25:09.937984 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 08:25:09.937991 kernel: CPU features: detected: Hardware dirty bit management
Jul 2 08:25:09.937997 kernel: CPU features: detected: Spectre-v4
Jul 2 08:25:09.938004 kernel: CPU features: detected: Spectre-BHB
Jul 2 08:25:09.938010 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 2 08:25:09.938017 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 2 08:25:09.938025 kernel: CPU features: detected: ARM erratum 1418040
Jul 2 08:25:09.938031 kernel: alternatives: applying boot alternatives
Jul 2 08:25:09.938039 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=19e11d11f09b621c4c7d739b39b57f4bac8caa3f9723d7ceb0e9d7c7445769b7
Jul 2 08:25:09.938046 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 08:25:09.938053 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 08:25:09.938060 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 08:25:09.938066 kernel: Fallback order for Node 0: 0
Jul 2 08:25:09.938073 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 2 08:25:09.938080 kernel: Policy zone: DMA
Jul 2 08:25:09.938087 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 08:25:09.938093 kernel: software IO TLB: area num 4.
Jul 2 08:25:09.938102 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 2 08:25:09.938109 kernel: Memory: 2386864K/2572288K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 185424K reserved, 0K cma-reserved)
Jul 2 08:25:09.938116 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 2 08:25:09.938122 kernel: trace event string verifier disabled
Jul 2 08:25:09.938129 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 08:25:09.938136 kernel: rcu: RCU event tracing is enabled.
Jul 2 08:25:09.938151 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 2 08:25:09.938158 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 08:25:09.938165 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 08:25:09.938172 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 08:25:09.938178 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 2 08:25:09.938185 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 08:25:09.938194 kernel: GICv3: 256 SPIs implemented
Jul 2 08:25:09.938201 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 08:25:09.938207 kernel: Root IRQ handler: gic_handle_irq
Jul 2 08:25:09.938214 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 2 08:25:09.938221 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 2 08:25:09.938227 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 2 08:25:09.938234 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
Jul 2 08:25:09.938241 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
Jul 2 08:25:09.938247 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 2 08:25:09.938254 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 2 08:25:09.938261 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 08:25:09.938269 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 08:25:09.938276 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 2 08:25:09.938283 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 2 08:25:09.938290 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 2 08:25:09.938297 kernel: arm-pv: using stolen time PV
Jul 2 08:25:09.938304 kernel: Console: colour dummy device 80x25
Jul 2 08:25:09.938311 kernel: ACPI: Core revision 20230628
Jul 2 08:25:09.938318 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 2 08:25:09.938325 kernel: pid_max: default: 32768 minimum: 301
Jul 2 08:25:09.938331 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 08:25:09.938340 kernel: SELinux: Initializing.
Jul 2 08:25:09.938346 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 08:25:09.938353 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 08:25:09.938360 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 08:25:09.938367 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 08:25:09.938374 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 08:25:09.938381 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 08:25:09.938387 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 2 08:25:09.938394 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 2 08:25:09.938402 kernel: Remapping and enabling EFI services.
Jul 2 08:25:09.938409 kernel: smp: Bringing up secondary CPUs ...
Jul 2 08:25:09.938415 kernel: Detected PIPT I-cache on CPU1
Jul 2 08:25:09.938422 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 2 08:25:09.938429 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 2 08:25:09.938436 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 08:25:09.938443 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 2 08:25:09.938450 kernel: Detected PIPT I-cache on CPU2
Jul 2 08:25:09.938457 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 2 08:25:09.938464 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 2 08:25:09.938472 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 08:25:09.938479 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 2 08:25:09.938490 kernel: Detected PIPT I-cache on CPU3
Jul 2 08:25:09.938499 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 2 08:25:09.938506 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 2 08:25:09.938513 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 08:25:09.938521 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 2 08:25:09.938528 kernel: smp: Brought up 1 node, 4 CPUs
Jul 2 08:25:09.938535 kernel: SMP: Total of 4 processors activated.
Jul 2 08:25:09.938544 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 08:25:09.938551 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 2 08:25:09.938558 kernel: CPU features: detected: Common not Private translations
Jul 2 08:25:09.938566 kernel: CPU features: detected: CRC32 instructions
Jul 2 08:25:09.938573 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 2 08:25:09.938580 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 2 08:25:09.938587 kernel: CPU features: detected: LSE atomic instructions
Jul 2 08:25:09.938594 kernel: CPU features: detected: Privileged Access Never
Jul 2 08:25:09.938603 kernel: CPU features: detected: RAS Extension Support
Jul 2 08:25:09.938610 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 2 08:25:09.938617 kernel: CPU: All CPU(s) started at EL1
Jul 2 08:25:09.938624 kernel: alternatives: applying system-wide alternatives
Jul 2 08:25:09.938631 kernel: devtmpfs: initialized
Jul 2 08:25:09.938638 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 08:25:09.938646 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 2 08:25:09.938653 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 08:25:09.938660 kernel: SMBIOS 3.0.0 present.
Jul 2 08:25:09.938669 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 2 08:25:09.938676 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 08:25:09.938683 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 08:25:09.938690 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 08:25:09.938698 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 08:25:09.938727 kernel: audit: initializing netlink subsys (disabled)
Jul 2 08:25:09.938735 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jul 2 08:25:09.938743 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 08:25:09.938760 kernel: cpuidle: using governor menu
Jul 2 08:25:09.938770 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 08:25:09.938777 kernel: ASID allocator initialised with 32768 entries
Jul 2 08:25:09.938785 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 08:25:09.938792 kernel: Serial: AMBA PL011 UART driver
Jul 2 08:25:09.938799 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 2 08:25:09.938806 kernel: Modules: 0 pages in range for non-PLT usage
Jul 2 08:25:09.938813 kernel: Modules: 509120 pages in range for PLT usage
Jul 2 08:25:09.938820 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 08:25:09.938828 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 08:25:09.938836 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 08:25:09.938844 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 2 08:25:09.938851 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 08:25:09.938858 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 08:25:09.938865 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 08:25:09.938872 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 2 08:25:09.938879 kernel: ACPI: Added _OSI(Module Device)
Jul 2 08:25:09.938886 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 08:25:09.938894 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 08:25:09.938902 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 08:25:09.938910 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 08:25:09.938917 kernel: ACPI: Interpreter enabled
Jul 2 08:25:09.938924 kernel: ACPI: Using GIC for interrupt routing
Jul 2 08:25:09.938931 kernel: ACPI: MCFG table detected, 1 entries
Jul 2 08:25:09.938938 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 2 08:25:09.938945 kernel: printk: console [ttyAMA0] enabled
Jul 2 08:25:09.938953 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 08:25:09.939079 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 08:25:09.939160 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 2 08:25:09.939228 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 2 08:25:09.939292 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 2 08:25:09.939356 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 2 08:25:09.939365 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 2 08:25:09.939373 kernel: PCI host bridge to bus 0000:00
Jul 2 08:25:09.939442 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 2 08:25:09.939505 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 2 08:25:09.939564 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 2 08:25:09.939622 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 08:25:09.939699 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 2 08:25:09.939793 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 08:25:09.939861 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 2 08:25:09.939931 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 2 08:25:09.939997 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 08:25:09.940061 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 2 08:25:09.940125 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 2 08:25:09.940198 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 2 08:25:09.940259 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 2 08:25:09.940317 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 2 08:25:09.940381 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 2 08:25:09.940391 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 2 08:25:09.940399 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 2 08:25:09.940406 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 2 08:25:09.940413 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 2 08:25:09.940420 kernel: iommu: Default domain type: Translated
Jul 2 08:25:09.940428 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 08:25:09.940435 kernel: efivars: Registered efivars operations
Jul 2 08:25:09.940442 kernel: vgaarb: loaded
Jul 2 08:25:09.940451 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 08:25:09.940459 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 08:25:09.940466 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 08:25:09.940473 kernel: pnp: PnP ACPI init
Jul 2 08:25:09.940544 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 2 08:25:09.940555 kernel: pnp: PnP ACPI: found 1 devices
Jul 2 08:25:09.940562 kernel: NET: Registered PF_INET protocol family
Jul 2 08:25:09.940569 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 08:25:09.940579 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 08:25:09.940586 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 08:25:09.940594 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 08:25:09.940601 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 2 08:25:09.940609 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 08:25:09.940616 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 08:25:09.940623 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 08:25:09.940630 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 08:25:09.940638 kernel: PCI: CLS 0 bytes, default 64
Jul 2 08:25:09.940646 kernel: kvm [1]: HYP mode not available
Jul 2 08:25:09.940653 kernel: Initialise system trusted keyrings
Jul 2 08:25:09.940660 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 08:25:09.940667 kernel: Key type asymmetric registered
Jul 2 08:25:09.940674 kernel: Asymmetric key parser 'x509' registered
Jul 2 08:25:09.940682 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 2 08:25:09.940689 kernel: io scheduler mq-deadline registered
Jul 2 08:25:09.940696 kernel: io scheduler kyber registered
Jul 2 08:25:09.940703 kernel: io scheduler bfq registered
Jul 2 08:25:09.940720 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 2 08:25:09.940728 kernel: ACPI: button: Power Button [PWRB]
Jul 2 08:25:09.940735 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 2 08:25:09.940802 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 2 08:25:09.940812 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 08:25:09.940819 kernel: thunder_xcv, ver 1.0
Jul 2 08:25:09.940827 kernel: thunder_bgx, ver 1.0
Jul 2 08:25:09.940834 kernel: nicpf, ver 1.0
Jul 2 08:25:09.940842 kernel: nicvf, ver 1.0
Jul 2 08:25:09.940918 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 08:25:09.940981 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T08:25:09 UTC (1719908709)
Jul 2 08:25:09.940990 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 08:25:09.940998 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 2 08:25:09.941005 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 2 08:25:09.941012 kernel: watchdog: Hard watchdog permanently disabled
Jul 2 08:25:09.941020 kernel: NET: Registered PF_INET6 protocol family
Jul 2 08:25:09.941027 kernel: Segment Routing with IPv6
Jul 2 08:25:09.941036 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 08:25:09.941043 kernel: NET: Registered PF_PACKET protocol family
Jul 2 08:25:09.941050 kernel: Key type dns_resolver registered
Jul 2 08:25:09.941057 kernel: registered taskstats version 1
Jul 2 08:25:09.941064 kernel: Loading compiled-in X.509 certificates
Jul 2 08:25:09.941072 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf'
Jul 2 08:25:09.941079 kernel: Key type .fscrypt registered
Jul 2 08:25:09.941086 kernel: Key type fscrypt-provisioning registered
Jul 2 08:25:09.941094 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 08:25:09.941102 kernel: ima: Allocated hash algorithm: sha1
Jul 2 08:25:09.941109 kernel: ima: No architecture policies found
Jul 2 08:25:09.941117 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 08:25:09.941124 kernel: clk: Disabling unused clocks
Jul 2 08:25:09.941131 kernel: Freeing unused kernel memory: 39040K
Jul 2 08:25:09.941138 kernel: Run /init as init process
Jul 2 08:25:09.941151 kernel: with arguments:
Jul 2 08:25:09.941159 kernel: /init
Jul 2 08:25:09.941166 kernel: with environment:
Jul 2 08:25:09.941175 kernel: HOME=/
Jul 2 08:25:09.941182 kernel: TERM=linux
Jul 2 08:25:09.941190 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 08:25:09.941199 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 08:25:09.941208 systemd[1]: Detected virtualization kvm.
Jul 2 08:25:09.941216 systemd[1]: Detected architecture arm64.
Jul 2 08:25:09.941224 systemd[1]: Running in initrd.
Jul 2 08:25:09.941232 systemd[1]: No hostname configured, using default hostname.
Jul 2 08:25:09.941240 systemd[1]: Hostname set to <localhost>.
Jul 2 08:25:09.941248 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 08:25:09.941256 systemd[1]: Queued start job for default target initrd.target.
Jul 2 08:25:09.941263 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 08:25:09.941271 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 08:25:09.941279 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 08:25:09.941287 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 08:25:09.941296 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 08:25:09.941305 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 08:25:09.941314 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 08:25:09.941322 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 08:25:09.941330 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 08:25:09.941338 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 08:25:09.941346 systemd[1]: Reached target paths.target - Path Units.
Jul 2 08:25:09.941355 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 08:25:09.941363 systemd[1]: Reached target swap.target - Swaps.
Jul 2 08:25:09.941371 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 08:25:09.941379 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 08:25:09.941386 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 08:25:09.941394 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 08:25:09.941402 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 08:25:09.941411 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 08:25:09.941419 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 08:25:09.941442 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 08:25:09.941450 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 08:25:09.941458 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 08:25:09.941466 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 08:25:09.941473 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 08:25:09.941481 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 08:25:09.941489 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 08:25:09.941496 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 08:25:09.941506 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 08:25:09.941514 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 08:25:09.941522 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 08:25:09.941529 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 08:25:09.941538 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 08:25:09.941547 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 08:25:09.941571 systemd-journald[238]: Collecting audit messages is disabled.
Jul 2 08:25:09.941590 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 08:25:09.941597 kernel: Bridge firewalling registered
Jul 2 08:25:09.941606 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 08:25:09.941614 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 08:25:09.941622 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 08:25:09.941631 systemd-journald[238]: Journal started
Jul 2 08:25:09.941649 systemd-journald[238]: Runtime Journal (/run/log/journal/6f3b53f6b752434aaa519f0f7ed1ec7d) is 5.9M, max 47.3M, 41.4M free.
Jul 2 08:25:09.916560 systemd-modules-load[239]: Inserted module 'overlay'
Jul 2 08:25:09.934904 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 2 08:25:09.944530 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 08:25:09.947036 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 08:25:09.950062 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 08:25:09.952777 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 08:25:09.956689 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 08:25:09.958918 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 08:25:09.960650 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 08:25:09.963612 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 08:25:09.972933 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 08:25:09.975018 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 08:25:09.984806 dracut-cmdline[276]: dracut-dracut-053
Jul 2 08:25:09.987414 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=19e11d11f09b621c4c7d739b39b57f4bac8caa3f9723d7ceb0e9d7c7445769b7
Jul 2 08:25:10.001172 systemd-resolved[279]: Positive Trust Anchors:
Jul 2 08:25:10.001189 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 08:25:10.001220 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 08:25:10.007151 systemd-resolved[279]: Defaulting to hostname 'linux'.
Jul 2 08:25:10.008127 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 08:25:10.009046 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 08:25:10.058747 kernel: SCSI subsystem initialized
Jul 2 08:25:10.063734 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 08:25:10.071778 kernel: iscsi: registered transport (tcp)
Jul 2 08:25:10.084912 kernel: iscsi: registered transport (qla4xxx)
Jul 2 08:25:10.084965 kernel: QLogic iSCSI HBA Driver
Jul 2 08:25:10.135237 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 08:25:10.146866 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 08:25:10.164384 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 08:25:10.164431 kernel: device-mapper: uevent: version 1.0.3
Jul 2 08:25:10.164442 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 08:25:10.215745 kernel: raid6: neonx8 gen() 14646 MB/s
Jul 2 08:25:10.232735 kernel: raid6: neonx4 gen() 14583 MB/s
Jul 2 08:25:10.249731 kernel: raid6: neonx2 gen() 11998 MB/s
Jul 2 08:25:10.266724 kernel: raid6: neonx1 gen() 10226 MB/s
Jul 2 08:25:10.283729 kernel: raid6: int64x8 gen() 6770 MB/s
Jul 2 08:25:10.300726 kernel: raid6: int64x4 gen() 7333 MB/s
Jul 2 08:25:10.317722 kernel: raid6: int64x2 gen() 6123 MB/s
Jul 2 08:25:10.334724 kernel: raid6: int64x1 gen() 5049 MB/s
Jul 2 08:25:10.334751 kernel: raid6: using algorithm neonx8 gen() 14646 MB/s
Jul 2 08:25:10.351728 kernel: raid6: .... xor() 11927 MB/s, rmw enabled
Jul 2 08:25:10.351745 kernel: raid6: using neon recovery algorithm
Jul 2 08:25:10.356726 kernel: xor: measuring software checksum speed
Jul 2 08:25:10.356747 kernel: 8regs : 19864 MB/sec
Jul 2 08:25:10.357975 kernel: 32regs : 19635 MB/sec
Jul 2 08:25:10.359205 kernel: arm64_neon : 27170 MB/sec
Jul 2 08:25:10.359226 kernel: xor: using function: arm64_neon (27170 MB/sec)
Jul 2 08:25:10.412777 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 08:25:10.424468 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 08:25:10.435862 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 08:25:10.448743 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jul 2 08:25:10.451820 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 08:25:10.462972 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 08:25:10.477323 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Jul 2 08:25:10.512402 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 08:25:10.524892 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 08:25:10.565121 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 08:25:10.572878 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 08:25:10.582995 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 08:25:10.585570 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 08:25:10.587459 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 08:25:10.588740 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 08:25:10.602867 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 08:25:10.606731 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 2 08:25:10.614097 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 2 08:25:10.614214 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 08:25:10.614226 kernel: GPT:9289727 != 19775487 Jul 2 08:25:10.614235 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 08:25:10.614244 kernel: GPT:9289727 != 19775487 Jul 2 08:25:10.614258 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 08:25:10.614267 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 08:25:10.613507 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 08:25:10.616974 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 08:25:10.617031 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:25:10.621019 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 08:25:10.622053 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jul 2 08:25:10.622109 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:25:10.624098 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:25:10.632921 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:25:10.636738 kernel: BTRFS: device fsid 9b0eb482-485a-4aff-8de4-e09ff146eadf devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (507) Jul 2 08:25:10.637727 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (508) Jul 2 08:25:10.643131 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 2 08:25:10.644291 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:25:10.649491 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 2 08:25:10.658227 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 2 08:25:10.659294 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 2 08:25:10.664949 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 2 08:25:10.675852 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 08:25:10.677527 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 08:25:10.682765 disk-uuid[553]: Primary Header is updated. Jul 2 08:25:10.682765 disk-uuid[553]: Secondary Entries is updated. Jul 2 08:25:10.682765 disk-uuid[553]: Secondary Header is updated. Jul 2 08:25:10.687737 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 08:25:10.697144 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 2 08:25:11.699726 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 08:25:11.700673 disk-uuid[555]: The operation has completed successfully. Jul 2 08:25:11.729090 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 08:25:11.729200 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 08:25:11.750247 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 08:25:11.753878 sh[577]: Success Jul 2 08:25:11.770325 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 2 08:25:11.804866 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 08:25:11.817012 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 08:25:11.818928 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 08:25:11.832348 kernel: BTRFS info (device dm-0): first mount of filesystem 9b0eb482-485a-4aff-8de4-e09ff146eadf Jul 2 08:25:11.832392 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:25:11.832402 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 08:25:11.832412 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 08:25:11.833303 kernel: BTRFS info (device dm-0): using free space tree Jul 2 08:25:11.837093 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 08:25:11.837884 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 08:25:11.846892 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 08:25:11.848179 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 2 08:25:11.854869 kernel: BTRFS info (device vda6): first mount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483 Jul 2 08:25:11.854911 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:25:11.854928 kernel: BTRFS info (device vda6): using free space tree Jul 2 08:25:11.858753 kernel: BTRFS info (device vda6): auto enabling async discard Jul 2 08:25:11.866316 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 08:25:11.867897 kernel: BTRFS info (device vda6): last unmount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483 Jul 2 08:25:11.876529 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 08:25:11.882872 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 08:25:11.960464 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 08:25:11.973905 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 08:25:11.982968 ignition[665]: Ignition 2.18.0 Jul 2 08:25:11.982981 ignition[665]: Stage: fetch-offline Jul 2 08:25:11.983026 ignition[665]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:25:11.983034 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 08:25:11.983123 ignition[665]: parsed url from cmdline: "" Jul 2 08:25:11.983126 ignition[665]: no config URL provided Jul 2 08:25:11.983130 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 08:25:11.983145 ignition[665]: no config at "/usr/lib/ignition/user.ign" Jul 2 08:25:11.983169 ignition[665]: op(1): [started] loading QEMU firmware config module Jul 2 08:25:11.983180 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 2 08:25:11.996522 systemd-networkd[769]: lo: Link UP Jul 2 08:25:11.996532 systemd-networkd[769]: lo: Gained carrier Jul 2 08:25:11.996797 ignition[665]: op(1): [finished] loading QEMU firmware config module Jul 2 08:25:12.000367 systemd-networkd[769]: 
Enumeration completed Jul 2 08:25:12.000476 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 08:25:12.000779 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:25:12.000782 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 08:25:12.001981 systemd-networkd[769]: eth0: Link UP Jul 2 08:25:12.001984 systemd-networkd[769]: eth0: Gained carrier Jul 2 08:25:12.001990 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:25:12.003347 systemd[1]: Reached target network.target - Network. Jul 2 08:25:12.021757 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 08:25:12.046977 ignition[665]: parsing config with SHA512: 34c7144b160a25b063d0c640b204220b17528ecc2d327b226a4261133c0b55824b9507c6a1a3019f4a6f6a7285cc7eaa498c4d0d6fc1d7501ce2930890b2fd0c Jul 2 08:25:12.052226 unknown[665]: fetched base config from "system" Jul 2 08:25:12.052739 ignition[665]: fetch-offline: fetch-offline passed Jul 2 08:25:12.052237 unknown[665]: fetched user config from "qemu" Jul 2 08:25:12.052802 ignition[665]: Ignition finished successfully Jul 2 08:25:12.055219 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 08:25:12.057008 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 2 08:25:12.064899 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 2 08:25:12.078223 ignition[776]: Ignition 2.18.0 Jul 2 08:25:12.078233 ignition[776]: Stage: kargs Jul 2 08:25:12.078386 ignition[776]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:25:12.078395 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 08:25:12.079253 ignition[776]: kargs: kargs passed Jul 2 08:25:12.079300 ignition[776]: Ignition finished successfully Jul 2 08:25:12.084738 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 08:25:12.100894 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 08:25:12.111882 ignition[785]: Ignition 2.18.0 Jul 2 08:25:12.112151 ignition[785]: Stage: disks Jul 2 08:25:12.112337 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jul 2 08:25:12.112347 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 08:25:12.113234 ignition[785]: disks: disks passed Jul 2 08:25:12.113279 ignition[785]: Ignition finished successfully Jul 2 08:25:12.116255 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 08:25:12.119291 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 08:25:12.120196 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 08:25:12.121125 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 08:25:12.125183 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 08:25:12.126635 systemd[1]: Reached target basic.target - Basic System. Jul 2 08:25:12.136862 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 08:25:12.148868 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 2 08:25:12.153810 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 08:25:12.171831 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 2 08:25:12.219719 kernel: EXT4-fs (vda9): mounted filesystem 9aacfbff-cef8-4758-afb5-6310e7c6c5e6 r/w with ordered data mode. Quota mode: none. Jul 2 08:25:12.219997 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 08:25:12.221051 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 08:25:12.235825 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 08:25:12.237486 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 08:25:12.238770 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 2 08:25:12.238814 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 08:25:12.238836 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 08:25:12.247891 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (803) Jul 2 08:25:12.247914 kernel: BTRFS info (device vda6): first mount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483 Jul 2 08:25:12.247932 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:25:12.245281 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 08:25:12.250608 kernel: BTRFS info (device vda6): using free space tree Jul 2 08:25:12.246965 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 2 08:25:12.252115 kernel: BTRFS info (device vda6): auto enabling async discard Jul 2 08:25:12.253592 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 2 08:25:12.296155 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 08:25:12.300416 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory Jul 2 08:25:12.304411 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 08:25:12.308641 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 08:25:12.381341 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 08:25:12.396890 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 08:25:12.398381 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 08:25:12.403734 kernel: BTRFS info (device vda6): last unmount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483 Jul 2 08:25:12.420278 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 2 08:25:12.422159 ignition[915]: INFO : Ignition 2.18.0 Jul 2 08:25:12.422159 ignition[915]: INFO : Stage: mount Jul 2 08:25:12.424303 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 08:25:12.424303 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 08:25:12.424303 ignition[915]: INFO : mount: mount passed Jul 2 08:25:12.424303 ignition[915]: INFO : Ignition finished successfully Jul 2 08:25:12.424414 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 08:25:12.434842 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 08:25:12.828930 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 2 08:25:12.837909 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 2 08:25:12.846418 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (930) Jul 2 08:25:12.846466 kernel: BTRFS info (device vda6): first mount of filesystem d9ea85ee-de2c-4ecb-9edd-179b77e44483 Jul 2 08:25:12.846478 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 08:25:12.847141 kernel: BTRFS info (device vda6): using free space tree Jul 2 08:25:12.849745 kernel: BTRFS info (device vda6): auto enabling async discard Jul 2 08:25:12.850761 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 08:25:12.868441 ignition[947]: INFO : Ignition 2.18.0 Jul 2 08:25:12.868441 ignition[947]: INFO : Stage: files Jul 2 08:25:12.869656 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 08:25:12.869656 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 08:25:12.869656 ignition[947]: DEBUG : files: compiled without relabeling support, skipping Jul 2 08:25:12.872203 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 08:25:12.872203 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 08:25:12.874261 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 08:25:12.874261 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 08:25:12.876476 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 08:25:12.876476 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 08:25:12.876476 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 2 08:25:12.874293 unknown[947]: wrote ssh authorized keys file for user: core Jul 2 08:25:12.916329 ignition[947]: INFO : 
files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 08:25:12.957236 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 08:25:12.957236 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 2 08:25:12.961114 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 08:25:12.961114 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 08:25:12.961114 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 08:25:12.961114 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 08:25:12.961114 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 08:25:12.961114 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 08:25:12.961114 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 08:25:12.961114 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 08:25:12.961114 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 08:25:12.961114 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jul 2 08:25:12.961114 
ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jul 2 08:25:12.961114 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jul 2 08:25:12.961114 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jul 2 08:25:13.129879 systemd-networkd[769]: eth0: Gained IPv6LL Jul 2 08:25:13.273828 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 2 08:25:13.518738 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jul 2 08:25:13.520380 ignition[947]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 2 08:25:13.520380 ignition[947]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 08:25:13.520380 ignition[947]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 08:25:13.520380 ignition[947]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 2 08:25:13.520380 ignition[947]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 2 08:25:13.520380 ignition[947]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 08:25:13.520380 ignition[947]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 08:25:13.520380 ignition[947]: INFO : files: 
op(d): [finished] processing unit "coreos-metadata.service" Jul 2 08:25:13.520380 ignition[947]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 2 08:25:13.547285 ignition[947]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 08:25:13.572777 ignition[947]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 08:25:13.574934 ignition[947]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 2 08:25:13.574934 ignition[947]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 2 08:25:13.574934 ignition[947]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 08:25:13.574934 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 08:25:13.574934 ignition[947]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 08:25:13.574934 ignition[947]: INFO : files: files passed Jul 2 08:25:13.574934 ignition[947]: INFO : Ignition finished successfully Jul 2 08:25:13.575338 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 08:25:13.585915 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 08:25:13.587410 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 2 08:25:13.592330 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 08:25:13.592438 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jul 2 08:25:13.596261 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory Jul 2 08:25:13.598222 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:25:13.598222 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:25:13.600987 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 08:25:13.601116 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 08:25:13.603337 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 08:25:13.614982 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 08:25:13.635974 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 08:25:13.636104 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 08:25:13.638041 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 08:25:13.642608 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 08:25:13.644266 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 08:25:13.645286 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 08:25:13.661102 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 08:25:13.672948 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 08:25:13.681339 systemd[1]: Stopped target network.target - Network. Jul 2 08:25:13.682197 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 08:25:13.683582 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 2 08:25:13.685467 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 08:25:13.687644 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 08:25:13.687792 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 08:25:13.690211 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 08:25:13.691766 systemd[1]: Stopped target basic.target - Basic System. Jul 2 08:25:13.693367 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 08:25:13.694790 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 08:25:13.696534 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 08:25:13.698347 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 08:25:13.700095 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 08:25:13.701778 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 08:25:13.703483 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 08:25:13.704945 systemd[1]: Stopped target swap.target - Swaps. Jul 2 08:25:13.706327 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 08:25:13.706457 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 08:25:13.708509 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 08:25:13.709551 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 08:25:13.711208 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 2 08:25:13.711291 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 08:25:13.712804 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 08:25:13.712925 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jul 2 08:25:13.715377 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 08:25:13.715529 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 08:25:13.717475 systemd[1]: Stopped target paths.target - Path Units. Jul 2 08:25:13.718736 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 08:25:13.722753 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 08:25:13.724221 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 08:25:13.725997 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 08:25:13.727365 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 08:25:13.727454 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 08:25:13.728700 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 08:25:13.728790 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 08:25:13.730066 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 08:25:13.730183 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 08:25:13.732071 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 08:25:13.732187 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 08:25:13.743983 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 08:25:13.745477 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 08:25:13.746410 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 08:25:13.747829 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 08:25:13.749519 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 08:25:13.749658 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jul 2 08:25:13.751621 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 08:25:13.751750 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 08:25:13.755624 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 08:25:13.755811 systemd-networkd[769]: eth0: DHCPv6 lease lost Jul 2 08:25:13.755868 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 08:25:13.759818 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 08:25:13.759957 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 08:25:13.762782 ignition[1002]: INFO : Ignition 2.18.0 Jul 2 08:25:13.762782 ignition[1002]: INFO : Stage: umount Jul 2 08:25:13.762782 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 08:25:13.762782 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 08:25:13.762782 ignition[1002]: INFO : umount: umount passed Jul 2 08:25:13.762782 ignition[1002]: INFO : Ignition finished successfully Jul 2 08:25:13.762733 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 08:25:13.765342 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 08:25:13.765454 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 2 08:25:13.770898 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 08:25:13.770945 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 08:25:13.780862 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 08:25:13.781523 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 08:25:13.781586 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 08:25:13.783163 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 08:25:13.783209 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jul 2 08:25:13.784440 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 08:25:13.784477 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 08:25:13.786066 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 08:25:13.786105 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 08:25:13.788093 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 08:25:13.788219 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 08:25:13.790794 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 08:25:13.790846 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 08:25:13.791829 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 08:25:13.791869 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 08:25:13.793427 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 08:25:13.793469 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 08:25:13.794598 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 2 08:25:13.794633 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 2 08:25:13.796106 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 08:25:13.799159 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 08:25:13.799430 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 08:25:13.815622 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 08:25:13.815786 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:25:13.817606 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 08:25:13.817778 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 08:25:13.819635 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jul 2 08:25:13.819726 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 08:25:13.821891 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 08:25:13.821948 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 08:25:13.822697 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 08:25:13.822755 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 08:25:13.825370 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 08:25:13.825410 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 08:25:13.826878 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 08:25:13.826926 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 08:25:13.829053 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 08:25:13.829100 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 08:25:13.836852 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 08:25:13.837614 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 08:25:13.837674 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 08:25:13.839256 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 2 08:25:13.839297 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 08:25:13.840754 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 08:25:13.840793 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 08:25:13.842381 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 08:25:13.842420 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 2 08:25:13.844230 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 08:25:13.845770 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 08:25:13.847785 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 08:25:13.849816 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 08:25:13.860931 systemd[1]: Switching root. Jul 2 08:25:13.888625 systemd-journald[238]: Journal stopped Jul 2 08:25:14.582792 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Jul 2 08:25:14.582853 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 08:25:14.582867 kernel: SELinux: policy capability open_perms=1 Jul 2 08:25:14.582877 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 08:25:14.582891 kernel: SELinux: policy capability always_check_network=0 Jul 2 08:25:14.582900 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 08:25:14.582910 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 08:25:14.582922 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 08:25:14.582931 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 08:25:14.582941 kernel: audit: type=1403 audit(1719908714.049:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 08:25:14.582951 systemd[1]: Successfully loaded SELinux policy in 35.560ms. Jul 2 08:25:14.582964 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.193ms. Jul 2 08:25:14.582975 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 08:25:14.582986 systemd[1]: Detected virtualization kvm. Jul 2 08:25:14.582996 systemd[1]: Detected architecture arm64. 
Jul 2 08:25:14.583008 systemd[1]: Detected first boot. Jul 2 08:25:14.583022 systemd[1]: Initializing machine ID from VM UUID. Jul 2 08:25:14.583033 zram_generator::config[1046]: No configuration found. Jul 2 08:25:14.583044 systemd[1]: Populated /etc with preset unit settings. Jul 2 08:25:14.583054 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 08:25:14.583064 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 2 08:25:14.583075 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 08:25:14.583086 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 08:25:14.583097 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 08:25:14.583110 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 08:25:14.583120 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 08:25:14.583138 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 08:25:14.583150 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 08:25:14.583161 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 08:25:14.583171 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 08:25:14.583181 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 08:25:14.583192 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 08:25:14.583203 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 08:25:14.583218 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 08:25:14.583229 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jul 2 08:25:14.583240 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 08:25:14.583250 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 2 08:25:14.583261 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 08:25:14.583271 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 2 08:25:14.583281 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 2 08:25:14.583291 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 2 08:25:14.583303 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 08:25:14.583314 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 08:25:14.583325 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 08:25:14.583336 systemd[1]: Reached target slices.target - Slice Units. Jul 2 08:25:14.583347 systemd[1]: Reached target swap.target - Swaps. Jul 2 08:25:14.583357 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 08:25:14.583368 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 08:25:14.583377 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 08:25:14.583389 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 08:25:14.583399 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 08:25:14.583410 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 08:25:14.583420 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 08:25:14.583430 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 08:25:14.583441 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 08:25:14.583451 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Jul 2 08:25:14.583462 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 08:25:14.583472 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 2 08:25:14.583485 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 08:25:14.583495 systemd[1]: Reached target machines.target - Containers. Jul 2 08:25:14.583505 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 08:25:14.583515 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 08:25:14.583525 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 08:25:14.583536 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 08:25:14.583546 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 08:25:14.583557 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 08:25:14.583569 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 08:25:14.583579 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 08:25:14.583590 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 08:25:14.583600 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 08:25:14.583610 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 08:25:14.583620 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 2 08:25:14.583630 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 08:25:14.583641 systemd[1]: Stopped systemd-fsck-usr.service. 
Jul 2 08:25:14.583650 kernel: loop: module loaded Jul 2 08:25:14.583662 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 08:25:14.583672 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 08:25:14.583682 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 08:25:14.583693 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 08:25:14.583703 kernel: fuse: init (API version 7.39) Jul 2 08:25:14.583721 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 08:25:14.583731 kernel: ACPI: bus type drm_connector registered Jul 2 08:25:14.583759 systemd-journald[1109]: Collecting audit messages is disabled. Jul 2 08:25:14.583782 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 08:25:14.583793 systemd[1]: Stopped verity-setup.service. Jul 2 08:25:14.583804 systemd-journald[1109]: Journal started Jul 2 08:25:14.583828 systemd-journald[1109]: Runtime Journal (/run/log/journal/6f3b53f6b752434aaa519f0f7ed1ec7d) is 5.9M, max 47.3M, 41.4M free. Jul 2 08:25:14.411665 systemd[1]: Queued start job for default target multi-user.target. Jul 2 08:25:14.428998 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 2 08:25:14.429442 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 08:25:14.587376 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 08:25:14.586990 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 08:25:14.588006 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 08:25:14.589217 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 08:25:14.590158 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 08:25:14.591692 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Jul 2 08:25:14.592639 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 08:25:14.593633 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 08:25:14.595117 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 08:25:14.595268 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 08:25:14.596509 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:25:14.596664 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 08:25:14.597787 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:25:14.597941 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 08:25:14.598988 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 08:25:14.600205 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:25:14.600339 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 08:25:14.602210 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 08:25:14.602358 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 2 08:25:14.603411 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:25:14.603541 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 08:25:14.604615 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 08:25:14.605922 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 08:25:14.607206 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 08:25:14.620877 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 08:25:14.629825 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Jul 2 08:25:14.632725 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 08:25:14.633695 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 08:25:14.633822 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 08:25:14.635634 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 2 08:25:14.639563 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 2 08:25:14.642762 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 08:25:14.643795 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 08:25:14.645660 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 08:25:14.647882 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 08:25:14.648775 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:25:14.653038 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 2 08:25:14.656237 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 08:25:14.657068 systemd-journald[1109]: Time spent on flushing to /var/log/journal/6f3b53f6b752434aaa519f0f7ed1ec7d is 15.987ms for 852 entries. Jul 2 08:25:14.657068 systemd-journald[1109]: System Journal (/var/log/journal/6f3b53f6b752434aaa519f0f7ed1ec7d) is 8.0M, max 195.6M, 187.6M free. Jul 2 08:25:14.696483 systemd-journald[1109]: Received client request to flush runtime journal. Jul 2 08:25:14.657997 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 2 08:25:14.660162 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 08:25:14.662230 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 08:25:14.667800 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 08:25:14.669290 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 08:25:14.677292 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 08:25:14.678636 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 2 08:25:14.680187 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 2 08:25:14.685554 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 08:25:14.693182 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 2 08:25:14.700079 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 08:25:14.702759 kernel: loop0: detected capacity change from 0 to 113672 Jul 2 08:25:14.702837 kernel: block loop0: the capability attribute has been deprecated. Jul 2 08:25:14.706084 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 08:25:14.710750 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Jul 2 08:25:14.710776 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Jul 2 08:25:14.715386 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 08:25:14.717872 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 08:25:14.726738 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 08:25:14.736239 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jul 2 08:25:14.738009 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 08:25:14.739354 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 08:25:14.742067 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 08:25:14.749738 kernel: loop1: detected capacity change from 0 to 59672 Jul 2 08:25:14.756488 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 08:25:14.764891 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 08:25:14.776737 kernel: loop2: detected capacity change from 0 to 194096 Jul 2 08:25:14.778679 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Jul 2 08:25:14.778699 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Jul 2 08:25:14.782748 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 08:25:14.836741 kernel: loop3: detected capacity change from 0 to 113672 Jul 2 08:25:14.841740 kernel: loop4: detected capacity change from 0 to 59672 Jul 2 08:25:14.846732 kernel: loop5: detected capacity change from 0 to 194096 Jul 2 08:25:14.851110 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 2 08:25:14.851518 (sd-merge)[1187]: Merged extensions into '/usr'. Jul 2 08:25:14.855938 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)... Jul 2 08:25:14.855957 systemd[1]: Reloading... Jul 2 08:25:14.911739 zram_generator::config[1212]: No configuration found. Jul 2 08:25:14.934260 ldconfig[1152]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jul 2 08:25:15.008370 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:25:15.045930 systemd[1]: Reloading finished in 189 ms. Jul 2 08:25:15.076312 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 08:25:15.080360 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 08:25:15.093942 systemd[1]: Starting ensure-sysext.service... Jul 2 08:25:15.097205 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 08:25:15.106734 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... Jul 2 08:25:15.106752 systemd[1]: Reloading... Jul 2 08:25:15.118937 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 08:25:15.119195 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 08:25:15.119851 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 08:25:15.120066 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Jul 2 08:25:15.120110 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Jul 2 08:25:15.122275 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 08:25:15.122288 systemd-tmpfiles[1247]: Skipping /boot Jul 2 08:25:15.129001 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 08:25:15.129016 systemd-tmpfiles[1247]: Skipping /boot Jul 2 08:25:15.158753 zram_generator::config[1272]: No configuration found. 
Jul 2 08:25:15.239868 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:25:15.276936 systemd[1]: Reloading finished in 169 ms. Jul 2 08:25:15.291573 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 08:25:15.301037 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 08:25:15.308042 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 08:25:15.310288 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 08:25:15.312446 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 08:25:15.316289 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 08:25:15.319153 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 08:25:15.322982 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 08:25:15.327853 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 08:25:15.331960 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 08:25:15.335893 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 08:25:15.343984 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 08:25:15.347948 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 08:25:15.348741 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 08:25:15.350526 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 2 08:25:15.350673 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 08:25:15.352310 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:25:15.352425 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 08:25:15.354089 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:25:15.354210 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 08:25:15.361828 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 08:25:15.364464 systemd-udevd[1314]: Using default interface naming scheme 'v255'. Jul 2 08:25:15.369182 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 08:25:15.374970 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 08:25:15.377677 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 08:25:15.380900 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 08:25:15.384967 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 08:25:15.388029 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 08:25:15.389689 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 08:25:15.392092 augenrules[1341]: No rules Jul 2 08:25:15.393744 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 08:25:15.396241 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 08:25:15.397518 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 08:25:15.399578 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 2 08:25:15.399703 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 08:25:15.401406 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:25:15.402871 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 08:25:15.404292 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:25:15.404437 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 08:25:15.405677 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 08:25:15.417889 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1356) Jul 2 08:25:15.434725 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1359) Jul 2 08:25:15.429619 systemd[1]: Finished ensure-sysext.service. Jul 2 08:25:15.437078 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 2 08:25:15.443221 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 08:25:15.453977 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 08:25:15.456127 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 08:25:15.459836 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 08:25:15.463686 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 08:25:15.465683 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 08:25:15.468811 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 08:25:15.473880 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jul 2 08:25:15.477551 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 08:25:15.478120 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 08:25:15.478294 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 08:25:15.479594 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 08:25:15.481374 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 08:25:15.482552 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 08:25:15.482673 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 08:25:15.485149 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 08:25:15.489014 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 08:25:15.489156 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 08:25:15.502268 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 2 08:25:15.505578 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 08:25:15.508066 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 08:25:15.508150 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 08:25:15.524014 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 08:25:15.544835 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 08:25:15.548437 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Jul 2 08:25:15.558654 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 08:25:15.560495 systemd-networkd[1384]: lo: Link UP Jul 2 08:25:15.560499 systemd-networkd[1384]: lo: Gained carrier Jul 2 08:25:15.561225 systemd-networkd[1384]: Enumeration completed Jul 2 08:25:15.561344 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 08:25:15.561789 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:25:15.561792 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 08:25:15.562380 systemd-networkd[1384]: eth0: Link UP Jul 2 08:25:15.562389 systemd-networkd[1384]: eth0: Gained carrier Jul 2 08:25:15.562401 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 08:25:15.563480 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 08:25:15.574745 systemd-resolved[1313]: Positive Trust Anchors: Jul 2 08:25:15.574761 systemd-resolved[1313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 08:25:15.574793 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 08:25:15.575373 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Jul 2 08:25:15.576362 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 08:25:15.582812 systemd-networkd[1384]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 08:25:15.582855 systemd-resolved[1313]: Defaulting to hostname 'linux'. Jul 2 08:25:15.583766 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection. Jul 2 08:25:15.584593 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 08:25:15.586210 systemd-timesyncd[1385]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 2 08:25:15.586225 systemd[1]: Reached target network.target - Network. Jul 2 08:25:15.586256 systemd-timesyncd[1385]: Initial clock synchronization to Tue 2024-07-02 08:25:15.802916 UTC. Jul 2 08:25:15.586886 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 08:25:15.588506 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:25:15.594732 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 08:25:15.620803 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 08:25:15.621915 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 08:25:15.622703 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 08:25:15.623536 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 08:25:15.624428 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 08:25:15.625456 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 08:25:15.626392 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 08:25:15.627298 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jul 2 08:25:15.628178 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 08:25:15.628211 systemd[1]: Reached target paths.target - Path Units. Jul 2 08:25:15.628835 systemd[1]: Reached target timers.target - Timer Units. Jul 2 08:25:15.630189 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 08:25:15.632152 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 08:25:15.640829 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 08:25:15.642771 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 08:25:15.643990 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 08:25:15.644841 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 08:25:15.645509 systemd[1]: Reached target basic.target - Basic System. Jul 2 08:25:15.646220 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 08:25:15.646255 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 08:25:15.647147 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 08:25:15.648814 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 08:25:15.651848 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 08:25:15.652841 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 08:25:15.654513 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 08:25:15.655263 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 08:25:15.656946 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Jul 2 08:25:15.661924 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 08:25:15.662535 jq[1418]: false Jul 2 08:25:15.663781 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 08:25:15.670769 extend-filesystems[1419]: Found loop3 Jul 2 08:25:15.670769 extend-filesystems[1419]: Found loop4 Jul 2 08:25:15.670769 extend-filesystems[1419]: Found loop5 Jul 2 08:25:15.670769 extend-filesystems[1419]: Found vda Jul 2 08:25:15.670769 extend-filesystems[1419]: Found vda1 Jul 2 08:25:15.670769 extend-filesystems[1419]: Found vda2 Jul 2 08:25:15.670769 extend-filesystems[1419]: Found vda3 Jul 2 08:25:15.676300 extend-filesystems[1419]: Found usr Jul 2 08:25:15.676300 extend-filesystems[1419]: Found vda4 Jul 2 08:25:15.676300 extend-filesystems[1419]: Found vda6 Jul 2 08:25:15.676300 extend-filesystems[1419]: Found vda7 Jul 2 08:25:15.676300 extend-filesystems[1419]: Found vda9 Jul 2 08:25:15.676300 extend-filesystems[1419]: Checking size of /dev/vda9 Jul 2 08:25:15.671156 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 08:25:15.678068 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 08:25:15.684802 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 08:25:15.685206 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 08:25:15.686203 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 08:25:15.687361 dbus-daemon[1417]: [system] SELinux support is enabled Jul 2 08:25:15.689549 extend-filesystems[1419]: Resized partition /dev/vda9 Jul 2 08:25:15.691889 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 08:25:15.695619 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 2 08:25:15.698593 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 08:25:15.701784 extend-filesystems[1439]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 08:25:15.702300 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 08:25:15.702441 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 08:25:15.702674 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 08:25:15.703099 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 08:25:15.705468 jq[1440]: true Jul 2 08:25:15.706899 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 08:25:15.707265 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 08:25:15.710775 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 2 08:25:15.720750 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1356) Jul 2 08:25:15.729450 (ntainerd)[1450]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 08:25:15.729951 jq[1444]: true Jul 2 08:25:15.746774 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 08:25:15.746808 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 08:25:15.747000 systemd-logind[1428]: Watching system buttons on /dev/input/event0 (Power Button) Jul 2 08:25:15.749645 systemd-logind[1428]: New seat seat0. Jul 2 08:25:15.750337 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jul 2 08:25:15.750361 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 08:25:15.751308 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 08:25:15.760806 tar[1442]: linux-arm64/helm Jul 2 08:25:15.780224 update_engine[1436]: I0702 08:25:15.780021 1436 main.cc:92] Flatcar Update Engine starting Jul 2 08:25:15.781985 systemd[1]: Started update-engine.service - Update Engine. Jul 2 08:25:15.788762 update_engine[1436]: I0702 08:25:15.782038 1436 update_check_scheduler.cc:74] Next update check in 4m7s Jul 2 08:25:15.794646 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 08:25:15.796732 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 2 08:25:15.806069 extend-filesystems[1439]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 08:25:15.806069 extend-filesystems[1439]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 08:25:15.806069 extend-filesystems[1439]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 2 08:25:15.811336 extend-filesystems[1419]: Resized filesystem in /dev/vda9 Jul 2 08:25:15.810530 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 08:25:15.811746 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 08:25:15.818990 bash[1470]: Updated "/home/core/.ssh/authorized_keys" Jul 2 08:25:15.822353 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 08:25:15.824068 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jul 2 08:25:15.849477 locksmithd[1471]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 08:25:15.950183 containerd[1450]: time="2024-07-02T08:25:15.950085440Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 08:25:15.974679 containerd[1450]: time="2024-07-02T08:25:15.974635280Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 08:25:15.974960 containerd[1450]: time="2024-07-02T08:25:15.974744640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:25:15.976031 containerd[1450]: time="2024-07-02T08:25:15.975997040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:25:15.976206 containerd[1450]: time="2024-07-02T08:25:15.976090960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:25:15.976405 containerd[1450]: time="2024-07-02T08:25:15.976381800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:25:15.976466 containerd[1450]: time="2024-07-02T08:25:15.976453080Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 08:25:15.976595 containerd[1450]: time="2024-07-02T08:25:15.976574320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jul 2 08:25:15.977024 containerd[1450]: time="2024-07-02T08:25:15.976691440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:25:15.977024 containerd[1450]: time="2024-07-02T08:25:15.976727200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 08:25:15.977024 containerd[1450]: time="2024-07-02T08:25:15.976792200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:25:15.977024 containerd[1450]: time="2024-07-02T08:25:15.976967040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 08:25:15.977024 containerd[1450]: time="2024-07-02T08:25:15.976986520Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 08:25:15.977024 containerd[1450]: time="2024-07-02T08:25:15.976996160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 08:25:15.977300 containerd[1450]: time="2024-07-02T08:25:15.977277840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 08:25:15.977355 containerd[1450]: time="2024-07-02T08:25:15.977342600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 2 08:25:15.977460 containerd[1450]: time="2024-07-02T08:25:15.977441480Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 08:25:15.977524 containerd[1450]: time="2024-07-02T08:25:15.977510360Z" level=info msg="metadata content store policy set" policy=shared Jul 2 08:25:15.980359 containerd[1450]: time="2024-07-02T08:25:15.980332800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 08:25:15.980447 containerd[1450]: time="2024-07-02T08:25:15.980433120Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 08:25:15.980499 containerd[1450]: time="2024-07-02T08:25:15.980486160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 08:25:15.980569 containerd[1450]: time="2024-07-02T08:25:15.980555960Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 08:25:15.980670 containerd[1450]: time="2024-07-02T08:25:15.980656200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 08:25:15.980767 containerd[1450]: time="2024-07-02T08:25:15.980751840Z" level=info msg="NRI interface is disabled by configuration." Jul 2 08:25:15.981234 containerd[1450]: time="2024-07-02T08:25:15.980817520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 08:25:15.981234 containerd[1450]: time="2024-07-02T08:25:15.980935520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 08:25:15.981234 containerd[1450]: time="2024-07-02T08:25:15.980958600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jul 2 08:25:15.981234 containerd[1450]: time="2024-07-02T08:25:15.980971720Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 08:25:15.981234 containerd[1450]: time="2024-07-02T08:25:15.980984120Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 08:25:15.981234 containerd[1450]: time="2024-07-02T08:25:15.980997360Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 08:25:15.981234 containerd[1450]: time="2024-07-02T08:25:15.981012360Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 08:25:15.981234 containerd[1450]: time="2024-07-02T08:25:15.981024480Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 08:25:15.981234 containerd[1450]: time="2024-07-02T08:25:15.981036000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 08:25:15.981234 containerd[1450]: time="2024-07-02T08:25:15.981048680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 08:25:15.981234 containerd[1450]: time="2024-07-02T08:25:15.981060800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 08:25:15.981234 containerd[1450]: time="2024-07-02T08:25:15.981072520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 08:25:15.981234 containerd[1450]: time="2024-07-02T08:25:15.981084400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jul 2 08:25:15.981234 containerd[1450]: time="2024-07-02T08:25:15.981186600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 08:25:15.981729 containerd[1450]: time="2024-07-02T08:25:15.981692480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 08:25:15.981818 containerd[1450]: time="2024-07-02T08:25:15.981802920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 08:25:15.981878 containerd[1450]: time="2024-07-02T08:25:15.981866040Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 08:25:15.981938 containerd[1450]: time="2024-07-02T08:25:15.981926040Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 08:25:15.982108 containerd[1450]: time="2024-07-02T08:25:15.982094400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 08:25:15.982242 containerd[1450]: time="2024-07-02T08:25:15.982225120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 08:25:15.982301 containerd[1450]: time="2024-07-02T08:25:15.982288600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 08:25:15.982352 containerd[1450]: time="2024-07-02T08:25:15.982339560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 08:25:15.982415 containerd[1450]: time="2024-07-02T08:25:15.982402040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 08:25:15.982467 containerd[1450]: time="2024-07-02T08:25:15.982455480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jul 2 08:25:15.982516 containerd[1450]: time="2024-07-02T08:25:15.982504800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 08:25:15.983729 containerd[1450]: time="2024-07-02T08:25:15.982561320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 08:25:15.983729 containerd[1450]: time="2024-07-02T08:25:15.982580960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 08:25:15.983729 containerd[1450]: time="2024-07-02T08:25:15.982729680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 08:25:15.983729 containerd[1450]: time="2024-07-02T08:25:15.982750560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 08:25:15.983729 containerd[1450]: time="2024-07-02T08:25:15.982762480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 08:25:15.983729 containerd[1450]: time="2024-07-02T08:25:15.982775400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 08:25:15.983729 containerd[1450]: time="2024-07-02T08:25:15.982787520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 08:25:15.983729 containerd[1450]: time="2024-07-02T08:25:15.982801880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 08:25:15.983729 containerd[1450]: time="2024-07-02T08:25:15.982813360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 08:25:15.983729 containerd[1450]: time="2024-07-02T08:25:15.982828840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 08:25:15.983940 containerd[1450]: time="2024-07-02T08:25:15.983186000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 08:25:15.983940 containerd[1450]: time="2024-07-02T08:25:15.983242480Z" level=info msg="Connect containerd service" Jul 2 08:25:15.983940 containerd[1450]: time="2024-07-02T08:25:15.983268160Z" level=info msg="using legacy CRI server" Jul 2 08:25:15.983940 containerd[1450]: time="2024-07-02T08:25:15.983274560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 08:25:15.983940 containerd[1450]: time="2024-07-02T08:25:15.983404480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 08:25:15.984528 containerd[1450]: time="2024-07-02T08:25:15.984496840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:25:15.984637 containerd[1450]: time="2024-07-02T08:25:15.984622320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 08:25:15.984767 containerd[1450]: time="2024-07-02T08:25:15.984749120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 08:25:15.984839 containerd[1450]: time="2024-07-02T08:25:15.984826520Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 08:25:15.984930 containerd[1450]: time="2024-07-02T08:25:15.984913880Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 08:25:15.985092 containerd[1450]: time="2024-07-02T08:25:15.984718640Z" level=info msg="Start subscribing containerd event" Jul 2 08:25:15.985150 containerd[1450]: time="2024-07-02T08:25:15.985105480Z" level=info msg="Start recovering state" Jul 2 08:25:15.985271 containerd[1450]: time="2024-07-02T08:25:15.985186160Z" level=info msg="Start event monitor" Jul 2 08:25:15.985271 containerd[1450]: time="2024-07-02T08:25:15.985203880Z" level=info msg="Start snapshots syncer" Jul 2 08:25:15.985271 containerd[1450]: time="2024-07-02T08:25:15.985214440Z" level=info msg="Start cni network conf syncer for default" Jul 2 08:25:15.985271 containerd[1450]: time="2024-07-02T08:25:15.985221520Z" level=info msg="Start streaming server" Jul 2 08:25:15.985790 containerd[1450]: time="2024-07-02T08:25:15.985765080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 08:25:15.985906 containerd[1450]: time="2024-07-02T08:25:15.985893160Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 08:25:15.986686 containerd[1450]: time="2024-07-02T08:25:15.986657040Z" level=info msg="containerd successfully booted in 0.037374s" Jul 2 08:25:15.986738 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 08:25:16.115503 tar[1442]: linux-arm64/LICENSE Jul 2 08:25:16.115686 tar[1442]: linux-arm64/README.md Jul 2 08:25:16.132048 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jul 2 08:25:17.095533 sshd_keygen[1438]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 08:25:17.115270 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 08:25:17.131130 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 08:25:17.137573 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 08:25:17.137823 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 08:25:17.143109 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 08:25:17.156348 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 08:25:17.158876 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 08:25:17.160934 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 2 08:25:17.162021 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 08:25:17.292833 systemd-networkd[1384]: eth0: Gained IPv6LL Jul 2 08:25:17.295442 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 08:25:17.296881 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 08:25:17.308048 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 2 08:25:17.310396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:25:17.312304 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 08:25:17.327099 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 2 08:25:17.328511 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 2 08:25:17.330302 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 08:25:17.333809 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 08:25:17.826281 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 08:25:17.827495 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 08:25:17.829765 (kubelet)[1529]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:25:17.833321 systemd[1]: Startup finished in 541ms (kernel) + 4.335s (initrd) + 3.826s (userspace) = 8.703s. Jul 2 08:25:18.298260 kubelet[1529]: E0702 08:25:18.298153 1529 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:25:18.301013 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:25:18.301166 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:25:22.870314 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 08:25:22.871399 systemd[1]: Started sshd@0-10.0.0.93:22-10.0.0.1:44208.service - OpenSSH per-connection server daemon (10.0.0.1:44208). Jul 2 08:25:22.926658 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 44208 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:25:22.930438 sshd[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:25:22.946516 systemd-logind[1428]: New session 1 of user core. Jul 2 08:25:22.947546 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 08:25:22.954008 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 08:25:22.962902 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 08:25:22.967559 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jul 2 08:25:22.970966 (systemd)[1548]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:25:23.047648 systemd[1548]: Queued start job for default target default.target. Jul 2 08:25:23.057584 systemd[1548]: Created slice app.slice - User Application Slice. Jul 2 08:25:23.057615 systemd[1548]: Reached target paths.target - Paths. Jul 2 08:25:23.057626 systemd[1548]: Reached target timers.target - Timers. Jul 2 08:25:23.058842 systemd[1548]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 08:25:23.074389 systemd[1548]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 08:25:23.074502 systemd[1548]: Reached target sockets.target - Sockets. Jul 2 08:25:23.074519 systemd[1548]: Reached target basic.target - Basic System. Jul 2 08:25:23.074553 systemd[1548]: Reached target default.target - Main User Target. Jul 2 08:25:23.074578 systemd[1548]: Startup finished in 98ms. Jul 2 08:25:23.074785 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 08:25:23.076218 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 08:25:23.143482 systemd[1]: Started sshd@1-10.0.0.93:22-10.0.0.1:44218.service - OpenSSH per-connection server daemon (10.0.0.1:44218). Jul 2 08:25:23.182354 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 44218 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:25:23.184005 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:25:23.187814 systemd-logind[1428]: New session 2 of user core. Jul 2 08:25:23.201888 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 08:25:23.259138 sshd[1559]: pam_unix(sshd:session): session closed for user core Jul 2 08:25:23.272001 systemd[1]: sshd@1-10.0.0.93:22-10.0.0.1:44218.service: Deactivated successfully. Jul 2 08:25:23.273309 systemd[1]: session-2.scope: Deactivated successfully. 
Jul 2 08:25:23.278434 systemd-logind[1428]: Session 2 logged out. Waiting for processes to exit. Jul 2 08:25:23.279865 systemd[1]: Started sshd@2-10.0.0.93:22-10.0.0.1:44224.service - OpenSSH per-connection server daemon (10.0.0.1:44224). Jul 2 08:25:23.280583 systemd-logind[1428]: Removed session 2. Jul 2 08:25:23.322894 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 44224 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:25:23.324088 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:25:23.327826 systemd-logind[1428]: New session 3 of user core. Jul 2 08:25:23.338920 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 08:25:23.387408 sshd[1566]: pam_unix(sshd:session): session closed for user core Jul 2 08:25:23.401255 systemd[1]: sshd@2-10.0.0.93:22-10.0.0.1:44224.service: Deactivated successfully. Jul 2 08:25:23.405332 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 08:25:23.410591 systemd-logind[1428]: Session 3 logged out. Waiting for processes to exit. Jul 2 08:25:23.419010 systemd[1]: Started sshd@3-10.0.0.93:22-10.0.0.1:44234.service - OpenSSH per-connection server daemon (10.0.0.1:44234). Jul 2 08:25:23.419976 systemd-logind[1428]: Removed session 3. Jul 2 08:25:23.453998 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 44234 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:25:23.455119 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:25:23.459557 systemd-logind[1428]: New session 4 of user core. Jul 2 08:25:23.472428 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 08:25:23.530167 sshd[1573]: pam_unix(sshd:session): session closed for user core Jul 2 08:25:23.542081 systemd[1]: sshd@3-10.0.0.93:22-10.0.0.1:44234.service: Deactivated successfully. Jul 2 08:25:23.544404 systemd[1]: session-4.scope: Deactivated successfully. 
Jul 2 08:25:23.545682 systemd-logind[1428]: Session 4 logged out. Waiting for processes to exit. Jul 2 08:25:23.546839 systemd[1]: Started sshd@4-10.0.0.93:22-10.0.0.1:44250.service - OpenSSH per-connection server daemon (10.0.0.1:44250). Jul 2 08:25:23.547631 systemd-logind[1428]: Removed session 4. Jul 2 08:25:23.607507 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 44250 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:25:23.609040 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:25:23.613435 systemd-logind[1428]: New session 5 of user core. Jul 2 08:25:23.630972 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 08:25:23.691000 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 08:25:23.691249 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:25:23.703509 sudo[1583]: pam_unix(sudo:session): session closed for user root Jul 2 08:25:23.705186 sshd[1580]: pam_unix(sshd:session): session closed for user core Jul 2 08:25:23.725873 systemd[1]: sshd@4-10.0.0.93:22-10.0.0.1:44250.service: Deactivated successfully. Jul 2 08:25:23.727488 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 08:25:23.732266 systemd-logind[1428]: Session 5 logged out. Waiting for processes to exit. Jul 2 08:25:23.742047 systemd[1]: Started sshd@5-10.0.0.93:22-10.0.0.1:44264.service - OpenSSH per-connection server daemon (10.0.0.1:44264). Jul 2 08:25:23.743326 systemd-logind[1428]: Removed session 5. Jul 2 08:25:23.773004 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 44264 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:25:23.773524 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:25:23.779552 systemd-logind[1428]: New session 6 of user core. Jul 2 08:25:23.792899 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 2 08:25:23.849107 sudo[1592]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 08:25:23.851800 sudo[1592]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:25:23.855418 sudo[1592]: pam_unix(sudo:session): session closed for user root Jul 2 08:25:23.861628 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 08:25:23.861950 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:25:23.885974 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 08:25:23.887566 auditctl[1595]: No rules Jul 2 08:25:23.888027 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 08:25:23.888185 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 08:25:23.894806 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 08:25:23.917427 augenrules[1613]: No rules Jul 2 08:25:23.918680 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 08:25:23.920400 sudo[1591]: pam_unix(sudo:session): session closed for user root Jul 2 08:25:23.922570 sshd[1588]: pam_unix(sshd:session): session closed for user core Jul 2 08:25:23.930227 systemd[1]: sshd@5-10.0.0.93:22-10.0.0.1:44264.service: Deactivated successfully. Jul 2 08:25:23.931758 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 08:25:23.933871 systemd-logind[1428]: Session 6 logged out. Waiting for processes to exit. Jul 2 08:25:23.934321 systemd[1]: Started sshd@6-10.0.0.93:22-10.0.0.1:44280.service - OpenSSH per-connection server daemon (10.0.0.1:44280). Jul 2 08:25:23.935531 systemd-logind[1428]: Removed session 6. 
Jul 2 08:25:23.976100 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 44280 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:25:23.977281 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:25:23.981065 systemd-logind[1428]: New session 7 of user core. Jul 2 08:25:23.990940 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 08:25:24.041607 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 08:25:24.041897 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 08:25:24.172852 (dockerd)[1635]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 2 08:25:24.176297 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 08:25:24.422873 dockerd[1635]: time="2024-07-02T08:25:24.422206412Z" level=info msg="Starting up" Jul 2 08:25:24.516745 dockerd[1635]: time="2024-07-02T08:25:24.516503214Z" level=info msg="Loading containers: start." Jul 2 08:25:24.604768 kernel: Initializing XFRM netlink socket Jul 2 08:25:24.674985 systemd-networkd[1384]: docker0: Link UP Jul 2 08:25:24.686772 dockerd[1635]: time="2024-07-02T08:25:24.686707094Z" level=info msg="Loading containers: done." Jul 2 08:25:24.757332 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1966265184-merged.mount: Deactivated successfully. 
Jul 2 08:25:24.759824 dockerd[1635]: time="2024-07-02T08:25:24.759768238Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 08:25:24.760012 dockerd[1635]: time="2024-07-02T08:25:24.759991821Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 08:25:24.760132 dockerd[1635]: time="2024-07-02T08:25:24.760116814Z" level=info msg="Daemon has completed initialization" Jul 2 08:25:24.790181 dockerd[1635]: time="2024-07-02T08:25:24.790124593Z" level=info msg="API listen on /run/docker.sock" Jul 2 08:25:24.790342 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 08:25:25.336025 containerd[1450]: time="2024-07-02T08:25:25.335987071Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jul 2 08:25:25.985365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2321020951.mount: Deactivated successfully. 
Jul 2 08:25:26.995222 containerd[1450]: time="2024-07-02T08:25:26.995173473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:25:26.996666 containerd[1450]: time="2024-07-02T08:25:26.996568548Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=29940432" Jul 2 08:25:26.999022 containerd[1450]: time="2024-07-02T08:25:26.997600814Z" level=info msg="ImageCreate event name:\"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:25:27.000354 containerd[1450]: time="2024-07-02T08:25:27.000320424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:25:27.003051 containerd[1450]: time="2024-07-02T08:25:27.003000986Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"29937230\" in 1.66695553s" Jul 2 08:25:27.003124 containerd[1450]: time="2024-07-02T08:25:27.003055524Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\"" Jul 2 08:25:27.022200 containerd[1450]: time="2024-07-02T08:25:27.022166752Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jul 2 08:25:28.460698 containerd[1450]: time="2024-07-02T08:25:28.460637022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:25:28.461269 containerd[1450]: time="2024-07-02T08:25:28.461236195Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=26881373" Jul 2 08:25:28.462067 containerd[1450]: time="2024-07-02T08:25:28.462037987Z" level=info msg="ImageCreate event name:\"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:25:28.465321 containerd[1450]: time="2024-07-02T08:25:28.465286903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:25:28.466993 containerd[1450]: time="2024-07-02T08:25:28.466946109Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"28368865\" in 1.444605971s" Jul 2 08:25:28.467033 containerd[1450]: time="2024-07-02T08:25:28.466994693Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\"" Jul 2 08:25:28.486814 containerd[1450]: time="2024-07-02T08:25:28.486552095Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jul 2 08:25:28.551610 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 08:25:28.561880 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:25:28.649767 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 08:25:28.653108 (kubelet)[1855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:25:28.697262 kubelet[1855]: E0702 08:25:28.697210 1855 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:25:28.701389 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:25:28.701542 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:25:29.500343 containerd[1450]: time="2024-07-02T08:25:29.500282597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:25:29.500790 containerd[1450]: time="2024-07-02T08:25:29.500647451Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=16155690" Jul 2 08:25:29.501617 containerd[1450]: time="2024-07-02T08:25:29.501588524Z" level=info msg="ImageCreate event name:\"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:25:29.504761 containerd[1450]: time="2024-07-02T08:25:29.504727764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:25:29.505694 containerd[1450]: time="2024-07-02T08:25:29.505656097Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"17643200\" in 1.019062865s" Jul 2 08:25:29.505743 containerd[1450]: time="2024-07-02T08:25:29.505691224Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\"" Jul 2 08:25:29.525549 containerd[1450]: time="2024-07-02T08:25:29.525316219Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jul 2 08:25:31.439467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1823633707.mount: Deactivated successfully. Jul 2 08:25:31.631606 containerd[1450]: time="2024-07-02T08:25:31.631560901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:25:31.632542 containerd[1450]: time="2024-07-02T08:25:31.632508312Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=25634094" Jul 2 08:25:31.633372 containerd[1450]: time="2024-07-02T08:25:31.633325248Z" level=info msg="ImageCreate event name:\"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:25:31.635028 containerd[1450]: time="2024-07-02T08:25:31.634985015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:25:31.635716 containerd[1450]: time="2024-07-02T08:25:31.635663366Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"25633111\" in 2.110308734s" Jul 2 08:25:31.635716 containerd[1450]: time="2024-07-02T08:25:31.635699497Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\"" Jul 2 08:25:31.654789 containerd[1450]: time="2024-07-02T08:25:31.654570324Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 08:25:32.279331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount592482532.mount: Deactivated successfully. Jul 2 08:25:33.089782 containerd[1450]: time="2024-07-02T08:25:33.089736147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:25:33.091720 containerd[1450]: time="2024-07-02T08:25:33.091685147Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Jul 2 08:25:33.092876 containerd[1450]: time="2024-07-02T08:25:33.092836722Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:25:33.095293 containerd[1450]: time="2024-07-02T08:25:33.095263015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:25:33.096545 containerd[1450]: time="2024-07-02T08:25:33.096513947Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.44190758s" Jul 2 08:25:33.096591 containerd[1450]: time="2024-07-02T08:25:33.096551171Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jul 2 08:25:33.116971 containerd[1450]: time="2024-07-02T08:25:33.116737843Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 08:25:33.600731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1530268530.mount: Deactivated successfully. Jul 2 08:25:33.604350 containerd[1450]: time="2024-07-02T08:25:33.604301305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:25:33.604914 containerd[1450]: time="2024-07-02T08:25:33.604881926Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Jul 2 08:25:33.606464 containerd[1450]: time="2024-07-02T08:25:33.606431572Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:25:33.608670 containerd[1450]: time="2024-07-02T08:25:33.608613022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:25:33.609671 containerd[1450]: time="2024-07-02T08:25:33.609452365Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 492.674971ms" Jul 2 08:25:33.609671 
containerd[1450]: time="2024-07-02T08:25:33.609485898Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jul 2 08:25:33.628990 containerd[1450]: time="2024-07-02T08:25:33.628956051Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jul 2 08:25:34.252975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount48596216.mount: Deactivated successfully. Jul 2 08:25:36.502804 containerd[1450]: time="2024-07-02T08:25:36.502752831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:25:36.503851 containerd[1450]: time="2024-07-02T08:25:36.503824036Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Jul 2 08:25:36.504800 containerd[1450]: time="2024-07-02T08:25:36.504347897Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:25:36.508318 containerd[1450]: time="2024-07-02T08:25:36.508278374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:25:36.509044 containerd[1450]: time="2024-07-02T08:25:36.509011026Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.880016756s" Jul 2 08:25:36.509099 containerd[1450]: time="2024-07-02T08:25:36.509042925Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference 
\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jul 2 08:25:38.902538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 08:25:38.915890 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:25:39.014015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:25:39.017408 (kubelet)[2076]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 08:25:39.054696 kubelet[2076]: E0702 08:25:39.054646 2076 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 08:25:39.057511 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 08:25:39.057663 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 08:25:40.557501 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:25:40.568917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:25:40.583054 systemd[1]: Reloading requested from client PID 2091 ('systemctl') (unit session-7.scope)... Jul 2 08:25:40.583074 systemd[1]: Reloading... Jul 2 08:25:40.650743 zram_generator::config[2128]: No configuration found. Jul 2 08:25:40.766176 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:25:40.820573 systemd[1]: Reloading finished in 237 ms. Jul 2 08:25:40.858257 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 2 08:25:40.861067 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 08:25:40.861247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:25:40.862632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:25:40.959376 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:25:40.963820 (kubelet)[2175]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 08:25:41.000382 kubelet[2175]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:25:41.000382 kubelet[2175]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 08:25:41.000382 kubelet[2175]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 08:25:41.003729 kubelet[2175]: I0702 08:25:41.001908 2175 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 08:25:42.086731 kubelet[2175]: I0702 08:25:42.086674 2175 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 08:25:42.086731 kubelet[2175]: I0702 08:25:42.086714 2175 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 08:25:42.087119 kubelet[2175]: I0702 08:25:42.087052 2175 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 08:25:42.102493 kubelet[2175]: E0702 08:25:42.102460 2175 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.93:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.93:6443: connect: connection refused Jul 2 08:25:42.104085 kubelet[2175]: I0702 08:25:42.104065 2175 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:25:42.113256 kubelet[2175]: I0702 08:25:42.113217 2175 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 08:25:42.114480 kubelet[2175]: I0702 08:25:42.114443 2175 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 08:25:42.114651 kubelet[2175]: I0702 08:25:42.114488 2175 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 08:25:42.114755 kubelet[2175]: I0702 08:25:42.114723 2175 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 08:25:42.114755 
kubelet[2175]: I0702 08:25:42.114732 2175 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 08:25:42.115011 kubelet[2175]: I0702 08:25:42.114982 2175 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:25:42.117949 kubelet[2175]: I0702 08:25:42.117573 2175 kubelet.go:400] "Attempting to sync node with API server" Jul 2 08:25:42.117949 kubelet[2175]: I0702 08:25:42.117595 2175 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 08:25:42.117949 kubelet[2175]: I0702 08:25:42.117813 2175 kubelet.go:312] "Adding apiserver pod source" Jul 2 08:25:42.117949 kubelet[2175]: I0702 08:25:42.117911 2175 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 08:25:42.119109 kubelet[2175]: I0702 08:25:42.119065 2175 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 08:25:42.119497 kubelet[2175]: I0702 08:25:42.119466 2175 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 08:25:42.119590 kubelet[2175]: W0702 08:25:42.119576 2175 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 2 08:25:42.120095 kubelet[2175]: W0702 08:25:42.120037 2175 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Jul 2 08:25:42.120139 kubelet[2175]: E0702 08:25:42.120099 2175 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Jul 2 08:25:42.120409 kubelet[2175]: I0702 08:25:42.120382 2175 server.go:1264] "Started kubelet" Jul 2 08:25:42.120633 kubelet[2175]: W0702 08:25:42.120598 2175 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.93:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Jul 2 08:25:42.120733 kubelet[2175]: E0702 08:25:42.120721 2175 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.93:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Jul 2 08:25:42.120788 kubelet[2175]: I0702 08:25:42.120667 2175 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 08:25:42.121034 kubelet[2175]: I0702 08:25:42.120981 2175 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 08:25:42.121296 kubelet[2175]: I0702 08:25:42.121258 2175 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 08:25:42.122741 kubelet[2175]: I0702 08:25:42.122008 2175 server.go:455] "Adding debug handlers to kubelet server" Jul 2 08:25:42.123309 kubelet[2175]: I0702 08:25:42.123277 2175 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 08:25:42.124166 kubelet[2175]: I0702 08:25:42.124130 2175 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 08:25:42.124300 kubelet[2175]: I0702 08:25:42.124284 2175 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 08:25:42.124658 kubelet[2175]: I0702 08:25:42.124623 2175 reconciler.go:26] "Reconciler: start to sync state" Jul 2 08:25:42.126471 kubelet[2175]: E0702 08:25:42.126441 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="200ms" Jul 2 08:25:42.126568 kubelet[2175]: W0702 08:25:42.126531 2175 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Jul 2 08:25:42.126617 kubelet[2175]: E0702 08:25:42.126573 2175 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Jul 2 08:25:42.128070 kubelet[2175]: I0702 08:25:42.128033 2175 factory.go:221] Registration of the systemd container factory successfully Jul 2 08:25:42.128332 kubelet[2175]: I0702 08:25:42.128137 2175 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 08:25:42.128591 kubelet[2175]: E0702 08:25:42.123525 2175 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.93:6443/api/v1/namespaces/default/events\": dial tcp 
10.0.0.93:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17de57e3742c4e23 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-07-02 08:25:42.120361507 +0000 UTC m=+1.153536685,LastTimestamp:2024-07-02 08:25:42.120361507 +0000 UTC m=+1.153536685,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 2 08:25:42.129487 kubelet[2175]: E0702 08:25:42.129444 2175 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 08:25:42.129560 kubelet[2175]: I0702 08:25:42.129520 2175 factory.go:221] Registration of the containerd container factory successfully Jul 2 08:25:42.141144 kubelet[2175]: I0702 08:25:42.141103 2175 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 08:25:42.141990 kubelet[2175]: I0702 08:25:42.141968 2175 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 08:25:42.141990 kubelet[2175]: I0702 08:25:42.141986 2175 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 08:25:42.142565 kubelet[2175]: I0702 08:25:42.142004 2175 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:25:42.142565 kubelet[2175]: I0702 08:25:42.142115 2175 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 08:25:42.142565 kubelet[2175]: I0702 08:25:42.142211 2175 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:25:42.142565 kubelet[2175]: I0702 08:25:42.142230 2175 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 08:25:42.142565 kubelet[2175]: E0702 08:25:42.142269 2175 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 08:25:42.142906 kubelet[2175]: W0702 08:25:42.142755 2175 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Jul 2 08:25:42.142906 kubelet[2175]: E0702 08:25:42.142798 2175 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Jul 2 08:25:42.144075 kubelet[2175]: I0702 08:25:42.144047 2175 policy_none.go:49] "None policy: Start" Jul 2 08:25:42.144831 kubelet[2175]: I0702 08:25:42.144813 2175 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 08:25:42.144895 kubelet[2175]: I0702 08:25:42.144837 2175 state_mem.go:35] "Initializing new in-memory state store" Jul 2 08:25:42.150346 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 08:25:42.167855 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 08:25:42.170603 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 2 08:25:42.183472 kubelet[2175]: I0702 08:25:42.183431 2175 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:25:42.183845 kubelet[2175]: I0702 08:25:42.183622 2175 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 08:25:42.183845 kubelet[2175]: I0702 08:25:42.183766 2175 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:25:42.185393 kubelet[2175]: E0702 08:25:42.185373 2175 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 08:25:42.224890 kubelet[2175]: I0702 08:25:42.224854 2175 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 08:25:42.225241 kubelet[2175]: E0702 08:25:42.225169 2175 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Jul 2 08:25:42.242657 kubelet[2175]: I0702 08:25:42.242571 2175 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 08:25:42.243661 kubelet[2175]: I0702 08:25:42.243639 2175 topology_manager.go:215] "Topology Admit Handler" podUID="ecfc2cd068e3630a71320fa7e48fe174" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 08:25:42.245429 kubelet[2175]: I0702 08:25:42.244873 2175 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 08:25:42.249975 systemd[1]: Created slice kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice - libcontainer container kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice. 
Jul 2 08:25:42.274277 systemd[1]: Created slice kubepods-burstable-podecfc2cd068e3630a71320fa7e48fe174.slice - libcontainer container kubepods-burstable-podecfc2cd068e3630a71320fa7e48fe174.slice. Jul 2 08:25:42.286262 systemd[1]: Created slice kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice - libcontainer container kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice. Jul 2 08:25:42.325538 kubelet[2175]: I0702 08:25:42.325510 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ecfc2cd068e3630a71320fa7e48fe174-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ecfc2cd068e3630a71320fa7e48fe174\") " pod="kube-system/kube-apiserver-localhost" Jul 2 08:25:42.325538 kubelet[2175]: I0702 08:25:42.325542 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:25:42.325662 kubelet[2175]: I0702 08:25:42.325562 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:25:42.325662 kubelet[2175]: I0702 08:25:42.325580 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ecfc2cd068e3630a71320fa7e48fe174-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ecfc2cd068e3630a71320fa7e48fe174\") " 
pod="kube-system/kube-apiserver-localhost" Jul 2 08:25:42.325662 kubelet[2175]: I0702 08:25:42.325595 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ecfc2cd068e3630a71320fa7e48fe174-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ecfc2cd068e3630a71320fa7e48fe174\") " pod="kube-system/kube-apiserver-localhost" Jul 2 08:25:42.325662 kubelet[2175]: I0702 08:25:42.325608 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:25:42.325662 kubelet[2175]: I0702 08:25:42.325622 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:25:42.325789 kubelet[2175]: I0702 08:25:42.325636 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:25:42.325789 kubelet[2175]: I0702 08:25:42.325650 2175 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " 
pod="kube-system/kube-scheduler-localhost" Jul 2 08:25:42.327765 kubelet[2175]: E0702 08:25:42.327729 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="400ms" Jul 2 08:25:42.428117 kubelet[2175]: I0702 08:25:42.427866 2175 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 08:25:42.429719 kubelet[2175]: E0702 08:25:42.428230 2175 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Jul 2 08:25:42.572168 kubelet[2175]: E0702 08:25:42.572116 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:42.572932 containerd[1450]: time="2024-07-02T08:25:42.572882813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,}" Jul 2 08:25:42.585411 kubelet[2175]: E0702 08:25:42.585144 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:42.586651 containerd[1450]: time="2024-07-02T08:25:42.586581332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ecfc2cd068e3630a71320fa7e48fe174,Namespace:kube-system,Attempt:0,}" Jul 2 08:25:42.589169 kubelet[2175]: E0702 08:25:42.589053 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:42.589854 containerd[1450]: 
time="2024-07-02T08:25:42.589571607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,}" Jul 2 08:25:42.728616 kubelet[2175]: E0702 08:25:42.728513 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="800ms" Jul 2 08:25:42.829923 kubelet[2175]: I0702 08:25:42.829876 2175 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 08:25:42.830388 kubelet[2175]: E0702 08:25:42.830324 2175 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Jul 2 08:25:43.017749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3878344710.mount: Deactivated successfully. 
Jul 2 08:25:43.026361 containerd[1450]: time="2024-07-02T08:25:43.025962483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:25:43.027721 containerd[1450]: time="2024-07-02T08:25:43.027679626Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 2 08:25:43.028752 containerd[1450]: time="2024-07-02T08:25:43.028545984Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:25:43.030067 containerd[1450]: time="2024-07-02T08:25:43.030028115Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 08:25:43.030997 containerd[1450]: time="2024-07-02T08:25:43.030949473Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:25:43.032247 containerd[1450]: time="2024-07-02T08:25:43.032205117Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:25:43.033266 containerd[1450]: time="2024-07-02T08:25:43.033235915Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 08:25:43.036638 containerd[1450]: time="2024-07-02T08:25:43.036600311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 08:25:43.037768 
containerd[1450]: time="2024-07-02T08:25:43.037538081Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 447.882365ms" Jul 2 08:25:43.038328 containerd[1450]: time="2024-07-02T08:25:43.038289354Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 465.289004ms" Jul 2 08:25:43.043118 containerd[1450]: time="2024-07-02T08:25:43.042944820Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 456.266967ms" Jul 2 08:25:43.217315 containerd[1450]: time="2024-07-02T08:25:43.216985738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:25:43.217315 containerd[1450]: time="2024-07-02T08:25:43.217053307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:25:43.217315 containerd[1450]: time="2024-07-02T08:25:43.217067998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:25:43.217315 containerd[1450]: time="2024-07-02T08:25:43.217077885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:25:43.217315 containerd[1450]: time="2024-07-02T08:25:43.217112631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:25:43.217315 containerd[1450]: time="2024-07-02T08:25:43.217184284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:25:43.217315 containerd[1450]: time="2024-07-02T08:25:43.217200976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:25:43.217315 containerd[1450]: time="2024-07-02T08:25:43.217212304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:25:43.218049 containerd[1450]: time="2024-07-02T08:25:43.217968501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:25:43.218049 containerd[1450]: time="2024-07-02T08:25:43.218024022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:25:43.218049 containerd[1450]: time="2024-07-02T08:25:43.218037311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:25:43.218049 containerd[1450]: time="2024-07-02T08:25:43.218048640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:25:43.247923 systemd[1]: Started cri-containerd-07a4e25d19c4ed31f677282a3c86d16d4c7e8499f9ce49c3d14bbdce88be8c4c.scope - libcontainer container 07a4e25d19c4ed31f677282a3c86d16d4c7e8499f9ce49c3d14bbdce88be8c4c. 
Jul 2 08:25:43.248995 systemd[1]: Started cri-containerd-a474f6dc28590af62b92537125ab22bca84c4457efa6abb8e40ea732b504e58d.scope - libcontainer container a474f6dc28590af62b92537125ab22bca84c4457efa6abb8e40ea732b504e58d. Jul 2 08:25:43.250749 systemd[1]: Started cri-containerd-e50e46c12b6840406fc6ffd15b52d59b6b28699dc549e1f238eb4564ede3143d.scope - libcontainer container e50e46c12b6840406fc6ffd15b52d59b6b28699dc549e1f238eb4564ede3143d. Jul 2 08:25:43.281032 containerd[1450]: time="2024-07-02T08:25:43.280862465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ecfc2cd068e3630a71320fa7e48fe174,Namespace:kube-system,Attempt:0,} returns sandbox id \"07a4e25d19c4ed31f677282a3c86d16d4c7e8499f9ce49c3d14bbdce88be8c4c\"" Jul 2 08:25:43.282203 kubelet[2175]: E0702 08:25:43.281961 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:43.284694 containerd[1450]: time="2024-07-02T08:25:43.284350351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,} returns sandbox id \"a474f6dc28590af62b92537125ab22bca84c4457efa6abb8e40ea732b504e58d\"" Jul 2 08:25:43.284789 kubelet[2175]: W0702 08:25:43.284419 2175 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Jul 2 08:25:43.284789 kubelet[2175]: E0702 08:25:43.284452 2175 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Jul 2 08:25:43.285422 kubelet[2175]: E0702 
08:25:43.285253 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:43.286792 containerd[1450]: time="2024-07-02T08:25:43.286752479Z" level=info msg="CreateContainer within sandbox \"07a4e25d19c4ed31f677282a3c86d16d4c7e8499f9ce49c3d14bbdce88be8c4c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 08:25:43.287881 containerd[1450]: time="2024-07-02T08:25:43.287702618Z" level=info msg="CreateContainer within sandbox \"a474f6dc28590af62b92537125ab22bca84c4457efa6abb8e40ea732b504e58d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 08:25:43.293972 containerd[1450]: time="2024-07-02T08:25:43.293942851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e50e46c12b6840406fc6ffd15b52d59b6b28699dc549e1f238eb4564ede3143d\"" Jul 2 08:25:43.294517 kubelet[2175]: E0702 08:25:43.294497 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:43.296450 containerd[1450]: time="2024-07-02T08:25:43.296360150Z" level=info msg="CreateContainer within sandbox \"e50e46c12b6840406fc6ffd15b52d59b6b28699dc549e1f238eb4564ede3143d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 08:25:43.312747 containerd[1450]: time="2024-07-02T08:25:43.312682801Z" level=info msg="CreateContainer within sandbox \"e50e46c12b6840406fc6ffd15b52d59b6b28699dc549e1f238eb4564ede3143d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0df04efefbfe2b80172579da08e1b6df72d4ebd8dbcc2118f7b98962673a8ed4\"" Jul 2 08:25:43.314161 containerd[1450]: time="2024-07-02T08:25:43.314119899Z" level=info 
msg="CreateContainer within sandbox \"a474f6dc28590af62b92537125ab22bca84c4457efa6abb8e40ea732b504e58d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"30a5f198e10d5d180fa88653b94c314517a5bc0d4d16478a184e8a2e7d643c7c\"" Jul 2 08:25:43.314360 containerd[1450]: time="2024-07-02T08:25:43.314318445Z" level=info msg="StartContainer for \"0df04efefbfe2b80172579da08e1b6df72d4ebd8dbcc2118f7b98962673a8ed4\"" Jul 2 08:25:43.314612 containerd[1450]: time="2024-07-02T08:25:43.314560984Z" level=info msg="StartContainer for \"30a5f198e10d5d180fa88653b94c314517a5bc0d4d16478a184e8a2e7d643c7c\"" Jul 2 08:25:43.317759 containerd[1450]: time="2024-07-02T08:25:43.316231413Z" level=info msg="CreateContainer within sandbox \"07a4e25d19c4ed31f677282a3c86d16d4c7e8499f9ce49c3d14bbdce88be8c4c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"415041123f54a85a3c5e03c80190b7f095e8226276aed2fbac66f93ccce3b138\"" Jul 2 08:25:43.319278 containerd[1450]: time="2024-07-02T08:25:43.318200582Z" level=info msg="StartContainer for \"415041123f54a85a3c5e03c80190b7f095e8226276aed2fbac66f93ccce3b138\"" Jul 2 08:25:43.338906 systemd[1]: Started cri-containerd-30a5f198e10d5d180fa88653b94c314517a5bc0d4d16478a184e8a2e7d643c7c.scope - libcontainer container 30a5f198e10d5d180fa88653b94c314517a5bc0d4d16478a184e8a2e7d643c7c. Jul 2 08:25:43.342762 systemd[1]: Started cri-containerd-0df04efefbfe2b80172579da08e1b6df72d4ebd8dbcc2118f7b98962673a8ed4.scope - libcontainer container 0df04efefbfe2b80172579da08e1b6df72d4ebd8dbcc2118f7b98962673a8ed4. Jul 2 08:25:43.346117 systemd[1]: Started cri-containerd-415041123f54a85a3c5e03c80190b7f095e8226276aed2fbac66f93ccce3b138.scope - libcontainer container 415041123f54a85a3c5e03c80190b7f095e8226276aed2fbac66f93ccce3b138. 
Jul 2 08:25:43.381823 containerd[1450]: time="2024-07-02T08:25:43.381778810Z" level=info msg="StartContainer for \"30a5f198e10d5d180fa88653b94c314517a5bc0d4d16478a184e8a2e7d643c7c\" returns successfully" Jul 2 08:25:43.387971 containerd[1450]: time="2024-07-02T08:25:43.387881380Z" level=info msg="StartContainer for \"415041123f54a85a3c5e03c80190b7f095e8226276aed2fbac66f93ccce3b138\" returns successfully" Jul 2 08:25:43.402660 containerd[1450]: time="2024-07-02T08:25:43.402622789Z" level=info msg="StartContainer for \"0df04efefbfe2b80172579da08e1b6df72d4ebd8dbcc2118f7b98962673a8ed4\" returns successfully" Jul 2 08:25:43.415860 kubelet[2175]: W0702 08:25:43.415800 2175 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Jul 2 08:25:43.416010 kubelet[2175]: E0702 08:25:43.415989 2175 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.93:6443: connect: connection refused Jul 2 08:25:43.529780 kubelet[2175]: E0702 08:25:43.529413 2175 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="1.6s" Jul 2 08:25:43.632106 kubelet[2175]: I0702 08:25:43.631992 2175 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 08:25:44.160724 kubelet[2175]: E0702 08:25:44.160629 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:44.162313 kubelet[2175]: 
E0702 08:25:44.162281 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:44.163419 kubelet[2175]: E0702 08:25:44.163395 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:45.096719 kubelet[2175]: I0702 08:25:45.096671 2175 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 08:25:45.104339 kubelet[2175]: E0702 08:25:45.104292 2175 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 08:25:45.166822 kubelet[2175]: E0702 08:25:45.166778 2175 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:45.204682 kubelet[2175]: E0702 08:25:45.204630 2175 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 08:25:46.121110 kubelet[2175]: I0702 08:25:46.121073 2175 apiserver.go:52] "Watching apiserver" Jul 2 08:25:46.125035 kubelet[2175]: I0702 08:25:46.124998 2175 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 08:25:46.927759 systemd[1]: Reloading requested from client PID 2455 ('systemctl') (unit session-7.scope)... Jul 2 08:25:46.927776 systemd[1]: Reloading... Jul 2 08:25:46.996751 zram_generator::config[2495]: No configuration found. Jul 2 08:25:47.075327 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 08:25:47.143312 systemd[1]: Reloading finished in 215 ms. 
Jul 2 08:25:47.175536 kubelet[2175]: I0702 08:25:47.175474 2175 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:25:47.175939 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:25:47.193193 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 08:25:47.193807 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:25:47.194030 systemd[1]: kubelet.service: Consumed 1.349s CPU time, 117.9M memory peak, 0B memory swap peak. Jul 2 08:25:47.204989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 08:25:47.308871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 08:25:47.314072 (kubelet)[2534]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 08:25:47.351392 kubelet[2534]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 08:25:47.351392 kubelet[2534]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 08:25:47.351392 kubelet[2534]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 08:25:47.351795 kubelet[2534]: I0702 08:25:47.351427 2534 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 08:25:47.356163 kubelet[2534]: I0702 08:25:47.356122 2534 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 08:25:47.356163 kubelet[2534]: I0702 08:25:47.356146 2534 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 08:25:47.356633 kubelet[2534]: I0702 08:25:47.356295 2534 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 08:25:47.357508 kubelet[2534]: I0702 08:25:47.357482 2534 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 08:25:47.358511 kubelet[2534]: I0702 08:25:47.358490 2534 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 08:25:47.364766 kubelet[2534]: I0702 08:25:47.364632 2534 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 08:25:47.364889 kubelet[2534]: I0702 08:25:47.364843 2534 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 08:25:47.365773 kubelet[2534]: I0702 08:25:47.364870 2534 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 08:25:47.365773 kubelet[2534]: I0702 08:25:47.365042 2534 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 08:25:47.365773 
kubelet[2534]: I0702 08:25:47.365051 2534 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 08:25:47.365773 kubelet[2534]: I0702 08:25:47.365080 2534 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:25:47.365773 kubelet[2534]: I0702 08:25:47.365188 2534 kubelet.go:400] "Attempting to sync node with API server" Jul 2 08:25:47.365963 kubelet[2534]: I0702 08:25:47.365201 2534 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 08:25:47.365963 kubelet[2534]: I0702 08:25:47.365227 2534 kubelet.go:312] "Adding apiserver pod source" Jul 2 08:25:47.365963 kubelet[2534]: I0702 08:25:47.365242 2534 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 08:25:47.367977 kubelet[2534]: I0702 08:25:47.366769 2534 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 08:25:47.367977 kubelet[2534]: I0702 08:25:47.366926 2534 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 08:25:47.367977 kubelet[2534]: I0702 08:25:47.367320 2534 server.go:1264] "Started kubelet" Jul 2 08:25:47.367977 kubelet[2534]: I0702 08:25:47.367393 2534 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 08:25:47.367977 kubelet[2534]: I0702 08:25:47.367556 2534 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 08:25:47.367977 kubelet[2534]: I0702 08:25:47.367772 2534 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 08:25:47.370746 kubelet[2534]: I0702 08:25:47.368252 2534 server.go:455] "Adding debug handlers to kubelet server" Jul 2 08:25:47.372078 kubelet[2534]: I0702 08:25:47.372035 2534 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 08:25:47.375149 kubelet[2534]: E0702 08:25:47.375125 2534 kubelet_node_status.go:462] "Error 
getting the current node from lister" err="node \"localhost\" not found" Jul 2 08:25:47.375954 kubelet[2534]: I0702 08:25:47.375242 2534 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 08:25:47.375954 kubelet[2534]: I0702 08:25:47.375336 2534 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 08:25:47.375954 kubelet[2534]: I0702 08:25:47.375448 2534 reconciler.go:26] "Reconciler: start to sync state" Jul 2 08:25:47.385819 kubelet[2534]: E0702 08:25:47.384674 2534 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 08:25:47.385819 kubelet[2534]: I0702 08:25:47.384989 2534 factory.go:221] Registration of the systemd container factory successfully Jul 2 08:25:47.385819 kubelet[2534]: I0702 08:25:47.385129 2534 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 08:25:47.389321 kubelet[2534]: I0702 08:25:47.389289 2534 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 08:25:47.390618 kubelet[2534]: I0702 08:25:47.390585 2534 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 08:25:47.390729 kubelet[2534]: I0702 08:25:47.390698 2534 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 08:25:47.390787 kubelet[2534]: I0702 08:25:47.390778 2534 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 08:25:47.390874 kubelet[2534]: E0702 08:25:47.390857 2534 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 08:25:47.393108 kubelet[2534]: I0702 08:25:47.393075 2534 factory.go:221] Registration of the containerd container factory successfully Jul 2 08:25:47.446480 kubelet[2534]: I0702 08:25:47.446384 2534 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 08:25:47.446480 kubelet[2534]: I0702 08:25:47.446402 2534 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 08:25:47.446480 kubelet[2534]: I0702 08:25:47.446420 2534 state_mem.go:36] "Initialized new in-memory state store" Jul 2 08:25:47.446617 kubelet[2534]: I0702 08:25:47.446577 2534 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 08:25:47.446617 kubelet[2534]: I0702 08:25:47.446588 2534 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 08:25:47.446617 kubelet[2534]: I0702 08:25:47.446604 2534 policy_none.go:49] "None policy: Start" Jul 2 08:25:47.448229 kubelet[2534]: I0702 08:25:47.448199 2534 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 08:25:47.448229 kubelet[2534]: I0702 08:25:47.448222 2534 state_mem.go:35] "Initializing new in-memory state store" Jul 2 08:25:47.448355 kubelet[2534]: I0702 08:25:47.448343 2534 state_mem.go:75] "Updated machine memory state" Jul 2 08:25:47.452149 kubelet[2534]: I0702 08:25:47.452119 2534 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 08:25:47.452832 kubelet[2534]: I0702 08:25:47.452515 2534 container_log_manager.go:186] "Initializing 
container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 08:25:47.455023 kubelet[2534]: I0702 08:25:47.454863 2534 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 08:25:47.478395 kubelet[2534]: I0702 08:25:47.478355 2534 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 08:25:47.491880 kubelet[2534]: I0702 08:25:47.491829 2534 topology_manager.go:215] "Topology Admit Handler" podUID="ecfc2cd068e3630a71320fa7e48fe174" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 08:25:47.491964 kubelet[2534]: I0702 08:25:47.491943 2534 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 08:25:47.491999 kubelet[2534]: I0702 08:25:47.491982 2534 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 08:25:47.528561 kubelet[2534]: I0702 08:25:47.528514 2534 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jul 2 08:25:47.528786 kubelet[2534]: I0702 08:25:47.528612 2534 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 08:25:47.575729 kubelet[2534]: I0702 08:25:47.575665 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ecfc2cd068e3630a71320fa7e48fe174-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ecfc2cd068e3630a71320fa7e48fe174\") " pod="kube-system/kube-apiserver-localhost" Jul 2 08:25:47.676282 kubelet[2534]: I0702 08:25:47.676243 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ecfc2cd068e3630a71320fa7e48fe174-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ecfc2cd068e3630a71320fa7e48fe174\") " 
pod="kube-system/kube-apiserver-localhost" Jul 2 08:25:47.676440 kubelet[2534]: I0702 08:25:47.676287 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:25:47.676440 kubelet[2534]: I0702 08:25:47.676311 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jul 2 08:25:47.676440 kubelet[2534]: I0702 08:25:47.676332 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ecfc2cd068e3630a71320fa7e48fe174-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ecfc2cd068e3630a71320fa7e48fe174\") " pod="kube-system/kube-apiserver-localhost" Jul 2 08:25:47.676440 kubelet[2534]: I0702 08:25:47.676348 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:25:47.676836 kubelet[2534]: I0702 08:25:47.676670 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 2 08:25:47.676836 kubelet[2534]: I0702 08:25:47.676747 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:25:47.676836 kubelet[2534]: I0702 08:25:47.676771 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 08:25:47.830309 kubelet[2534]: E0702 08:25:47.830108 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:47.830309 kubelet[2534]: E0702 08:25:47.830162 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:47.830309 kubelet[2534]: E0702 08:25:47.830274 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:48.367001 kubelet[2534]: I0702 08:25:48.366944 2534 apiserver.go:52] "Watching apiserver" Jul 2 08:25:48.376018 kubelet[2534]: I0702 08:25:48.375976 2534 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 08:25:48.417307 kubelet[2534]: E0702 08:25:48.416889 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:48.417307 kubelet[2534]: E0702 08:25:48.417145 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:48.418366 kubelet[2534]: E0702 08:25:48.418296 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:48.483531 kubelet[2534]: I0702 08:25:48.483321 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.483307859 podStartE2EDuration="1.483307859s" podCreationTimestamp="2024-07-02 08:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:25:48.483163837 +0000 UTC m=+1.165393930" watchObservedRunningTime="2024-07-02 08:25:48.483307859 +0000 UTC m=+1.165537952" Jul 2 08:25:48.491468 kubelet[2534]: I0702 08:25:48.491324 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.491309285 podStartE2EDuration="1.491309285s" podCreationTimestamp="2024-07-02 08:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:25:48.490288483 +0000 UTC m=+1.172518576" watchObservedRunningTime="2024-07-02 08:25:48.491309285 +0000 UTC m=+1.173539378" Jul 2 08:25:48.497914 kubelet[2534]: I0702 08:25:48.497794 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.497782769 podStartE2EDuration="1.497782769s" podCreationTimestamp="2024-07-02 08:25:47 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:25:48.497548787 +0000 UTC m=+1.179778840" watchObservedRunningTime="2024-07-02 08:25:48.497782769 +0000 UTC m=+1.180012862" Jul 2 08:25:49.419026 kubelet[2534]: E0702 08:25:49.418991 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:50.419844 kubelet[2534]: E0702 08:25:50.419814 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:51.861347 sudo[1624]: pam_unix(sudo:session): session closed for user root Jul 2 08:25:51.862753 sshd[1621]: pam_unix(sshd:session): session closed for user core Jul 2 08:25:51.866411 systemd[1]: sshd@6-10.0.0.93:22-10.0.0.1:44280.service: Deactivated successfully. Jul 2 08:25:51.869190 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 08:25:51.869539 systemd[1]: session-7.scope: Consumed 6.181s CPU time, 136.1M memory peak, 0B memory swap peak. Jul 2 08:25:51.870277 systemd-logind[1428]: Session 7 logged out. Waiting for processes to exit. Jul 2 08:25:51.871184 systemd-logind[1428]: Removed session 7. 
Jul 2 08:25:52.657317 kubelet[2534]: E0702 08:25:52.656724 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:56.548959 kubelet[2534]: E0702 08:25:56.548734 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:57.429755 kubelet[2534]: E0702 08:25:57.429239 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:58.532822 kubelet[2534]: E0702 08:25:58.532787 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:25:59.432785 kubelet[2534]: E0702 08:25:59.432745 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:01.137821 update_engine[1436]: I0702 08:26:01.137771 1436 update_attempter.cc:509] Updating boot flags... 
Jul 2 08:26:01.156735 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2630) Jul 2 08:26:01.181769 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2634) Jul 2 08:26:01.210790 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2634) Jul 2 08:26:02.664081 kubelet[2534]: E0702 08:26:02.664007 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:02.976937 kubelet[2534]: I0702 08:26:02.976827 2534 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 08:26:02.977683 containerd[1450]: time="2024-07-02T08:26:02.977641004Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 08:26:02.977987 kubelet[2534]: I0702 08:26:02.977871 2534 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 08:26:03.845402 kubelet[2534]: I0702 08:26:03.845359 2534 topology_manager.go:215] "Topology Admit Handler" podUID="261dfe78-a66b-402d-945d-7a195cb085c3" podNamespace="kube-system" podName="kube-proxy-2h6hz" Jul 2 08:26:03.855522 systemd[1]: Created slice kubepods-besteffort-pod261dfe78_a66b_402d_945d_7a195cb085c3.slice - libcontainer container kubepods-besteffort-pod261dfe78_a66b_402d_945d_7a195cb085c3.slice. 
Jul 2 08:26:03.875109 kubelet[2534]: I0702 08:26:03.874960 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/261dfe78-a66b-402d-945d-7a195cb085c3-kube-proxy\") pod \"kube-proxy-2h6hz\" (UID: \"261dfe78-a66b-402d-945d-7a195cb085c3\") " pod="kube-system/kube-proxy-2h6hz" Jul 2 08:26:03.875109 kubelet[2534]: I0702 08:26:03.875002 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/261dfe78-a66b-402d-945d-7a195cb085c3-xtables-lock\") pod \"kube-proxy-2h6hz\" (UID: \"261dfe78-a66b-402d-945d-7a195cb085c3\") " pod="kube-system/kube-proxy-2h6hz" Jul 2 08:26:03.875109 kubelet[2534]: I0702 08:26:03.875022 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/261dfe78-a66b-402d-945d-7a195cb085c3-lib-modules\") pod \"kube-proxy-2h6hz\" (UID: \"261dfe78-a66b-402d-945d-7a195cb085c3\") " pod="kube-system/kube-proxy-2h6hz" Jul 2 08:26:03.875109 kubelet[2534]: I0702 08:26:03.875061 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4cpn\" (UniqueName: \"kubernetes.io/projected/261dfe78-a66b-402d-945d-7a195cb085c3-kube-api-access-t4cpn\") pod \"kube-proxy-2h6hz\" (UID: \"261dfe78-a66b-402d-945d-7a195cb085c3\") " pod="kube-system/kube-proxy-2h6hz" Jul 2 08:26:04.012853 kubelet[2534]: I0702 08:26:04.011271 2534 topology_manager.go:215] "Topology Admit Handler" podUID="d2ca8333-4ccb-475b-9188-7b67af92a42a" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-tnf5n" Jul 2 08:26:04.019209 systemd[1]: Created slice kubepods-besteffort-podd2ca8333_4ccb_475b_9188_7b67af92a42a.slice - libcontainer container kubepods-besteffort-podd2ca8333_4ccb_475b_9188_7b67af92a42a.slice. 
Jul 2 08:26:04.076911 kubelet[2534]: I0702 08:26:04.076816 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d2ca8333-4ccb-475b-9188-7b67af92a42a-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-tnf5n\" (UID: \"d2ca8333-4ccb-475b-9188-7b67af92a42a\") " pod="tigera-operator/tigera-operator-76ff79f7fd-tnf5n" Jul 2 08:26:04.076911 kubelet[2534]: I0702 08:26:04.076858 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwjzz\" (UniqueName: \"kubernetes.io/projected/d2ca8333-4ccb-475b-9188-7b67af92a42a-kube-api-access-qwjzz\") pod \"tigera-operator-76ff79f7fd-tnf5n\" (UID: \"d2ca8333-4ccb-475b-9188-7b67af92a42a\") " pod="tigera-operator/tigera-operator-76ff79f7fd-tnf5n" Jul 2 08:26:04.164885 kubelet[2534]: E0702 08:26:04.164845 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:04.165953 containerd[1450]: time="2024-07-02T08:26:04.165539546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2h6hz,Uid:261dfe78-a66b-402d-945d-7a195cb085c3,Namespace:kube-system,Attempt:0,}" Jul 2 08:26:04.191143 containerd[1450]: time="2024-07-02T08:26:04.191049232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:26:04.191295 containerd[1450]: time="2024-07-02T08:26:04.191118325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:26:04.191295 containerd[1450]: time="2024-07-02T08:26:04.191138849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:26:04.191295 containerd[1450]: time="2024-07-02T08:26:04.191153252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:26:04.215936 systemd[1]: Started cri-containerd-d53aa036edd827aac71eac7ece7395b3a53de0f3c904eefd741510a270e67de3.scope - libcontainer container d53aa036edd827aac71eac7ece7395b3a53de0f3c904eefd741510a270e67de3. Jul 2 08:26:04.233303 containerd[1450]: time="2024-07-02T08:26:04.233259053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2h6hz,Uid:261dfe78-a66b-402d-945d-7a195cb085c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"d53aa036edd827aac71eac7ece7395b3a53de0f3c904eefd741510a270e67de3\"" Jul 2 08:26:04.234069 kubelet[2534]: E0702 08:26:04.234034 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:04.237004 containerd[1450]: time="2024-07-02T08:26:04.236865241Z" level=info msg="CreateContainer within sandbox \"d53aa036edd827aac71eac7ece7395b3a53de0f3c904eefd741510a270e67de3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 08:26:04.247611 containerd[1450]: time="2024-07-02T08:26:04.247569224Z" level=info msg="CreateContainer within sandbox \"d53aa036edd827aac71eac7ece7395b3a53de0f3c904eefd741510a270e67de3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0a5ac4572c0cd0fb6ae9a646fe0e1c3e8852a69bf8ce49cdb81676d76d9ad8f9\"" Jul 2 08:26:04.248156 containerd[1450]: time="2024-07-02T08:26:04.248123526Z" level=info msg="StartContainer for \"0a5ac4572c0cd0fb6ae9a646fe0e1c3e8852a69bf8ce49cdb81676d76d9ad8f9\"" Jul 2 08:26:04.268858 systemd[1]: Started cri-containerd-0a5ac4572c0cd0fb6ae9a646fe0e1c3e8852a69bf8ce49cdb81676d76d9ad8f9.scope - libcontainer container 
0a5ac4572c0cd0fb6ae9a646fe0e1c3e8852a69bf8ce49cdb81676d76d9ad8f9. Jul 2 08:26:04.290279 containerd[1450]: time="2024-07-02T08:26:04.290236929Z" level=info msg="StartContainer for \"0a5ac4572c0cd0fb6ae9a646fe0e1c3e8852a69bf8ce49cdb81676d76d9ad8f9\" returns successfully" Jul 2 08:26:04.322367 containerd[1450]: time="2024-07-02T08:26:04.322313871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-tnf5n,Uid:d2ca8333-4ccb-475b-9188-7b67af92a42a,Namespace:tigera-operator,Attempt:0,}" Jul 2 08:26:04.341435 containerd[1450]: time="2024-07-02T08:26:04.341360600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:26:04.341435 containerd[1450]: time="2024-07-02T08:26:04.341412730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:26:04.341630 containerd[1450]: time="2024-07-02T08:26:04.341431613Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:26:04.341630 containerd[1450]: time="2024-07-02T08:26:04.341447616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:26:04.358899 systemd[1]: Started cri-containerd-50dc6baf197c3dfee165b3968360db8c33c30808d21d21995cffa944249fce2e.scope - libcontainer container 50dc6baf197c3dfee165b3968360db8c33c30808d21d21995cffa944249fce2e. 
Jul 2 08:26:04.396408 containerd[1450]: time="2024-07-02T08:26:04.396266292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-tnf5n,Uid:d2ca8333-4ccb-475b-9188-7b67af92a42a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"50dc6baf197c3dfee165b3968360db8c33c30808d21d21995cffa944249fce2e\"" Jul 2 08:26:04.400885 containerd[1450]: time="2024-07-02T08:26:04.400650024Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jul 2 08:26:04.442542 kubelet[2534]: E0702 08:26:04.441727 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:04.448954 kubelet[2534]: I0702 08:26:04.448776 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2h6hz" podStartSLOduration=1.448761578 podStartE2EDuration="1.448761578s" podCreationTimestamp="2024-07-02 08:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:26:04.448566262 +0000 UTC m=+17.130796395" watchObservedRunningTime="2024-07-02 08:26:04.448761578 +0000 UTC m=+17.130991631" Jul 2 08:26:05.292839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3767593866.mount: Deactivated successfully. 
Jul 2 08:26:05.740150 containerd[1450]: time="2024-07-02T08:26:05.740102252Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:05.740626 containerd[1450]: time="2024-07-02T08:26:05.740582897Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473622" Jul 2 08:26:05.741468 containerd[1450]: time="2024-07-02T08:26:05.741436328Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:05.744609 containerd[1450]: time="2024-07-02T08:26:05.744559760Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:05.745301 containerd[1450]: time="2024-07-02T08:26:05.745276246Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 1.344580053s" Jul 2 08:26:05.745480 containerd[1450]: time="2024-07-02T08:26:05.745378584Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jul 2 08:26:05.747485 containerd[1450]: time="2024-07-02T08:26:05.747341451Z" level=info msg="CreateContainer within sandbox \"50dc6baf197c3dfee165b3968360db8c33c30808d21d21995cffa944249fce2e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 2 08:26:05.757600 containerd[1450]: time="2024-07-02T08:26:05.757558296Z" level=info msg="CreateContainer within sandbox 
\"50dc6baf197c3dfee165b3968360db8c33c30808d21d21995cffa944249fce2e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4a58ce1d5652b2eca62c3565b2da9fe60a64f54c35574ad2ad58e7b143774a02\"" Jul 2 08:26:05.758079 containerd[1450]: time="2024-07-02T08:26:05.758008456Z" level=info msg="StartContainer for \"4a58ce1d5652b2eca62c3565b2da9fe60a64f54c35574ad2ad58e7b143774a02\"" Jul 2 08:26:05.780895 systemd[1]: Started cri-containerd-4a58ce1d5652b2eca62c3565b2da9fe60a64f54c35574ad2ad58e7b143774a02.scope - libcontainer container 4a58ce1d5652b2eca62c3565b2da9fe60a64f54c35574ad2ad58e7b143774a02. Jul 2 08:26:05.799061 containerd[1450]: time="2024-07-02T08:26:05.799024303Z" level=info msg="StartContainer for \"4a58ce1d5652b2eca62c3565b2da9fe60a64f54c35574ad2ad58e7b143774a02\" returns successfully" Jul 2 08:26:07.403805 kubelet[2534]: I0702 08:26:07.403636 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-tnf5n" podStartSLOduration=3.055679141 podStartE2EDuration="4.403621609s" podCreationTimestamp="2024-07-02 08:26:03 +0000 UTC" firstStartedPulling="2024-07-02 08:26:04.398206412 +0000 UTC m=+17.080436505" lastFinishedPulling="2024-07-02 08:26:05.74614888 +0000 UTC m=+18.428378973" observedRunningTime="2024-07-02 08:26:06.456501677 +0000 UTC m=+19.138731730" watchObservedRunningTime="2024-07-02 08:26:07.403621609 +0000 UTC m=+20.085851702" Jul 2 08:26:09.102422 kubelet[2534]: I0702 08:26:09.102371 2534 topology_manager.go:215] "Topology Admit Handler" podUID="1be63360-f5dc-4767-93b6-c76cde87959c" podNamespace="calico-system" podName="calico-typha-566647f96f-nrsrh" Jul 2 08:26:09.109337 kubelet[2534]: I0702 08:26:09.109192 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1be63360-f5dc-4767-93b6-c76cde87959c-tigera-ca-bundle\") pod \"calico-typha-566647f96f-nrsrh\" (UID: 
\"1be63360-f5dc-4767-93b6-c76cde87959c\") " pod="calico-system/calico-typha-566647f96f-nrsrh" Jul 2 08:26:09.109337 kubelet[2534]: I0702 08:26:09.109253 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1be63360-f5dc-4767-93b6-c76cde87959c-typha-certs\") pod \"calico-typha-566647f96f-nrsrh\" (UID: \"1be63360-f5dc-4767-93b6-c76cde87959c\") " pod="calico-system/calico-typha-566647f96f-nrsrh" Jul 2 08:26:09.109337 kubelet[2534]: I0702 08:26:09.109276 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tspvp\" (UniqueName: \"kubernetes.io/projected/1be63360-f5dc-4767-93b6-c76cde87959c-kube-api-access-tspvp\") pod \"calico-typha-566647f96f-nrsrh\" (UID: \"1be63360-f5dc-4767-93b6-c76cde87959c\") " pod="calico-system/calico-typha-566647f96f-nrsrh" Jul 2 08:26:09.116073 systemd[1]: Created slice kubepods-besteffort-pod1be63360_f5dc_4767_93b6_c76cde87959c.slice - libcontainer container kubepods-besteffort-pod1be63360_f5dc_4767_93b6_c76cde87959c.slice. Jul 2 08:26:09.151719 kubelet[2534]: I0702 08:26:09.151664 2534 topology_manager.go:215] "Topology Admit Handler" podUID="0aad8e29-e628-4f8a-8fb0-6a8507dc6c86" podNamespace="calico-system" podName="calico-node-l9pzc" Jul 2 08:26:09.160395 systemd[1]: Created slice kubepods-besteffort-pod0aad8e29_e628_4f8a_8fb0_6a8507dc6c86.slice - libcontainer container kubepods-besteffort-pod0aad8e29_e628_4f8a_8fb0_6a8507dc6c86.slice. 
Jul 2 08:26:09.210565 kubelet[2534]: I0702 08:26:09.210477 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0aad8e29-e628-4f8a-8fb0-6a8507dc6c86-lib-modules\") pod \"calico-node-l9pzc\" (UID: \"0aad8e29-e628-4f8a-8fb0-6a8507dc6c86\") " pod="calico-system/calico-node-l9pzc" Jul 2 08:26:09.210565 kubelet[2534]: I0702 08:26:09.210518 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0aad8e29-e628-4f8a-8fb0-6a8507dc6c86-node-certs\") pod \"calico-node-l9pzc\" (UID: \"0aad8e29-e628-4f8a-8fb0-6a8507dc6c86\") " pod="calico-system/calico-node-l9pzc" Jul 2 08:26:09.210565 kubelet[2534]: I0702 08:26:09.210538 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0aad8e29-e628-4f8a-8fb0-6a8507dc6c86-cni-log-dir\") pod \"calico-node-l9pzc\" (UID: \"0aad8e29-e628-4f8a-8fb0-6a8507dc6c86\") " pod="calico-system/calico-node-l9pzc" Jul 2 08:26:09.210565 kubelet[2534]: I0702 08:26:09.210555 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0aad8e29-e628-4f8a-8fb0-6a8507dc6c86-flexvol-driver-host\") pod \"calico-node-l9pzc\" (UID: \"0aad8e29-e628-4f8a-8fb0-6a8507dc6c86\") " pod="calico-system/calico-node-l9pzc" Jul 2 08:26:09.210565 kubelet[2534]: I0702 08:26:09.210574 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0aad8e29-e628-4f8a-8fb0-6a8507dc6c86-xtables-lock\") pod \"calico-node-l9pzc\" (UID: \"0aad8e29-e628-4f8a-8fb0-6a8507dc6c86\") " pod="calico-system/calico-node-l9pzc" Jul 2 08:26:09.210809 kubelet[2534]: I0702 08:26:09.210590 2534 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0aad8e29-e628-4f8a-8fb0-6a8507dc6c86-var-run-calico\") pod \"calico-node-l9pzc\" (UID: \"0aad8e29-e628-4f8a-8fb0-6a8507dc6c86\") " pod="calico-system/calico-node-l9pzc" Jul 2 08:26:09.210809 kubelet[2534]: I0702 08:26:09.210607 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0aad8e29-e628-4f8a-8fb0-6a8507dc6c86-var-lib-calico\") pod \"calico-node-l9pzc\" (UID: \"0aad8e29-e628-4f8a-8fb0-6a8507dc6c86\") " pod="calico-system/calico-node-l9pzc" Jul 2 08:26:09.210809 kubelet[2534]: I0702 08:26:09.210624 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bqw7\" (UniqueName: \"kubernetes.io/projected/0aad8e29-e628-4f8a-8fb0-6a8507dc6c86-kube-api-access-8bqw7\") pod \"calico-node-l9pzc\" (UID: \"0aad8e29-e628-4f8a-8fb0-6a8507dc6c86\") " pod="calico-system/calico-node-l9pzc" Jul 2 08:26:09.210809 kubelet[2534]: I0702 08:26:09.210638 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0aad8e29-e628-4f8a-8fb0-6a8507dc6c86-tigera-ca-bundle\") pod \"calico-node-l9pzc\" (UID: \"0aad8e29-e628-4f8a-8fb0-6a8507dc6c86\") " pod="calico-system/calico-node-l9pzc" Jul 2 08:26:09.210809 kubelet[2534]: I0702 08:26:09.210652 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0aad8e29-e628-4f8a-8fb0-6a8507dc6c86-cni-bin-dir\") pod \"calico-node-l9pzc\" (UID: \"0aad8e29-e628-4f8a-8fb0-6a8507dc6c86\") " pod="calico-system/calico-node-l9pzc" Jul 2 08:26:09.210918 kubelet[2534]: I0702 08:26:09.210666 2534 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0aad8e29-e628-4f8a-8fb0-6a8507dc6c86-cni-net-dir\") pod \"calico-node-l9pzc\" (UID: \"0aad8e29-e628-4f8a-8fb0-6a8507dc6c86\") " pod="calico-system/calico-node-l9pzc" Jul 2 08:26:09.210918 kubelet[2534]: I0702 08:26:09.210694 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0aad8e29-e628-4f8a-8fb0-6a8507dc6c86-policysync\") pod \"calico-node-l9pzc\" (UID: \"0aad8e29-e628-4f8a-8fb0-6a8507dc6c86\") " pod="calico-system/calico-node-l9pzc" Jul 2 08:26:09.264895 kubelet[2534]: I0702 08:26:09.264805 2534 topology_manager.go:215] "Topology Admit Handler" podUID="84bf0464-f310-4576-b2d3-b41310345c86" podNamespace="calico-system" podName="csi-node-driver-7gjkg" Jul 2 08:26:09.265246 kubelet[2534]: E0702 08:26:09.265209 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gjkg" podUID="84bf0464-f310-4576-b2d3-b41310345c86" Jul 2 08:26:09.311730 kubelet[2534]: I0702 08:26:09.311678 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84bf0464-f310-4576-b2d3-b41310345c86-kubelet-dir\") pod \"csi-node-driver-7gjkg\" (UID: \"84bf0464-f310-4576-b2d3-b41310345c86\") " pod="calico-system/csi-node-driver-7gjkg" Jul 2 08:26:09.311855 kubelet[2534]: I0702 08:26:09.311765 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/84bf0464-f310-4576-b2d3-b41310345c86-socket-dir\") pod \"csi-node-driver-7gjkg\" (UID: \"84bf0464-f310-4576-b2d3-b41310345c86\") 
" pod="calico-system/csi-node-driver-7gjkg" Jul 2 08:26:09.311855 kubelet[2534]: I0702 08:26:09.311818 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/84bf0464-f310-4576-b2d3-b41310345c86-registration-dir\") pod \"csi-node-driver-7gjkg\" (UID: \"84bf0464-f310-4576-b2d3-b41310345c86\") " pod="calico-system/csi-node-driver-7gjkg" Jul 2 08:26:09.311855 kubelet[2534]: I0702 08:26:09.311837 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw45h\" (UniqueName: \"kubernetes.io/projected/84bf0464-f310-4576-b2d3-b41310345c86-kube-api-access-kw45h\") pod \"csi-node-driver-7gjkg\" (UID: \"84bf0464-f310-4576-b2d3-b41310345c86\") " pod="calico-system/csi-node-driver-7gjkg" Jul 2 08:26:09.311855 kubelet[2534]: I0702 08:26:09.311853 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/84bf0464-f310-4576-b2d3-b41310345c86-varrun\") pod \"csi-node-driver-7gjkg\" (UID: \"84bf0464-f310-4576-b2d3-b41310345c86\") " pod="calico-system/csi-node-driver-7gjkg" Jul 2 08:26:09.313374 kubelet[2534]: E0702 08:26:09.313335 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.313374 kubelet[2534]: W0702 08:26:09.313365 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.313496 kubelet[2534]: E0702 08:26:09.313382 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:26:09.318216 kubelet[2534]: E0702 08:26:09.318195 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.318216 kubelet[2534]: W0702 08:26:09.318214 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.318318 kubelet[2534]: E0702 08:26:09.318229 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:26:09.325557 kubelet[2534]: E0702 08:26:09.325462 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.325557 kubelet[2534]: W0702 08:26:09.325493 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.325557 kubelet[2534]: E0702 08:26:09.325512 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:26:09.412931 kubelet[2534]: E0702 08:26:09.412896 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.412931 kubelet[2534]: W0702 08:26:09.412923 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.413112 kubelet[2534]: E0702 08:26:09.412947 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:26:09.413211 kubelet[2534]: E0702 08:26:09.413197 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.413211 kubelet[2534]: W0702 08:26:09.413210 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.413272 kubelet[2534]: E0702 08:26:09.413227 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:26:09.414204 kubelet[2534]: E0702 08:26:09.413412 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.414204 kubelet[2534]: W0702 08:26:09.414069 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.414204 kubelet[2534]: E0702 08:26:09.414088 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:26:09.414400 kubelet[2534]: E0702 08:26:09.414389 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.414456 kubelet[2534]: W0702 08:26:09.414447 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.414595 kubelet[2534]: E0702 08:26:09.414560 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:26:09.414701 kubelet[2534]: E0702 08:26:09.414691 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.414815 kubelet[2534]: W0702 08:26:09.414769 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.415052 kubelet[2534]: E0702 08:26:09.415020 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:26:09.415391 kubelet[2534]: E0702 08:26:09.415328 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.415391 kubelet[2534]: W0702 08:26:09.415341 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.415601 kubelet[2534]: E0702 08:26:09.415516 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:26:09.415704 kubelet[2534]: E0702 08:26:09.415694 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.415816 kubelet[2534]: W0702 08:26:09.415774 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.415898 kubelet[2534]: E0702 08:26:09.415874 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:26:09.416105 kubelet[2534]: E0702 08:26:09.416059 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.416105 kubelet[2534]: W0702 08:26:09.416070 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.416253 kubelet[2534]: E0702 08:26:09.416192 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:26:09.416752 kubelet[2534]: E0702 08:26:09.416642 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.416752 kubelet[2534]: W0702 08:26:09.416653 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.416752 kubelet[2534]: E0702 08:26:09.416681 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:26:09.417011 kubelet[2534]: E0702 08:26:09.416977 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.417011 kubelet[2534]: W0702 08:26:09.416994 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.417241 kubelet[2534]: E0702 08:26:09.417158 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:26:09.417347 kubelet[2534]: E0702 08:26:09.417338 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.417503 kubelet[2534]: W0702 08:26:09.417401 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.417587 kubelet[2534]: E0702 08:26:09.417575 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:26:09.417720 kubelet[2534]: E0702 08:26:09.417680 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.417720 kubelet[2534]: W0702 08:26:09.417690 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.417909 kubelet[2534]: E0702 08:26:09.417821 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:26:09.418054 kubelet[2534]: E0702 08:26:09.418044 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.418148 kubelet[2534]: W0702 08:26:09.418105 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.418262 kubelet[2534]: E0702 08:26:09.418197 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:26:09.418348 kubelet[2534]: E0702 08:26:09.418339 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.418393 kubelet[2534]: W0702 08:26:09.418384 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.418501 kubelet[2534]: E0702 08:26:09.418490 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:26:09.419222 kubelet[2534]: E0702 08:26:09.419135 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.419222 kubelet[2534]: W0702 08:26:09.419147 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.419622 kubelet[2534]: E0702 08:26:09.419437 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:26:09.419622 kubelet[2534]: E0702 08:26:09.419543 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.419622 kubelet[2534]: W0702 08:26:09.419550 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.419839 kubelet[2534]: E0702 08:26:09.419764 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:26:09.420007 kubelet[2534]: E0702 08:26:09.419928 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.420007 kubelet[2534]: W0702 08:26:09.419948 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.420007 kubelet[2534]: E0702 08:26:09.419987 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:26:09.420728 kubelet[2534]: E0702 08:26:09.420440 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.420728 kubelet[2534]: W0702 08:26:09.420453 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.420728 kubelet[2534]: E0702 08:26:09.420506 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:26:09.421092 kubelet[2534]: E0702 08:26:09.420854 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.421092 kubelet[2534]: W0702 08:26:09.420863 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.421092 kubelet[2534]: E0702 08:26:09.420944 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:26:09.421436 kubelet[2534]: E0702 08:26:09.421347 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:09.421856 kubelet[2534]: E0702 08:26:09.421766 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.421856 kubelet[2534]: W0702 08:26:09.421779 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.421856 kubelet[2534]: E0702 08:26:09.421812 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:26:09.422872 kubelet[2534]: E0702 08:26:09.422002 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.422872 kubelet[2534]: W0702 08:26:09.422010 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.422872 kubelet[2534]: E0702 08:26:09.422090 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:26:09.422872 kubelet[2534]: E0702 08:26:09.422192 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.422872 kubelet[2534]: W0702 08:26:09.422199 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.422872 kubelet[2534]: E0702 08:26:09.422256 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:26:09.422872 kubelet[2534]: E0702 08:26:09.422354 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.422872 kubelet[2534]: W0702 08:26:09.422361 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.422872 kubelet[2534]: E0702 08:26:09.422409 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:26:09.422872 kubelet[2534]: E0702 08:26:09.422567 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.423070 containerd[1450]: time="2024-07-02T08:26:09.421775617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-566647f96f-nrsrh,Uid:1be63360-f5dc-4767-93b6-c76cde87959c,Namespace:calico-system,Attempt:0,}" Jul 2 08:26:09.423464 kubelet[2534]: W0702 08:26:09.422575 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.423464 kubelet[2534]: E0702 08:26:09.422589 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:26:09.423464 kubelet[2534]: E0702 08:26:09.423306 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.423464 kubelet[2534]: W0702 08:26:09.423328 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.423464 kubelet[2534]: E0702 08:26:09.423341 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 08:26:09.434955 kubelet[2534]: E0702 08:26:09.434760 2534 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 08:26:09.434955 kubelet[2534]: W0702 08:26:09.434803 2534 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 08:26:09.434955 kubelet[2534]: E0702 08:26:09.434823 2534 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 08:26:09.464599 kubelet[2534]: E0702 08:26:09.464198 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:09.465040 containerd[1450]: time="2024-07-02T08:26:09.464961505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l9pzc,Uid:0aad8e29-e628-4f8a-8fb0-6a8507dc6c86,Namespace:calico-system,Attempt:0,}" Jul 2 08:26:09.510648 containerd[1450]: time="2024-07-02T08:26:09.510201976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:26:09.511403 containerd[1450]: time="2024-07-02T08:26:09.511347225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:26:09.511522 containerd[1450]: time="2024-07-02T08:26:09.511381390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:26:09.511522 containerd[1450]: time="2024-07-02T08:26:09.511505048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:26:09.513792 containerd[1450]: time="2024-07-02T08:26:09.513354121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:26:09.513792 containerd[1450]: time="2024-07-02T08:26:09.513763101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:26:09.513972 containerd[1450]: time="2024-07-02T08:26:09.513779303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:26:09.513972 containerd[1450]: time="2024-07-02T08:26:09.513808548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:26:09.534976 systemd[1]: Started cri-containerd-02dbfc4048037adcd40e941424277e78fb239ea3da116e5e6f1b76e74260a736.scope - libcontainer container 02dbfc4048037adcd40e941424277e78fb239ea3da116e5e6f1b76e74260a736. Jul 2 08:26:09.539207 systemd[1]: Started cri-containerd-3c56e7c6f4c735d1b07d4574d55b494dc82c42ee1ccc1dd4b43d2b3efa92ca45.scope - libcontainer container 3c56e7c6f4c735d1b07d4574d55b494dc82c42ee1ccc1dd4b43d2b3efa92ca45. Jul 2 08:26:09.586629 containerd[1450]: time="2024-07-02T08:26:09.586587159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l9pzc,Uid:0aad8e29-e628-4f8a-8fb0-6a8507dc6c86,Namespace:calico-system,Attempt:0,} returns sandbox id \"3c56e7c6f4c735d1b07d4574d55b494dc82c42ee1ccc1dd4b43d2b3efa92ca45\"" Jul 2 08:26:09.590286 kubelet[2534]: E0702 08:26:09.590262 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:09.596262 containerd[1450]: time="2024-07-02T08:26:09.595598488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-566647f96f-nrsrh,Uid:1be63360-f5dc-4767-93b6-c76cde87959c,Namespace:calico-system,Attempt:0,} returns sandbox id \"02dbfc4048037adcd40e941424277e78fb239ea3da116e5e6f1b76e74260a736\"" Jul 2 08:26:09.596364 kubelet[2534]: E0702 08:26:09.596342 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:09.597199 containerd[1450]: time="2024-07-02T08:26:09.597169679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 
08:26:10.631671 containerd[1450]: time="2024-07-02T08:26:10.631588410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:10.632187 containerd[1450]: time="2024-07-02T08:26:10.632155370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jul 2 08:26:10.633019 containerd[1450]: time="2024-07-02T08:26:10.632983047Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:10.634925 containerd[1450]: time="2024-07-02T08:26:10.634881596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:10.635729 containerd[1450]: time="2024-07-02T08:26:10.635637822Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.038435378s" Jul 2 08:26:10.635729 containerd[1450]: time="2024-07-02T08:26:10.635679188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jul 2 08:26:10.636905 containerd[1450]: time="2024-07-02T08:26:10.636770782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 08:26:10.639119 containerd[1450]: time="2024-07-02T08:26:10.639072868Z" level=info msg="CreateContainer within sandbox 
\"3c56e7c6f4c735d1b07d4574d55b494dc82c42ee1ccc1dd4b43d2b3efa92ca45\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 08:26:10.651908 containerd[1450]: time="2024-07-02T08:26:10.651844552Z" level=info msg="CreateContainer within sandbox \"3c56e7c6f4c735d1b07d4574d55b494dc82c42ee1ccc1dd4b43d2b3efa92ca45\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"711fdf12636e2b4a52e40885066d1164f013df64dec1d0afdf9bdd611e858dc1\"" Jul 2 08:26:10.652435 containerd[1450]: time="2024-07-02T08:26:10.652400590Z" level=info msg="StartContainer for \"711fdf12636e2b4a52e40885066d1164f013df64dec1d0afdf9bdd611e858dc1\"" Jul 2 08:26:10.685235 systemd[1]: Started cri-containerd-711fdf12636e2b4a52e40885066d1164f013df64dec1d0afdf9bdd611e858dc1.scope - libcontainer container 711fdf12636e2b4a52e40885066d1164f013df64dec1d0afdf9bdd611e858dc1. Jul 2 08:26:10.710966 containerd[1450]: time="2024-07-02T08:26:10.710897652Z" level=info msg="StartContainer for \"711fdf12636e2b4a52e40885066d1164f013df64dec1d0afdf9bdd611e858dc1\" returns successfully" Jul 2 08:26:10.753489 systemd[1]: cri-containerd-711fdf12636e2b4a52e40885066d1164f013df64dec1d0afdf9bdd611e858dc1.scope: Deactivated successfully. Jul 2 08:26:10.771980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-711fdf12636e2b4a52e40885066d1164f013df64dec1d0afdf9bdd611e858dc1-rootfs.mount: Deactivated successfully. 
Jul 2 08:26:10.782892 containerd[1450]: time="2024-07-02T08:26:10.782839534Z" level=info msg="shim disconnected" id=711fdf12636e2b4a52e40885066d1164f013df64dec1d0afdf9bdd611e858dc1 namespace=k8s.io Jul 2 08:26:10.782892 containerd[1450]: time="2024-07-02T08:26:10.782888661Z" level=warning msg="cleaning up after shim disconnected" id=711fdf12636e2b4a52e40885066d1164f013df64dec1d0afdf9bdd611e858dc1 namespace=k8s.io Jul 2 08:26:10.782892 containerd[1450]: time="2024-07-02T08:26:10.782897062Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:26:11.394180 kubelet[2534]: E0702 08:26:11.394140 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gjkg" podUID="84bf0464-f310-4576-b2d3-b41310345c86" Jul 2 08:26:11.468057 kubelet[2534]: E0702 08:26:11.467880 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:13.080402 containerd[1450]: time="2024-07-02T08:26:13.079963412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:13.080946 containerd[1450]: time="2024-07-02T08:26:13.080584650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jul 2 08:26:13.081328 containerd[1450]: time="2024-07-02T08:26:13.081265295Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:13.083468 containerd[1450]: time="2024-07-02T08:26:13.083396601Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:13.084793 containerd[1450]: time="2024-07-02T08:26:13.084423689Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 2.447510687s" Jul 2 08:26:13.084793 containerd[1450]: time="2024-07-02T08:26:13.084462014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jul 2 08:26:13.085736 containerd[1450]: time="2024-07-02T08:26:13.085695208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 08:26:13.094740 containerd[1450]: time="2024-07-02T08:26:13.094486586Z" level=info msg="CreateContainer within sandbox \"02dbfc4048037adcd40e941424277e78fb239ea3da116e5e6f1b76e74260a736\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 08:26:13.104515 containerd[1450]: time="2024-07-02T08:26:13.104477033Z" level=info msg="CreateContainer within sandbox \"02dbfc4048037adcd40e941424277e78fb239ea3da116e5e6f1b76e74260a736\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"62b334deb28e7965ee76519b21af9cc23795bf564f23d73d410316073a636bfd\"" Jul 2 08:26:13.104898 containerd[1450]: time="2024-07-02T08:26:13.104848880Z" level=info msg="StartContainer for \"62b334deb28e7965ee76519b21af9cc23795bf564f23d73d410316073a636bfd\"" Jul 2 08:26:13.136854 systemd[1]: Started cri-containerd-62b334deb28e7965ee76519b21af9cc23795bf564f23d73d410316073a636bfd.scope - libcontainer container 
62b334deb28e7965ee76519b21af9cc23795bf564f23d73d410316073a636bfd. Jul 2 08:26:13.177046 containerd[1450]: time="2024-07-02T08:26:13.177005649Z" level=info msg="StartContainer for \"62b334deb28e7965ee76519b21af9cc23795bf564f23d73d410316073a636bfd\" returns successfully" Jul 2 08:26:13.391879 kubelet[2534]: E0702 08:26:13.391324 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gjkg" podUID="84bf0464-f310-4576-b2d3-b41310345c86" Jul 2 08:26:13.480755 kubelet[2534]: E0702 08:26:13.480621 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:13.498579 kubelet[2534]: I0702 08:26:13.498474 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-566647f96f-nrsrh" podStartSLOduration=1.010592668 podStartE2EDuration="4.498456425s" podCreationTimestamp="2024-07-02 08:26:09 +0000 UTC" firstStartedPulling="2024-07-02 08:26:09.5975835 +0000 UTC m=+22.279813593" lastFinishedPulling="2024-07-02 08:26:13.085447257 +0000 UTC m=+25.767677350" observedRunningTime="2024-07-02 08:26:13.497764579 +0000 UTC m=+26.179994672" watchObservedRunningTime="2024-07-02 08:26:13.498456425 +0000 UTC m=+26.180686518" Jul 2 08:26:14.482968 kubelet[2534]: I0702 08:26:14.482889 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 08:26:14.483787 kubelet[2534]: E0702 08:26:14.483741 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:15.391777 kubelet[2534]: E0702 08:26:15.391638 2534 pod_workers.go:1298] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7gjkg" podUID="84bf0464-f310-4576-b2d3-b41310345c86" Jul 2 08:26:16.249835 containerd[1450]: time="2024-07-02T08:26:16.249605566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:16.250864 containerd[1450]: time="2024-07-02T08:26:16.250674925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jul 2 08:26:16.251728 containerd[1450]: time="2024-07-02T08:26:16.251672316Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:16.254290 containerd[1450]: time="2024-07-02T08:26:16.254256764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:16.255251 containerd[1450]: time="2024-07-02T08:26:16.254932640Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 3.169033846s" Jul 2 08:26:16.255251 containerd[1450]: time="2024-07-02T08:26:16.254973844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jul 2 08:26:16.256928 containerd[1450]: time="2024-07-02T08:26:16.256896778Z" level=info msg="CreateContainer 
within sandbox \"3c56e7c6f4c735d1b07d4574d55b494dc82c42ee1ccc1dd4b43d2b3efa92ca45\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 08:26:16.272117 containerd[1450]: time="2024-07-02T08:26:16.272072068Z" level=info msg="CreateContainer within sandbox \"3c56e7c6f4c735d1b07d4574d55b494dc82c42ee1ccc1dd4b43d2b3efa92ca45\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"77eeb8c0f9b3d1a1a3c6eb470506acf0813e67b0b228f7ad4159203af44ec77a\"" Jul 2 08:26:16.274981 containerd[1450]: time="2024-07-02T08:26:16.272487274Z" level=info msg="StartContainer for \"77eeb8c0f9b3d1a1a3c6eb470506acf0813e67b0b228f7ad4159203af44ec77a\"" Jul 2 08:26:16.302861 systemd[1]: Started cri-containerd-77eeb8c0f9b3d1a1a3c6eb470506acf0813e67b0b228f7ad4159203af44ec77a.scope - libcontainer container 77eeb8c0f9b3d1a1a3c6eb470506acf0813e67b0b228f7ad4159203af44ec77a. Jul 2 08:26:16.375118 containerd[1450]: time="2024-07-02T08:26:16.374339656Z" level=info msg="StartContainer for \"77eeb8c0f9b3d1a1a3c6eb470506acf0813e67b0b228f7ad4159203af44ec77a\" returns successfully" Jul 2 08:26:16.490497 kubelet[2534]: E0702 08:26:16.490370 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:16.767394 containerd[1450]: time="2024-07-02T08:26:16.767305015Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 08:26:16.769136 systemd[1]: cri-containerd-77eeb8c0f9b3d1a1a3c6eb470506acf0813e67b0b228f7ad4159203af44ec77a.scope: Deactivated successfully. Jul 2 08:26:16.786669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77eeb8c0f9b3d1a1a3c6eb470506acf0813e67b0b228f7ad4159203af44ec77a-rootfs.mount: Deactivated successfully. 
Jul 2 08:26:16.798631 containerd[1450]: time="2024-07-02T08:26:16.798552255Z" level=info msg="shim disconnected" id=77eeb8c0f9b3d1a1a3c6eb470506acf0813e67b0b228f7ad4159203af44ec77a namespace=k8s.io Jul 2 08:26:16.798631 containerd[1450]: time="2024-07-02T08:26:16.798632544Z" level=warning msg="cleaning up after shim disconnected" id=77eeb8c0f9b3d1a1a3c6eb470506acf0813e67b0b228f7ad4159203af44ec77a namespace=k8s.io Jul 2 08:26:16.798631 containerd[1450]: time="2024-07-02T08:26:16.798641865Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 08:26:16.827476 kubelet[2534]: I0702 08:26:16.827442 2534 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 08:26:16.846567 kubelet[2534]: I0702 08:26:16.846239 2534 topology_manager.go:215] "Topology Admit Handler" podUID="d162ea36-1704-488d-8dd5-39e7c8e42991" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rljt6" Jul 2 08:26:16.846687 kubelet[2534]: I0702 08:26:16.846661 2534 topology_manager.go:215] "Topology Admit Handler" podUID="d914a969-86d7-48ab-ad5e-c9413cd5d500" podNamespace="calico-system" podName="calico-kube-controllers-855cc674f5-jdjkz" Jul 2 08:26:16.853217 kubelet[2534]: I0702 08:26:16.852690 2534 topology_manager.go:215] "Topology Admit Handler" podUID="c910d3f1-3683-45dd-a240-2bad751e8d45" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tvnp6" Jul 2 08:26:16.856421 systemd[1]: Created slice kubepods-burstable-podd162ea36_1704_488d_8dd5_39e7c8e42991.slice - libcontainer container kubepods-burstable-podd162ea36_1704_488d_8dd5_39e7c8e42991.slice. Jul 2 08:26:16.865136 systemd[1]: Created slice kubepods-besteffort-podd914a969_86d7_48ab_ad5e_c9413cd5d500.slice - libcontainer container kubepods-besteffort-podd914a969_86d7_48ab_ad5e_c9413cd5d500.slice. Jul 2 08:26:16.875825 systemd[1]: Created slice kubepods-burstable-podc910d3f1_3683_45dd_a240_2bad751e8d45.slice - libcontainer container kubepods-burstable-podc910d3f1_3683_45dd_a240_2bad751e8d45.slice. 
Jul 2 08:26:16.877250 kubelet[2534]: I0702 08:26:16.877187 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d162ea36-1704-488d-8dd5-39e7c8e42991-config-volume\") pod \"coredns-7db6d8ff4d-rljt6\" (UID: \"d162ea36-1704-488d-8dd5-39e7c8e42991\") " pod="kube-system/coredns-7db6d8ff4d-rljt6" Jul 2 08:26:16.877697 kubelet[2534]: I0702 08:26:16.877501 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c910d3f1-3683-45dd-a240-2bad751e8d45-config-volume\") pod \"coredns-7db6d8ff4d-tvnp6\" (UID: \"c910d3f1-3683-45dd-a240-2bad751e8d45\") " pod="kube-system/coredns-7db6d8ff4d-tvnp6" Jul 2 08:26:16.877697 kubelet[2534]: I0702 08:26:16.877528 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d914a969-86d7-48ab-ad5e-c9413cd5d500-tigera-ca-bundle\") pod \"calico-kube-controllers-855cc674f5-jdjkz\" (UID: \"d914a969-86d7-48ab-ad5e-c9413cd5d500\") " pod="calico-system/calico-kube-controllers-855cc674f5-jdjkz" Jul 2 08:26:16.877697 kubelet[2534]: I0702 08:26:16.877555 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl49z\" (UniqueName: \"kubernetes.io/projected/d162ea36-1704-488d-8dd5-39e7c8e42991-kube-api-access-wl49z\") pod \"coredns-7db6d8ff4d-rljt6\" (UID: \"d162ea36-1704-488d-8dd5-39e7c8e42991\") " pod="kube-system/coredns-7db6d8ff4d-rljt6" Jul 2 08:26:16.877697 kubelet[2534]: I0702 08:26:16.877589 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hvf6\" (UniqueName: \"kubernetes.io/projected/d914a969-86d7-48ab-ad5e-c9413cd5d500-kube-api-access-4hvf6\") pod \"calico-kube-controllers-855cc674f5-jdjkz\" (UID: 
\"d914a969-86d7-48ab-ad5e-c9413cd5d500\") " pod="calico-system/calico-kube-controllers-855cc674f5-jdjkz" Jul 2 08:26:16.877697 kubelet[2534]: I0702 08:26:16.877620 2534 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62shl\" (UniqueName: \"kubernetes.io/projected/c910d3f1-3683-45dd-a240-2bad751e8d45-kube-api-access-62shl\") pod \"coredns-7db6d8ff4d-tvnp6\" (UID: \"c910d3f1-3683-45dd-a240-2bad751e8d45\") " pod="kube-system/coredns-7db6d8ff4d-tvnp6" Jul 2 08:26:17.162113 kubelet[2534]: E0702 08:26:17.162062 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:17.164488 containerd[1450]: time="2024-07-02T08:26:17.163835647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rljt6,Uid:d162ea36-1704-488d-8dd5-39e7c8e42991,Namespace:kube-system,Attempt:0,}" Jul 2 08:26:17.171488 containerd[1450]: time="2024-07-02T08:26:17.171124550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-855cc674f5-jdjkz,Uid:d914a969-86d7-48ab-ad5e-c9413cd5d500,Namespace:calico-system,Attempt:0,}" Jul 2 08:26:17.181598 kubelet[2534]: E0702 08:26:17.181567 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:17.183478 containerd[1450]: time="2024-07-02T08:26:17.183263854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tvnp6,Uid:c910d3f1-3683-45dd-a240-2bad751e8d45,Namespace:kube-system,Attempt:0,}" Jul 2 08:26:17.402819 systemd[1]: Created slice kubepods-besteffort-pod84bf0464_f310_4576_b2d3_b41310345c86.slice - libcontainer container kubepods-besteffort-pod84bf0464_f310_4576_b2d3_b41310345c86.slice. 
Jul 2 08:26:17.405505 containerd[1450]: time="2024-07-02T08:26:17.405448717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gjkg,Uid:84bf0464-f310-4576-b2d3-b41310345c86,Namespace:calico-system,Attempt:0,}" Jul 2 08:26:17.422171 containerd[1450]: time="2024-07-02T08:26:17.422057741Z" level=error msg="Failed to destroy network for sandbox \"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:26:17.425155 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16-shm.mount: Deactivated successfully. Jul 2 08:26:17.425291 containerd[1450]: time="2024-07-02T08:26:17.425228842Z" level=error msg="encountered an error cleaning up failed sandbox \"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:26:17.425329 containerd[1450]: time="2024-07-02T08:26:17.425294529Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tvnp6,Uid:c910d3f1-3683-45dd-a240-2bad751e8d45,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:26:17.425547 kubelet[2534]: E0702 08:26:17.425491 2534 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:26:17.425604 kubelet[2534]: E0702 08:26:17.425564 2534 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tvnp6" Jul 2 08:26:17.425604 kubelet[2534]: E0702 08:26:17.425582 2534 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tvnp6" Jul 2 08:26:17.425653 kubelet[2534]: E0702 08:26:17.425623 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tvnp6_kube-system(c910d3f1-3683-45dd-a240-2bad751e8d45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tvnp6_kube-system(c910d3f1-3683-45dd-a240-2bad751e8d45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tvnp6" 
podUID="c910d3f1-3683-45dd-a240-2bad751e8d45" Jul 2 08:26:17.426587 containerd[1450]: time="2024-07-02T08:26:17.426547143Z" level=error msg="Failed to destroy network for sandbox \"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:26:17.426989 containerd[1450]: time="2024-07-02T08:26:17.426951307Z" level=error msg="encountered an error cleaning up failed sandbox \"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:26:17.427050 containerd[1450]: time="2024-07-02T08:26:17.427007833Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rljt6,Uid:d162ea36-1704-488d-8dd5-39e7c8e42991,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:26:17.428469 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56-shm.mount: Deactivated successfully. 
Jul 2 08:26:17.430139 kubelet[2534]: E0702 08:26:17.430104 2534 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:26:17.430225 kubelet[2534]: E0702 08:26:17.430155 2534 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rljt6" Jul 2 08:26:17.430225 kubelet[2534]: E0702 08:26:17.430174 2534 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rljt6" Jul 2 08:26:17.430225 kubelet[2534]: E0702 08:26:17.430208 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-rljt6_kube-system(d162ea36-1704-488d-8dd5-39e7c8e42991)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-rljt6_kube-system(d162ea36-1704-488d-8dd5-39e7c8e42991)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rljt6" podUID="d162ea36-1704-488d-8dd5-39e7c8e42991" Jul 2 08:26:17.438911 containerd[1450]: time="2024-07-02T08:26:17.438856106Z" level=error msg="Failed to destroy network for sandbox \"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:26:17.439360 containerd[1450]: time="2024-07-02T08:26:17.439232506Z" level=error msg="encountered an error cleaning up failed sandbox \"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:26:17.439360 containerd[1450]: time="2024-07-02T08:26:17.439286512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-855cc674f5-jdjkz,Uid:d914a969-86d7-48ab-ad5e-c9413cd5d500,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:26:17.439528 kubelet[2534]: E0702 08:26:17.439484 2534 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:26:17.439563 kubelet[2534]: E0702 08:26:17.439530 2534 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-855cc674f5-jdjkz" Jul 2 08:26:17.439563 kubelet[2534]: E0702 08:26:17.439550 2534 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-855cc674f5-jdjkz" Jul 2 08:26:17.439615 kubelet[2534]: E0702 08:26:17.439579 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-855cc674f5-jdjkz_calico-system(d914a969-86d7-48ab-ad5e-c9413cd5d500)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-855cc674f5-jdjkz_calico-system(d914a969-86d7-48ab-ad5e-c9413cd5d500)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-855cc674f5-jdjkz" podUID="d914a969-86d7-48ab-ad5e-c9413cd5d500" Jul 2 08:26:17.467143 containerd[1450]: 
time="2024-07-02T08:26:17.467088018Z" level=error msg="Failed to destroy network for sandbox \"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:26:17.467427 containerd[1450]: time="2024-07-02T08:26:17.467383370Z" level=error msg="encountered an error cleaning up failed sandbox \"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:26:17.467484 containerd[1450]: time="2024-07-02T08:26:17.467437215Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gjkg,Uid:84bf0464-f310-4576-b2d3-b41310345c86,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:26:17.467750 kubelet[2534]: E0702 08:26:17.467617 2534 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:26:17.467750 kubelet[2534]: E0702 08:26:17.467665 2534 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7gjkg" Jul 2 08:26:17.467750 kubelet[2534]: E0702 08:26:17.467681 2534 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7gjkg" Jul 2 08:26:17.467972 kubelet[2534]: E0702 08:26:17.467918 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7gjkg_calico-system(84bf0464-f310-4576-b2d3-b41310345c86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7gjkg_calico-system(84bf0464-f310-4576-b2d3-b41310345c86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7gjkg" podUID="84bf0464-f310-4576-b2d3-b41310345c86" Jul 2 08:26:17.498026 kubelet[2534]: E0702 08:26:17.496869 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:17.498944 kubelet[2534]: I0702 08:26:17.498752 2534 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Jul 2 08:26:17.499657 containerd[1450]: time="2024-07-02T08:26:17.499532382Z" level=info msg="StopPodSandbox for \"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\"" Jul 2 08:26:17.499831 containerd[1450]: time="2024-07-02T08:26:17.499757687Z" level=info msg="Ensure that sandbox 4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6 in task-service has been cleanup successfully" Jul 2 08:26:17.500409 kubelet[2534]: I0702 08:26:17.500387 2534 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Jul 2 08:26:17.501031 containerd[1450]: time="2024-07-02T08:26:17.500827482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 08:26:17.502883 kubelet[2534]: I0702 08:26:17.502678 2534 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Jul 2 08:26:17.504079 containerd[1450]: time="2024-07-02T08:26:17.503462685Z" level=info msg="StopPodSandbox for \"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\"" Jul 2 08:26:17.504079 containerd[1450]: time="2024-07-02T08:26:17.503774038Z" level=info msg="StopPodSandbox for \"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\"" Jul 2 08:26:17.504079 containerd[1450]: time="2024-07-02T08:26:17.504047107Z" level=info msg="Ensure that sandbox bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61 in task-service has been cleanup successfully" Jul 2 08:26:17.504643 containerd[1450]: time="2024-07-02T08:26:17.504596166Z" level=info msg="Ensure that sandbox 70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16 in task-service has been cleanup successfully" Jul 2 08:26:17.505115 kubelet[2534]: I0702 08:26:17.505084 2534 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Jul 2 08:26:17.506001 containerd[1450]: time="2024-07-02T08:26:17.505921229Z" level=info msg="StopPodSandbox for \"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\"" Jul 2 08:26:17.507087 containerd[1450]: time="2024-07-02T08:26:17.506995704Z" level=info msg="Ensure that sandbox ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56 in task-service has been cleanup successfully" Jul 2 08:26:17.539284 containerd[1450]: time="2024-07-02T08:26:17.539212444Z" level=error msg="StopPodSandbox for \"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\" failed" error="failed to destroy network for sandbox \"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:26:17.541540 kubelet[2534]: E0702 08:26:17.539428 2534 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Jul 2 08:26:17.541540 kubelet[2534]: E0702 08:26:17.539493 2534 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6"} Jul 2 08:26:17.541540 kubelet[2534]: E0702 08:26:17.539551 2534 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d914a969-86d7-48ab-ad5e-c9413cd5d500\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 08:26:17.541540 kubelet[2534]: E0702 08:26:17.539572 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d914a969-86d7-48ab-ad5e-c9413cd5d500\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-855cc674f5-jdjkz" podUID="d914a969-86d7-48ab-ad5e-c9413cd5d500" Jul 2 08:26:17.547169 containerd[1450]: time="2024-07-02T08:26:17.547108692Z" level=error msg="StopPodSandbox for \"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\" failed" error="failed to destroy network for sandbox \"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:26:17.547568 kubelet[2534]: E0702 08:26:17.547421 2534 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Jul 2 08:26:17.547568 kubelet[2534]: E0702 08:26:17.547474 2534 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61"} Jul 2 08:26:17.547568 kubelet[2534]: E0702 08:26:17.547514 2534 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"84bf0464-f310-4576-b2d3-b41310345c86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 08:26:17.547568 kubelet[2534]: E0702 08:26:17.547538 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"84bf0464-f310-4576-b2d3-b41310345c86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7gjkg" podUID="84bf0464-f310-4576-b2d3-b41310345c86" Jul 2 08:26:17.548580 containerd[1450]: time="2024-07-02T08:26:17.548527765Z" level=error msg="StopPodSandbox for \"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\" failed" error="failed to destroy network for sandbox \"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 
08:26:17.548718 kubelet[2534]: E0702 08:26:17.548675 2534 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Jul 2 08:26:17.548764 kubelet[2534]: E0702 08:26:17.548724 2534 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16"} Jul 2 08:26:17.548764 kubelet[2534]: E0702 08:26:17.548752 2534 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c910d3f1-3683-45dd-a240-2bad751e8d45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 08:26:17.548833 kubelet[2534]: E0702 08:26:17.548781 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c910d3f1-3683-45dd-a240-2bad751e8d45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tvnp6" podUID="c910d3f1-3683-45dd-a240-2bad751e8d45" Jul 2 08:26:17.550702 
containerd[1450]: time="2024-07-02T08:26:17.550649953Z" level=error msg="StopPodSandbox for \"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\" failed" error="failed to destroy network for sandbox \"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 08:26:17.550952 kubelet[2534]: E0702 08:26:17.550812 2534 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Jul 2 08:26:17.550952 kubelet[2534]: E0702 08:26:17.550843 2534 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56"} Jul 2 08:26:17.550952 kubelet[2534]: E0702 08:26:17.550866 2534 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d162ea36-1704-488d-8dd5-39e7c8e42991\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 08:26:17.550952 kubelet[2534]: E0702 08:26:17.550883 2534 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d162ea36-1704-488d-8dd5-39e7c8e42991\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rljt6" podUID="d162ea36-1704-488d-8dd5-39e7c8e42991" Jul 2 08:26:18.270998 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61-shm.mount: Deactivated successfully. Jul 2 08:26:18.271099 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6-shm.mount: Deactivated successfully. Jul 2 08:26:20.203781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3405245024.mount: Deactivated successfully. Jul 2 08:26:20.497901 containerd[1450]: time="2024-07-02T08:26:20.497763388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:20.498584 containerd[1450]: time="2024-07-02T08:26:20.498544824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jul 2 08:26:20.499218 containerd[1450]: time="2024-07-02T08:26:20.499183086Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:20.516661 containerd[1450]: time="2024-07-02T08:26:20.516586133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:20.517969 containerd[1450]: time="2024-07-02T08:26:20.517812292Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 3.016778109s" Jul 2 08:26:20.517969 containerd[1450]: time="2024-07-02T08:26:20.517850416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jul 2 08:26:20.534742 containerd[1450]: time="2024-07-02T08:26:20.533650748Z" level=info msg="CreateContainer within sandbox \"3c56e7c6f4c735d1b07d4574d55b494dc82c42ee1ccc1dd4b43d2b3efa92ca45\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 08:26:20.635745 containerd[1450]: time="2024-07-02T08:26:20.635650480Z" level=info msg="CreateContainer within sandbox \"3c56e7c6f4c735d1b07d4574d55b494dc82c42ee1ccc1dd4b43d2b3efa92ca45\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"424322d321d4969e23201c9ce76e408e6ca4d47e7b2621ba553832e8c8d04304\"" Jul 2 08:26:20.636663 containerd[1450]: time="2024-07-02T08:26:20.636345587Z" level=info msg="StartContainer for \"424322d321d4969e23201c9ce76e408e6ca4d47e7b2621ba553832e8c8d04304\"" Jul 2 08:26:20.679855 systemd[1]: Started cri-containerd-424322d321d4969e23201c9ce76e408e6ca4d47e7b2621ba553832e8c8d04304.scope - libcontainer container 424322d321d4969e23201c9ce76e408e6ca4d47e7b2621ba553832e8c8d04304. Jul 2 08:26:20.705961 containerd[1450]: time="2024-07-02T08:26:20.705919854Z" level=info msg="StartContainer for \"424322d321d4969e23201c9ce76e408e6ca4d47e7b2621ba553832e8c8d04304\" returns successfully" Jul 2 08:26:20.918359 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 08:26:20.918469 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. 
All Rights Reserved. Jul 2 08:26:21.221180 systemd[1]: Started sshd@7-10.0.0.93:22-10.0.0.1:51240.service - OpenSSH per-connection server daemon (10.0.0.1:51240). Jul 2 08:26:21.258525 sshd[3547]: Accepted publickey for core from 10.0.0.1 port 51240 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:26:21.259785 sshd[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:26:21.263097 systemd-logind[1428]: New session 8 of user core. Jul 2 08:26:21.273894 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 08:26:21.407979 sshd[3547]: pam_unix(sshd:session): session closed for user core Jul 2 08:26:21.410460 systemd[1]: sshd@7-10.0.0.93:22-10.0.0.1:51240.service: Deactivated successfully. Jul 2 08:26:21.411974 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 08:26:21.413921 systemd-logind[1428]: Session 8 logged out. Waiting for processes to exit. Jul 2 08:26:21.414673 systemd-logind[1428]: Removed session 8. 
Jul 2 08:26:21.519177 kubelet[2534]: E0702 08:26:21.518999 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:21.528682 kubelet[2534]: I0702 08:26:21.528627 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l9pzc" podStartSLOduration=1.599707988 podStartE2EDuration="12.528615427s" podCreationTimestamp="2024-07-02 08:26:09 +0000 UTC" firstStartedPulling="2024-07-02 08:26:09.59113751 +0000 UTC m=+22.273367563" lastFinishedPulling="2024-07-02 08:26:20.520044909 +0000 UTC m=+33.202275002" observedRunningTime="2024-07-02 08:26:21.527363229 +0000 UTC m=+34.209593322" watchObservedRunningTime="2024-07-02 08:26:21.528615427 +0000 UTC m=+34.210845520" Jul 2 08:26:22.518669 kubelet[2534]: E0702 08:26:22.518638 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:26.421866 systemd[1]: Started sshd@8-10.0.0.93:22-10.0.0.1:51252.service - OpenSSH per-connection server daemon (10.0.0.1:51252). Jul 2 08:26:26.466166 sshd[3788]: Accepted publickey for core from 10.0.0.1 port 51252 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:26:26.467644 sshd[3788]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:26:26.477220 systemd-logind[1428]: New session 9 of user core. Jul 2 08:26:26.486908 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 08:26:26.605166 sshd[3788]: pam_unix(sshd:session): session closed for user core Jul 2 08:26:26.608432 systemd[1]: sshd@8-10.0.0.93:22-10.0.0.1:51252.service: Deactivated successfully. Jul 2 08:26:26.610345 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 08:26:26.611064 systemd-logind[1428]: Session 9 logged out. Waiting for processes to exit. 
Jul 2 08:26:26.611823 systemd-logind[1428]: Removed session 9. Jul 2 08:26:27.923768 kubelet[2534]: I0702 08:26:27.923521 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 08:26:27.924790 kubelet[2534]: E0702 08:26:27.924762 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:28.533189 kubelet[2534]: E0702 08:26:28.533153 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:28.579201 systemd-networkd[1384]: vxlan.calico: Link UP Jul 2 08:26:28.579214 systemd-networkd[1384]: vxlan.calico: Gained carrier Jul 2 08:26:29.396766 containerd[1450]: time="2024-07-02T08:26:29.396247290Z" level=info msg="StopPodSandbox for \"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\"" Jul 2 08:26:29.396766 containerd[1450]: time="2024-07-02T08:26:29.396389661Z" level=info msg="StopPodSandbox for \"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\"" Jul 2 08:26:29.628027 containerd[1450]: 2024-07-02 08:26:29.475 [INFO][4009] k8s.go 608: Cleaning up netns ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Jul 2 08:26:29.628027 containerd[1450]: 2024-07-02 08:26:29.475 [INFO][4009] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" iface="eth0" netns="/var/run/netns/cni-f4519764-02b4-a167-2ac1-a326269f2719" Jul 2 08:26:29.628027 containerd[1450]: 2024-07-02 08:26:29.476 [INFO][4009] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" iface="eth0" netns="/var/run/netns/cni-f4519764-02b4-a167-2ac1-a326269f2719" Jul 2 08:26:29.628027 containerd[1450]: 2024-07-02 08:26:29.476 [INFO][4009] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" iface="eth0" netns="/var/run/netns/cni-f4519764-02b4-a167-2ac1-a326269f2719" Jul 2 08:26:29.628027 containerd[1450]: 2024-07-02 08:26:29.476 [INFO][4009] k8s.go 615: Releasing IP address(es) ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Jul 2 08:26:29.628027 containerd[1450]: 2024-07-02 08:26:29.476 [INFO][4009] utils.go 188: Calico CNI releasing IP address ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Jul 2 08:26:29.628027 containerd[1450]: 2024-07-02 08:26:29.609 [INFO][4026] ipam_plugin.go 411: Releasing address using handleID ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" HandleID="k8s-pod-network.bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Workload="localhost-k8s-csi--node--driver--7gjkg-eth0" Jul 2 08:26:29.628027 containerd[1450]: 2024-07-02 08:26:29.609 [INFO][4026] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:26:29.628027 containerd[1450]: 2024-07-02 08:26:29.609 [INFO][4026] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:26:29.628027 containerd[1450]: 2024-07-02 08:26:29.622 [WARNING][4026] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" HandleID="k8s-pod-network.bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Workload="localhost-k8s-csi--node--driver--7gjkg-eth0" Jul 2 08:26:29.628027 containerd[1450]: 2024-07-02 08:26:29.622 [INFO][4026] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" HandleID="k8s-pod-network.bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Workload="localhost-k8s-csi--node--driver--7gjkg-eth0" Jul 2 08:26:29.628027 containerd[1450]: 2024-07-02 08:26:29.623 [INFO][4026] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:26:29.628027 containerd[1450]: 2024-07-02 08:26:29.626 [INFO][4009] k8s.go 621: Teardown processing complete. ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Jul 2 08:26:29.629495 containerd[1450]: time="2024-07-02T08:26:29.629267315Z" level=info msg="TearDown network for sandbox \"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\" successfully" Jul 2 08:26:29.629495 containerd[1450]: time="2024-07-02T08:26:29.629302237Z" level=info msg="StopPodSandbox for \"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\" returns successfully" Jul 2 08:26:29.630101 containerd[1450]: time="2024-07-02T08:26:29.630005410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gjkg,Uid:84bf0464-f310-4576-b2d3-b41310345c86,Namespace:calico-system,Attempt:1,}" Jul 2 08:26:29.630789 systemd[1]: run-netns-cni\x2df4519764\x2d02b4\x2da167\x2d2ac1\x2da326269f2719.mount: Deactivated successfully. 
Jul 2 08:26:29.640859 containerd[1450]: 2024-07-02 08:26:29.476 [INFO][4010] k8s.go 608: Cleaning up netns ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Jul 2 08:26:29.640859 containerd[1450]: 2024-07-02 08:26:29.476 [INFO][4010] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" iface="eth0" netns="/var/run/netns/cni-11eb554a-4222-0240-cfd7-549dacb13192" Jul 2 08:26:29.640859 containerd[1450]: 2024-07-02 08:26:29.476 [INFO][4010] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" iface="eth0" netns="/var/run/netns/cni-11eb554a-4222-0240-cfd7-549dacb13192" Jul 2 08:26:29.640859 containerd[1450]: 2024-07-02 08:26:29.476 [INFO][4010] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" iface="eth0" netns="/var/run/netns/cni-11eb554a-4222-0240-cfd7-549dacb13192" Jul 2 08:26:29.640859 containerd[1450]: 2024-07-02 08:26:29.476 [INFO][4010] k8s.go 615: Releasing IP address(es) ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Jul 2 08:26:29.640859 containerd[1450]: 2024-07-02 08:26:29.476 [INFO][4010] utils.go 188: Calico CNI releasing IP address ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Jul 2 08:26:29.640859 containerd[1450]: 2024-07-02 08:26:29.609 [INFO][4027] ipam_plugin.go 411: Releasing address using handleID ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" HandleID="k8s-pod-network.ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Workload="localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0" Jul 2 08:26:29.640859 containerd[1450]: 2024-07-02 08:26:29.609 [INFO][4027] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jul 2 08:26:29.640859 containerd[1450]: 2024-07-02 08:26:29.623 [INFO][4027] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:26:29.640859 containerd[1450]: 2024-07-02 08:26:29.635 [WARNING][4027] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" HandleID="k8s-pod-network.ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Workload="localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0" Jul 2 08:26:29.640859 containerd[1450]: 2024-07-02 08:26:29.635 [INFO][4027] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" HandleID="k8s-pod-network.ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Workload="localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0" Jul 2 08:26:29.640859 containerd[1450]: 2024-07-02 08:26:29.636 [INFO][4027] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:26:29.640859 containerd[1450]: 2024-07-02 08:26:29.638 [INFO][4010] k8s.go 621: Teardown processing complete. 
ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Jul 2 08:26:29.640859 containerd[1450]: time="2024-07-02T08:26:29.640754582Z" level=info msg="TearDown network for sandbox \"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\" successfully" Jul 2 08:26:29.640859 containerd[1450]: time="2024-07-02T08:26:29.640818066Z" level=info msg="StopPodSandbox for \"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\" returns successfully" Jul 2 08:26:29.642617 kubelet[2534]: E0702 08:26:29.641981 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:29.644356 containerd[1450]: time="2024-07-02T08:26:29.642309859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rljt6,Uid:d162ea36-1704-488d-8dd5-39e7c8e42991,Namespace:kube-system,Attempt:1,}" Jul 2 08:26:29.643156 systemd[1]: run-netns-cni\x2d11eb554a\x2d4222\x2d0240\x2dcfd7\x2d549dacb13192.mount: Deactivated successfully. 
Jul 2 08:26:29.784069 systemd-networkd[1384]: cali71f1e6852c9: Link UP Jul 2 08:26:29.784748 systemd-networkd[1384]: cali71f1e6852c9: Gained carrier Jul 2 08:26:29.799646 containerd[1450]: 2024-07-02 08:26:29.691 [INFO][4044] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7gjkg-eth0 csi-node-driver- calico-system 84bf0464-f310-4576-b2d3-b41310345c86 828 0 2024-07-02 08:26:09 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-7gjkg eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali71f1e6852c9 [] []}} ContainerID="7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b" Namespace="calico-system" Pod="csi-node-driver-7gjkg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gjkg-" Jul 2 08:26:29.799646 containerd[1450]: 2024-07-02 08:26:29.692 [INFO][4044] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b" Namespace="calico-system" Pod="csi-node-driver-7gjkg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gjkg-eth0" Jul 2 08:26:29.799646 containerd[1450]: 2024-07-02 08:26:29.729 [INFO][4068] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b" HandleID="k8s-pod-network.7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b" Workload="localhost-k8s-csi--node--driver--7gjkg-eth0" Jul 2 08:26:29.799646 containerd[1450]: 2024-07-02 08:26:29.747 [INFO][4068] ipam_plugin.go 264: Auto assigning IP ContainerID="7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b" 
HandleID="k8s-pod-network.7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b" Workload="localhost-k8s-csi--node--driver--7gjkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400058fb80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7gjkg", "timestamp":"2024-07-02 08:26:29.729339746 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:26:29.799646 containerd[1450]: 2024-07-02 08:26:29.747 [INFO][4068] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:26:29.799646 containerd[1450]: 2024-07-02 08:26:29.747 [INFO][4068] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:26:29.799646 containerd[1450]: 2024-07-02 08:26:29.747 [INFO][4068] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 08:26:29.799646 containerd[1450]: 2024-07-02 08:26:29.749 [INFO][4068] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b" host="localhost" Jul 2 08:26:29.799646 containerd[1450]: 2024-07-02 08:26:29.755 [INFO][4068] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 08:26:29.799646 containerd[1450]: 2024-07-02 08:26:29.759 [INFO][4068] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 08:26:29.799646 containerd[1450]: 2024-07-02 08:26:29.761 [INFO][4068] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 08:26:29.799646 containerd[1450]: 2024-07-02 08:26:29.765 [INFO][4068] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 08:26:29.799646 containerd[1450]: 2024-07-02 08:26:29.765 [INFO][4068] ipam.go 1180: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b" host="localhost" Jul 2 08:26:29.799646 containerd[1450]: 2024-07-02 08:26:29.766 [INFO][4068] ipam.go 1685: Creating new handle: k8s-pod-network.7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b Jul 2 08:26:29.799646 containerd[1450]: 2024-07-02 08:26:29.770 [INFO][4068] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b" host="localhost" Jul 2 08:26:29.799646 containerd[1450]: 2024-07-02 08:26:29.774 [INFO][4068] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b" host="localhost" Jul 2 08:26:29.799646 containerd[1450]: 2024-07-02 08:26:29.774 [INFO][4068] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b" host="localhost" Jul 2 08:26:29.799646 containerd[1450]: 2024-07-02 08:26:29.774 [INFO][4068] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 08:26:29.799646 containerd[1450]: 2024-07-02 08:26:29.774 [INFO][4068] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b" HandleID="k8s-pod-network.7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b" Workload="localhost-k8s-csi--node--driver--7gjkg-eth0" Jul 2 08:26:29.800326 containerd[1450]: 2024-07-02 08:26:29.778 [INFO][4044] k8s.go 386: Populated endpoint ContainerID="7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b" Namespace="calico-system" Pod="csi-node-driver-7gjkg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gjkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7gjkg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"84bf0464-f310-4576-b2d3-b41310345c86", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 26, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7gjkg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, 
InterfaceName:"cali71f1e6852c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:26:29.800326 containerd[1450]: 2024-07-02 08:26:29.778 [INFO][4044] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b" Namespace="calico-system" Pod="csi-node-driver-7gjkg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gjkg-eth0" Jul 2 08:26:29.800326 containerd[1450]: 2024-07-02 08:26:29.778 [INFO][4044] dataplane_linux.go 68: Setting the host side veth name to cali71f1e6852c9 ContainerID="7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b" Namespace="calico-system" Pod="csi-node-driver-7gjkg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gjkg-eth0" Jul 2 08:26:29.800326 containerd[1450]: 2024-07-02 08:26:29.784 [INFO][4044] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b" Namespace="calico-system" Pod="csi-node-driver-7gjkg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gjkg-eth0" Jul 2 08:26:29.800326 containerd[1450]: 2024-07-02 08:26:29.786 [INFO][4044] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b" Namespace="calico-system" Pod="csi-node-driver-7gjkg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gjkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7gjkg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"84bf0464-f310-4576-b2d3-b41310345c86", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 26, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b", Pod:"csi-node-driver-7gjkg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali71f1e6852c9", MAC:"b2:3f:79:89:6f:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:26:29.800326 containerd[1450]: 2024-07-02 08:26:29.797 [INFO][4044] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b" Namespace="calico-system" Pod="csi-node-driver-7gjkg" WorkloadEndpoint="localhost-k8s-csi--node--driver--7gjkg-eth0" Jul 2 08:26:29.820655 systemd-networkd[1384]: caliac157317798: Link UP Jul 2 08:26:29.821161 systemd-networkd[1384]: caliac157317798: Gained carrier Jul 2 08:26:29.825358 containerd[1450]: time="2024-07-02T08:26:29.824035772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:26:29.825358 containerd[1450]: time="2024-07-02T08:26:29.824135900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:26:29.825358 containerd[1450]: time="2024-07-02T08:26:29.824167022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:26:29.825358 containerd[1450]: time="2024-07-02T08:26:29.824203705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:26:29.833836 containerd[1450]: 2024-07-02 08:26:29.724 [INFO][4057] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0 coredns-7db6d8ff4d- kube-system d162ea36-1704-488d-8dd5-39e7c8e42991 829 0 2024-07-02 08:26:03 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-rljt6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliac157317798 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rljt6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rljt6-" Jul 2 08:26:29.833836 containerd[1450]: 2024-07-02 08:26:29.725 [INFO][4057] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rljt6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0" Jul 2 08:26:29.833836 containerd[1450]: 2024-07-02 08:26:29.752 [INFO][4077] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783" HandleID="k8s-pod-network.805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783" 
Workload="localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0" Jul 2 08:26:29.833836 containerd[1450]: 2024-07-02 08:26:29.766 [INFO][4077] ipam_plugin.go 264: Auto assigning IP ContainerID="805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783" HandleID="k8s-pod-network.805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783" Workload="localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e8350), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-rljt6", "timestamp":"2024-07-02 08:26:29.752891604 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:26:29.833836 containerd[1450]: 2024-07-02 08:26:29.766 [INFO][4077] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:26:29.833836 containerd[1450]: 2024-07-02 08:26:29.774 [INFO][4077] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 08:26:29.833836 containerd[1450]: 2024-07-02 08:26:29.774 [INFO][4077] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 08:26:29.833836 containerd[1450]: 2024-07-02 08:26:29.777 [INFO][4077] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783" host="localhost" Jul 2 08:26:29.833836 containerd[1450]: 2024-07-02 08:26:29.787 [INFO][4077] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 08:26:29.833836 containerd[1450]: 2024-07-02 08:26:29.795 [INFO][4077] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 08:26:29.833836 containerd[1450]: 2024-07-02 08:26:29.798 [INFO][4077] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 08:26:29.833836 containerd[1450]: 2024-07-02 08:26:29.802 [INFO][4077] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 08:26:29.833836 containerd[1450]: 2024-07-02 08:26:29.802 [INFO][4077] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783" host="localhost" Jul 2 08:26:29.833836 containerd[1450]: 2024-07-02 08:26:29.804 [INFO][4077] ipam.go 1685: Creating new handle: k8s-pod-network.805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783 Jul 2 08:26:29.833836 containerd[1450]: 2024-07-02 08:26:29.807 [INFO][4077] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783" host="localhost" Jul 2 08:26:29.833836 containerd[1450]: 2024-07-02 08:26:29.816 [INFO][4077] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783" host="localhost" Jul 2 
08:26:29.833836 containerd[1450]: 2024-07-02 08:26:29.816 [INFO][4077] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783" host="localhost" Jul 2 08:26:29.833836 containerd[1450]: 2024-07-02 08:26:29.816 [INFO][4077] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:26:29.833836 containerd[1450]: 2024-07-02 08:26:29.816 [INFO][4077] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783" HandleID="k8s-pod-network.805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783" Workload="localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0" Jul 2 08:26:29.834365 containerd[1450]: 2024-07-02 08:26:29.818 [INFO][4057] k8s.go 386: Populated endpoint ContainerID="805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rljt6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d162ea36-1704-488d-8dd5-39e7c8e42991", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 26, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-7db6d8ff4d-rljt6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac157317798", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:26:29.834365 containerd[1450]: 2024-07-02 08:26:29.819 [INFO][4057] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rljt6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0" Jul 2 08:26:29.834365 containerd[1450]: 2024-07-02 08:26:29.819 [INFO][4057] dataplane_linux.go 68: Setting the host side veth name to caliac157317798 ContainerID="805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rljt6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0" Jul 2 08:26:29.834365 containerd[1450]: 2024-07-02 08:26:29.821 [INFO][4057] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rljt6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0" Jul 2 08:26:29.834365 containerd[1450]: 2024-07-02 08:26:29.821 [INFO][4057] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rljt6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d162ea36-1704-488d-8dd5-39e7c8e42991", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 26, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783", Pod:"coredns-7db6d8ff4d-rljt6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac157317798", MAC:"e2:0e:85:d8:13:6d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:26:29.834365 containerd[1450]: 2024-07-02 08:26:29.830 [INFO][4057] k8s.go 500: Wrote updated endpoint to datastore ContainerID="805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rljt6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0" Jul 2 08:26:29.856571 containerd[1450]: time="2024-07-02T08:26:29.856285286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:26:29.856571 containerd[1450]: time="2024-07-02T08:26:29.856361532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:26:29.856571 containerd[1450]: time="2024-07-02T08:26:29.856375493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:26:29.856571 containerd[1450]: time="2024-07-02T08:26:29.856384933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:26:29.861886 systemd[1]: Started cri-containerd-7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b.scope - libcontainer container 7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b. Jul 2 08:26:29.867393 systemd[1]: Started cri-containerd-805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783.scope - libcontainer container 805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783. 
Jul 2 08:26:29.874701 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 08:26:29.879930 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 08:26:29.891123 containerd[1450]: time="2024-07-02T08:26:29.890974824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7gjkg,Uid:84bf0464-f310-4576-b2d3-b41310345c86,Namespace:calico-system,Attempt:1,} returns sandbox id \"7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b\"" Jul 2 08:26:29.892749 containerd[1450]: time="2024-07-02T08:26:29.892646790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 08:26:29.902283 containerd[1450]: time="2024-07-02T08:26:29.902248954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rljt6,Uid:d162ea36-1704-488d-8dd5-39e7c8e42991,Namespace:kube-system,Attempt:1,} returns sandbox id \"805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783\"" Jul 2 08:26:29.902941 kubelet[2534]: E0702 08:26:29.902918 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:29.905545 containerd[1450]: time="2024-07-02T08:26:29.905449476Z" level=info msg="CreateContainer within sandbox \"805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:26:29.918817 containerd[1450]: time="2024-07-02T08:26:29.918764761Z" level=info msg="CreateContainer within sandbox \"805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"25e12c6d9a465726e12414b0c6041f195ab58ec1a311d5a02c32ec2b0a2c1a80\"" Jul 2 08:26:29.919317 containerd[1450]: time="2024-07-02T08:26:29.919270759Z" level=info 
msg="StartContainer for \"25e12c6d9a465726e12414b0c6041f195ab58ec1a311d5a02c32ec2b0a2c1a80\"" Jul 2 08:26:29.943877 systemd[1]: Started cri-containerd-25e12c6d9a465726e12414b0c6041f195ab58ec1a311d5a02c32ec2b0a2c1a80.scope - libcontainer container 25e12c6d9a465726e12414b0c6041f195ab58ec1a311d5a02c32ec2b0a2c1a80. Jul 2 08:26:29.966745 containerd[1450]: time="2024-07-02T08:26:29.966683417Z" level=info msg="StartContainer for \"25e12c6d9a465726e12414b0c6041f195ab58ec1a311d5a02c32ec2b0a2c1a80\" returns successfully" Jul 2 08:26:30.185891 systemd-networkd[1384]: vxlan.calico: Gained IPv6LL Jul 2 08:26:30.541674 kubelet[2534]: E0702 08:26:30.541332 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:30.551284 kubelet[2534]: I0702 08:26:30.551118 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rljt6" podStartSLOduration=27.551075497 podStartE2EDuration="27.551075497s" podCreationTimestamp="2024-07-02 08:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:26:30.549845926 +0000 UTC m=+43.232076019" watchObservedRunningTime="2024-07-02 08:26:30.551075497 +0000 UTC m=+43.233305550" Jul 2 08:26:30.831120 containerd[1450]: time="2024-07-02T08:26:30.831019504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:30.831934 containerd[1450]: time="2024-07-02T08:26:30.831797361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jul 2 08:26:30.832771 containerd[1450]: time="2024-07-02T08:26:30.832603581Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:30.834899 containerd[1450]: time="2024-07-02T08:26:30.834849106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:30.835555 containerd[1450]: time="2024-07-02T08:26:30.835525596Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 942.840164ms" Jul 2 08:26:30.835609 containerd[1450]: time="2024-07-02T08:26:30.835559999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jul 2 08:26:30.838893 containerd[1450]: time="2024-07-02T08:26:30.838854202Z" level=info msg="CreateContainer within sandbox \"7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 08:26:30.849675 containerd[1450]: time="2024-07-02T08:26:30.849580433Z" level=info msg="CreateContainer within sandbox \"7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c38a8e5c793209f5152859f5b0911cc2d220610985e065d6c237d474d2957550\"" Jul 2 08:26:30.850282 containerd[1450]: time="2024-07-02T08:26:30.850257403Z" level=info msg="StartContainer for \"c38a8e5c793209f5152859f5b0911cc2d220610985e065d6c237d474d2957550\"" Jul 2 08:26:30.878843 systemd[1]: Started cri-containerd-c38a8e5c793209f5152859f5b0911cc2d220610985e065d6c237d474d2957550.scope - libcontainer container 
c38a8e5c793209f5152859f5b0911cc2d220610985e065d6c237d474d2957550. Jul 2 08:26:30.889976 systemd-networkd[1384]: cali71f1e6852c9: Gained IPv6LL Jul 2 08:26:30.903955 containerd[1450]: time="2024-07-02T08:26:30.903914680Z" level=info msg="StartContainer for \"c38a8e5c793209f5152859f5b0911cc2d220610985e065d6c237d474d2957550\" returns successfully" Jul 2 08:26:30.906825 containerd[1450]: time="2024-07-02T08:26:30.906613279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 08:26:31.392123 containerd[1450]: time="2024-07-02T08:26:31.392077580Z" level=info msg="StopPodSandbox for \"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\"" Jul 2 08:26:31.462305 containerd[1450]: 2024-07-02 08:26:31.431 [INFO][4295] k8s.go 608: Cleaning up netns ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Jul 2 08:26:31.462305 containerd[1450]: 2024-07-02 08:26:31.431 [INFO][4295] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" iface="eth0" netns="/var/run/netns/cni-37482c15-3cb6-7a82-89b1-f9daa41c405a" Jul 2 08:26:31.462305 containerd[1450]: 2024-07-02 08:26:31.432 [INFO][4295] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" iface="eth0" netns="/var/run/netns/cni-37482c15-3cb6-7a82-89b1-f9daa41c405a" Jul 2 08:26:31.462305 containerd[1450]: 2024-07-02 08:26:31.432 [INFO][4295] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" iface="eth0" netns="/var/run/netns/cni-37482c15-3cb6-7a82-89b1-f9daa41c405a" Jul 2 08:26:31.462305 containerd[1450]: 2024-07-02 08:26:31.432 [INFO][4295] k8s.go 615: Releasing IP address(es) ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Jul 2 08:26:31.462305 containerd[1450]: 2024-07-02 08:26:31.432 [INFO][4295] utils.go 188: Calico CNI releasing IP address ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Jul 2 08:26:31.462305 containerd[1450]: 2024-07-02 08:26:31.450 [INFO][4303] ipam_plugin.go 411: Releasing address using handleID ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" HandleID="k8s-pod-network.70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Workload="localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0" Jul 2 08:26:31.462305 containerd[1450]: 2024-07-02 08:26:31.450 [INFO][4303] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:26:31.462305 containerd[1450]: 2024-07-02 08:26:31.450 [INFO][4303] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:26:31.462305 containerd[1450]: 2024-07-02 08:26:31.458 [WARNING][4303] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" HandleID="k8s-pod-network.70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Workload="localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0" Jul 2 08:26:31.462305 containerd[1450]: 2024-07-02 08:26:31.458 [INFO][4303] ipam_plugin.go 439: Releasing address using workloadID ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" HandleID="k8s-pod-network.70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Workload="localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0" Jul 2 08:26:31.462305 containerd[1450]: 2024-07-02 08:26:31.459 [INFO][4303] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:26:31.462305 containerd[1450]: 2024-07-02 08:26:31.460 [INFO][4295] k8s.go 621: Teardown processing complete. ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Jul 2 08:26:31.464107 containerd[1450]: time="2024-07-02T08:26:31.462391773Z" level=info msg="TearDown network for sandbox \"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\" successfully" Jul 2 08:26:31.464107 containerd[1450]: time="2024-07-02T08:26:31.462417095Z" level=info msg="StopPodSandbox for \"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\" returns successfully" Jul 2 08:26:31.464107 containerd[1450]: time="2024-07-02T08:26:31.463088343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tvnp6,Uid:c910d3f1-3683-45dd-a240-2bad751e8d45,Namespace:kube-system,Attempt:1,}" Jul 2 08:26:31.464183 kubelet[2534]: E0702 08:26:31.462724 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:31.467378 systemd[1]: run-netns-cni\x2d37482c15\x2d3cb6\x2d7a82\x2d89b1\x2df9daa41c405a.mount: Deactivated successfully. 
Jul 2 08:26:31.544614 kubelet[2534]: E0702 08:26:31.544583 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:31.621077 systemd-networkd[1384]: calif7c8c3c3156: Link UP Jul 2 08:26:31.621275 systemd-networkd[1384]: calif7c8c3c3156: Gained carrier Jul 2 08:26:31.623076 systemd[1]: Started sshd@9-10.0.0.93:22-10.0.0.1:40110.service - OpenSSH per-connection server daemon (10.0.0.1:40110). Jul 2 08:26:31.639017 containerd[1450]: 2024-07-02 08:26:31.527 [INFO][4310] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0 coredns-7db6d8ff4d- kube-system c910d3f1-3683-45dd-a240-2bad751e8d45 859 0 2024-07-02 08:26:03 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-tvnp6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif7c8c3c3156 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tvnp6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tvnp6-" Jul 2 08:26:31.639017 containerd[1450]: 2024-07-02 08:26:31.527 [INFO][4310] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tvnp6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0" Jul 2 08:26:31.639017 containerd[1450]: 2024-07-02 08:26:31.569 [INFO][4324] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9" 
HandleID="k8s-pod-network.79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9" Workload="localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0" Jul 2 08:26:31.639017 containerd[1450]: 2024-07-02 08:26:31.583 [INFO][4324] ipam_plugin.go 264: Auto assigning IP ContainerID="79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9" HandleID="k8s-pod-network.79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9" Workload="localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001627d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-tvnp6", "timestamp":"2024-07-02 08:26:31.569052669 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:26:31.639017 containerd[1450]: 2024-07-02 08:26:31.583 [INFO][4324] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:26:31.639017 containerd[1450]: 2024-07-02 08:26:31.583 [INFO][4324] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 08:26:31.639017 containerd[1450]: 2024-07-02 08:26:31.583 [INFO][4324] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 08:26:31.639017 containerd[1450]: 2024-07-02 08:26:31.585 [INFO][4324] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9" host="localhost" Jul 2 08:26:31.639017 containerd[1450]: 2024-07-02 08:26:31.590 [INFO][4324] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 08:26:31.639017 containerd[1450]: 2024-07-02 08:26:31.598 [INFO][4324] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 08:26:31.639017 containerd[1450]: 2024-07-02 08:26:31.601 [INFO][4324] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 08:26:31.639017 containerd[1450]: 2024-07-02 08:26:31.603 [INFO][4324] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 08:26:31.639017 containerd[1450]: 2024-07-02 08:26:31.603 [INFO][4324] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9" host="localhost" Jul 2 08:26:31.639017 containerd[1450]: 2024-07-02 08:26:31.604 [INFO][4324] ipam.go 1685: Creating new handle: k8s-pod-network.79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9 Jul 2 08:26:31.639017 containerd[1450]: 2024-07-02 08:26:31.608 [INFO][4324] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9" host="localhost" Jul 2 08:26:31.639017 containerd[1450]: 2024-07-02 08:26:31.613 [INFO][4324] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9" host="localhost" Jul 2 
08:26:31.639017 containerd[1450]: 2024-07-02 08:26:31.613 [INFO][4324] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9" host="localhost" Jul 2 08:26:31.639017 containerd[1450]: 2024-07-02 08:26:31.613 [INFO][4324] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:26:31.639017 containerd[1450]: 2024-07-02 08:26:31.613 [INFO][4324] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9" HandleID="k8s-pod-network.79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9" Workload="localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0" Jul 2 08:26:31.639627 containerd[1450]: 2024-07-02 08:26:31.616 [INFO][4310] k8s.go 386: Populated endpoint ContainerID="79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tvnp6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c910d3f1-3683-45dd-a240-2bad751e8d45", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 26, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-7db6d8ff4d-tvnp6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7c8c3c3156", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:26:31.639627 containerd[1450]: 2024-07-02 08:26:31.616 [INFO][4310] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tvnp6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0" Jul 2 08:26:31.639627 containerd[1450]: 2024-07-02 08:26:31.616 [INFO][4310] dataplane_linux.go 68: Setting the host side veth name to calif7c8c3c3156 ContainerID="79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tvnp6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0" Jul 2 08:26:31.639627 containerd[1450]: 2024-07-02 08:26:31.621 [INFO][4310] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tvnp6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0" Jul 2 08:26:31.639627 containerd[1450]: 2024-07-02 08:26:31.622 [INFO][4310] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tvnp6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c910d3f1-3683-45dd-a240-2bad751e8d45", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 26, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9", Pod:"coredns-7db6d8ff4d-tvnp6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7c8c3c3156", MAC:"ea:11:5b:99:0c:9a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:26:31.639627 containerd[1450]: 2024-07-02 08:26:31.636 [INFO][4310] k8s.go 500: Wrote updated endpoint to datastore ContainerID="79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tvnp6" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0" Jul 2 08:26:31.672898 containerd[1450]: time="2024-07-02T08:26:31.670846134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:26:31.673028 sshd[4338]: Accepted publickey for core from 10.0.0.1 port 40110 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:26:31.673976 containerd[1450]: time="2024-07-02T08:26:31.673630255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:26:31.673976 containerd[1450]: time="2024-07-02T08:26:31.673669938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:26:31.673976 containerd[1450]: time="2024-07-02T08:26:31.673694020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:26:31.674039 sshd[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:26:31.685702 systemd-logind[1428]: New session 10 of user core. Jul 2 08:26:31.695906 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 08:26:31.705956 systemd[1]: Started cri-containerd-79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9.scope - libcontainer container 79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9. 
Jul 2 08:26:31.723471 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 08:26:31.772309 containerd[1450]: time="2024-07-02T08:26:31.771343145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tvnp6,Uid:c910d3f1-3683-45dd-a240-2bad751e8d45,Namespace:kube-system,Attempt:1,} returns sandbox id \"79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9\"" Jul 2 08:26:31.774419 kubelet[2534]: E0702 08:26:31.774388 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:31.777035 containerd[1450]: time="2024-07-02T08:26:31.776858783Z" level=info msg="CreateContainer within sandbox \"79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 08:26:31.786864 systemd-networkd[1384]: caliac157317798: Gained IPv6LL Jul 2 08:26:31.807990 containerd[1450]: time="2024-07-02T08:26:31.807943546Z" level=info msg="CreateContainer within sandbox \"79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a3e6e6fc1b09533f4cadd6b3d80897a86d5585bca8a21916922b9c7e5c1c3447\"" Jul 2 08:26:31.809227 containerd[1450]: time="2024-07-02T08:26:31.808664278Z" level=info msg="StartContainer for \"a3e6e6fc1b09533f4cadd6b3d80897a86d5585bca8a21916922b9c7e5c1c3447\"" Jul 2 08:26:31.834901 systemd[1]: Started cri-containerd-a3e6e6fc1b09533f4cadd6b3d80897a86d5585bca8a21916922b9c7e5c1c3447.scope - libcontainer container a3e6e6fc1b09533f4cadd6b3d80897a86d5585bca8a21916922b9c7e5c1c3447. 
Jul 2 08:26:31.876807 containerd[1450]: time="2024-07-02T08:26:31.876763432Z" level=info msg="StartContainer for \"a3e6e6fc1b09533f4cadd6b3d80897a86d5585bca8a21916922b9c7e5c1c3447\" returns successfully" Jul 2 08:26:31.884980 sshd[4338]: pam_unix(sshd:session): session closed for user core Jul 2 08:26:31.895309 systemd[1]: sshd@9-10.0.0.93:22-10.0.0.1:40110.service: Deactivated successfully. Jul 2 08:26:31.899446 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 08:26:31.900559 systemd-logind[1428]: Session 10 logged out. Waiting for processes to exit. Jul 2 08:26:31.908029 systemd[1]: Started sshd@10-10.0.0.93:22-10.0.0.1:40120.service - OpenSSH per-connection server daemon (10.0.0.1:40120). Jul 2 08:26:31.908847 systemd-logind[1428]: Removed session 10. Jul 2 08:26:31.941415 sshd[4452]: Accepted publickey for core from 10.0.0.1 port 40120 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:26:31.942780 sshd[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:26:31.946757 systemd-logind[1428]: New session 11 of user core. Jul 2 08:26:31.951879 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 2 08:26:31.984363 containerd[1450]: time="2024-07-02T08:26:31.984311832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:31.985133 containerd[1450]: time="2024-07-02T08:26:31.984938677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jul 2 08:26:31.985863 containerd[1450]: time="2024-07-02T08:26:31.985828741Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:31.988115 containerd[1450]: time="2024-07-02T08:26:31.987814125Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:31.988666 containerd[1450]: time="2024-07-02T08:26:31.988553018Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.081897415s" Jul 2 08:26:31.988666 containerd[1450]: time="2024-07-02T08:26:31.988586700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jul 2 08:26:31.991919 containerd[1450]: time="2024-07-02T08:26:31.991886299Z" level=info msg="CreateContainer within sandbox \"7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 08:26:32.002188 containerd[1450]: time="2024-07-02T08:26:32.002142358Z" level=info msg="CreateContainer within sandbox \"7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d64bc58161daf396c78a16250beca5d5fe9528c259a918018115f841e6c043c0\"" Jul 2 08:26:32.002538 containerd[1450]: time="2024-07-02T08:26:32.002509744Z" level=info msg="StartContainer for \"d64bc58161daf396c78a16250beca5d5fe9528c259a918018115f841e6c043c0\"" Jul 2 08:26:32.031965 systemd[1]: Started cri-containerd-d64bc58161daf396c78a16250beca5d5fe9528c259a918018115f841e6c043c0.scope - libcontainer container d64bc58161daf396c78a16250beca5d5fe9528c259a918018115f841e6c043c0. Jul 2 08:26:32.061236 containerd[1450]: time="2024-07-02T08:26:32.060907070Z" level=info msg="StartContainer for \"d64bc58161daf396c78a16250beca5d5fe9528c259a918018115f841e6c043c0\" returns successfully" Jul 2 08:26:32.127050 sshd[4452]: pam_unix(sshd:session): session closed for user core Jul 2 08:26:32.135389 systemd[1]: sshd@10-10.0.0.93:22-10.0.0.1:40120.service: Deactivated successfully. Jul 2 08:26:32.142391 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 08:26:32.148856 systemd-logind[1428]: Session 11 logged out. Waiting for processes to exit. Jul 2 08:26:32.159994 systemd[1]: Started sshd@11-10.0.0.93:22-10.0.0.1:40130.service - OpenSSH per-connection server daemon (10.0.0.1:40130). Jul 2 08:26:32.161172 systemd-logind[1428]: Removed session 11. Jul 2 08:26:32.194960 sshd[4506]: Accepted publickey for core from 10.0.0.1 port 40130 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:26:32.196636 sshd[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:26:32.201003 systemd-logind[1428]: New session 12 of user core. 
Jul 2 08:26:32.206902 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 08:26:32.319631 sshd[4506]: pam_unix(sshd:session): session closed for user core Jul 2 08:26:32.322773 systemd[1]: sshd@11-10.0.0.93:22-10.0.0.1:40130.service: Deactivated successfully. Jul 2 08:26:32.324614 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 08:26:32.325196 systemd-logind[1428]: Session 12 logged out. Waiting for processes to exit. Jul 2 08:26:32.326018 systemd-logind[1428]: Removed session 12. Jul 2 08:26:32.391910 containerd[1450]: time="2024-07-02T08:26:32.391834451Z" level=info msg="StopPodSandbox for \"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\"" Jul 2 08:26:32.460813 containerd[1450]: 2024-07-02 08:26:32.429 [INFO][4536] k8s.go 608: Cleaning up netns ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Jul 2 08:26:32.460813 containerd[1450]: 2024-07-02 08:26:32.429 [INFO][4536] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" iface="eth0" netns="/var/run/netns/cni-421c45f8-f52a-356b-bafa-31af9bfa8ec7" Jul 2 08:26:32.460813 containerd[1450]: 2024-07-02 08:26:32.429 [INFO][4536] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" iface="eth0" netns="/var/run/netns/cni-421c45f8-f52a-356b-bafa-31af9bfa8ec7" Jul 2 08:26:32.460813 containerd[1450]: 2024-07-02 08:26:32.430 [INFO][4536] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" iface="eth0" netns="/var/run/netns/cni-421c45f8-f52a-356b-bafa-31af9bfa8ec7" Jul 2 08:26:32.460813 containerd[1450]: 2024-07-02 08:26:32.430 [INFO][4536] k8s.go 615: Releasing IP address(es) ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Jul 2 08:26:32.460813 containerd[1450]: 2024-07-02 08:26:32.430 [INFO][4536] utils.go 188: Calico CNI releasing IP address ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Jul 2 08:26:32.460813 containerd[1450]: 2024-07-02 08:26:32.448 [INFO][4544] ipam_plugin.go 411: Releasing address using handleID ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" HandleID="k8s-pod-network.4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Workload="localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0" Jul 2 08:26:32.460813 containerd[1450]: 2024-07-02 08:26:32.448 [INFO][4544] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:26:32.460813 containerd[1450]: 2024-07-02 08:26:32.448 [INFO][4544] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:26:32.460813 containerd[1450]: 2024-07-02 08:26:32.456 [WARNING][4544] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" HandleID="k8s-pod-network.4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Workload="localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0" Jul 2 08:26:32.460813 containerd[1450]: 2024-07-02 08:26:32.456 [INFO][4544] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" HandleID="k8s-pod-network.4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Workload="localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0" Jul 2 08:26:32.460813 containerd[1450]: 2024-07-02 08:26:32.457 [INFO][4544] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:26:32.460813 containerd[1450]: 2024-07-02 08:26:32.459 [INFO][4536] k8s.go 621: Teardown processing complete. ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Jul 2 08:26:32.461169 containerd[1450]: time="2024-07-02T08:26:32.460962935Z" level=info msg="TearDown network for sandbox \"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\" successfully" Jul 2 08:26:32.461169 containerd[1450]: time="2024-07-02T08:26:32.460992777Z" level=info msg="StopPodSandbox for \"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\" returns successfully" Jul 2 08:26:32.461864 containerd[1450]: time="2024-07-02T08:26:32.461549457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-855cc674f5-jdjkz,Uid:d914a969-86d7-48ab-ad5e-c9413cd5d500,Namespace:calico-system,Attempt:1,}" Jul 2 08:26:32.483890 kubelet[2534]: I0702 08:26:32.483868 2534 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 08:26:32.485798 kubelet[2534]: I0702 08:26:32.484776 2534 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: 
csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 08:26:32.555556 kubelet[2534]: E0702 08:26:32.555524 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:32.555702 kubelet[2534]: E0702 08:26:32.555529 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:32.558826 kubelet[2534]: I0702 08:26:32.558643 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7gjkg" podStartSLOduration=21.461596125 podStartE2EDuration="23.558629956s" podCreationTimestamp="2024-07-02 08:26:09 +0000 UTC" firstStartedPulling="2024-07-02 08:26:29.892102029 +0000 UTC m=+42.574332122" lastFinishedPulling="2024-07-02 08:26:31.9891359 +0000 UTC m=+44.671365953" observedRunningTime="2024-07-02 08:26:32.558523068 +0000 UTC m=+45.240753161" watchObservedRunningTime="2024-07-02 08:26:32.558629956 +0000 UTC m=+45.240860049" Jul 2 08:26:32.577455 systemd-networkd[1384]: calic65c659a5c0: Link UP Jul 2 08:26:32.577930 systemd-networkd[1384]: calic65c659a5c0: Gained carrier Jul 2 08:26:32.587067 kubelet[2534]: I0702 08:26:32.587003 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tvnp6" podStartSLOduration=29.586983999 podStartE2EDuration="29.586983999s" podCreationTimestamp="2024-07-02 08:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 08:26:32.572251278 +0000 UTC m=+45.254481371" watchObservedRunningTime="2024-07-02 08:26:32.586983999 +0000 UTC m=+45.269214092" Jul 2 08:26:32.594202 containerd[1450]: 2024-07-02 08:26:32.504 [INFO][4552] plugin.go 326: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0 calico-kube-controllers-855cc674f5- calico-system d914a969-86d7-48ab-ad5e-c9413cd5d500 903 0 2024-07-02 08:26:09 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:855cc674f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-855cc674f5-jdjkz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic65c659a5c0 [] []}} ContainerID="615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970" Namespace="calico-system" Pod="calico-kube-controllers-855cc674f5-jdjkz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-" Jul 2 08:26:32.594202 containerd[1450]: 2024-07-02 08:26:32.504 [INFO][4552] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970" Namespace="calico-system" Pod="calico-kube-controllers-855cc674f5-jdjkz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0" Jul 2 08:26:32.594202 containerd[1450]: 2024-07-02 08:26:32.529 [INFO][4567] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970" HandleID="k8s-pod-network.615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970" Workload="localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0" Jul 2 08:26:32.594202 containerd[1450]: 2024-07-02 08:26:32.539 [INFO][4567] ipam_plugin.go 264: Auto assigning IP ContainerID="615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970" HandleID="k8s-pod-network.615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970" 
Workload="localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d8f00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-855cc674f5-jdjkz", "timestamp":"2024-07-02 08:26:32.529347407 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 08:26:32.594202 containerd[1450]: 2024-07-02 08:26:32.539 [INFO][4567] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:26:32.594202 containerd[1450]: 2024-07-02 08:26:32.539 [INFO][4567] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:26:32.594202 containerd[1450]: 2024-07-02 08:26:32.539 [INFO][4567] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 08:26:32.594202 containerd[1450]: 2024-07-02 08:26:32.540 [INFO][4567] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970" host="localhost" Jul 2 08:26:32.594202 containerd[1450]: 2024-07-02 08:26:32.543 [INFO][4567] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 08:26:32.594202 containerd[1450]: 2024-07-02 08:26:32.547 [INFO][4567] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 08:26:32.594202 containerd[1450]: 2024-07-02 08:26:32.550 [INFO][4567] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 08:26:32.594202 containerd[1450]: 2024-07-02 08:26:32.552 [INFO][4567] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 08:26:32.594202 containerd[1450]: 2024-07-02 08:26:32.552 [INFO][4567] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970" host="localhost" Jul 2 08:26:32.594202 containerd[1450]: 2024-07-02 08:26:32.554 [INFO][4567] ipam.go 1685: Creating new handle: k8s-pod-network.615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970 Jul 2 08:26:32.594202 containerd[1450]: 2024-07-02 08:26:32.558 [INFO][4567] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970" host="localhost" Jul 2 08:26:32.594202 containerd[1450]: 2024-07-02 08:26:32.567 [INFO][4567] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970" host="localhost" Jul 2 08:26:32.594202 containerd[1450]: 2024-07-02 08:26:32.567 [INFO][4567] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970" host="localhost" Jul 2 08:26:32.594202 containerd[1450]: 2024-07-02 08:26:32.567 [INFO][4567] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 08:26:32.594202 containerd[1450]: 2024-07-02 08:26:32.567 [INFO][4567] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970" HandleID="k8s-pod-network.615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970" Workload="localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0" Jul 2 08:26:32.594850 containerd[1450]: 2024-07-02 08:26:32.572 [INFO][4552] k8s.go 386: Populated endpoint ContainerID="615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970" Namespace="calico-system" Pod="calico-kube-controllers-855cc674f5-jdjkz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0", GenerateName:"calico-kube-controllers-855cc674f5-", Namespace:"calico-system", SelfLink:"", UID:"d914a969-86d7-48ab-ad5e-c9413cd5d500", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 26, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"855cc674f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-855cc674f5-jdjkz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic65c659a5c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:26:32.594850 containerd[1450]: 2024-07-02 08:26:32.574 [INFO][4552] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970" Namespace="calico-system" Pod="calico-kube-controllers-855cc674f5-jdjkz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0" Jul 2 08:26:32.594850 containerd[1450]: 2024-07-02 08:26:32.574 [INFO][4552] dataplane_linux.go 68: Setting the host side veth name to calic65c659a5c0 ContainerID="615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970" Namespace="calico-system" Pod="calico-kube-controllers-855cc674f5-jdjkz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0" Jul 2 08:26:32.594850 containerd[1450]: 2024-07-02 08:26:32.578 [INFO][4552] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970" Namespace="calico-system" Pod="calico-kube-controllers-855cc674f5-jdjkz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0" Jul 2 08:26:32.594850 containerd[1450]: 2024-07-02 08:26:32.578 [INFO][4552] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970" Namespace="calico-system" Pod="calico-kube-controllers-855cc674f5-jdjkz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0", 
GenerateName:"calico-kube-controllers-855cc674f5-", Namespace:"calico-system", SelfLink:"", UID:"d914a969-86d7-48ab-ad5e-c9413cd5d500", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 26, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"855cc674f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970", Pod:"calico-kube-controllers-855cc674f5-jdjkz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic65c659a5c0", MAC:"7a:b6:ff:05:d4:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:26:32.594850 containerd[1450]: 2024-07-02 08:26:32.587 [INFO][4552] k8s.go 500: Wrote updated endpoint to datastore ContainerID="615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970" Namespace="calico-system" Pod="calico-kube-controllers-855cc674f5-jdjkz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0" Jul 2 08:26:32.614792 containerd[1450]: time="2024-07-02T08:26:32.613242294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 08:26:32.614792 containerd[1450]: time="2024-07-02T08:26:32.614221124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:26:32.614792 containerd[1450]: time="2024-07-02T08:26:32.614234925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 08:26:32.614792 containerd[1450]: time="2024-07-02T08:26:32.614243725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 08:26:32.632286 systemd[1]: Started cri-containerd-615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970.scope - libcontainer container 615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970. Jul 2 08:26:32.636882 systemd[1]: run-netns-cni\x2d421c45f8\x2df52a\x2d356b\x2dbafa\x2d31af9bfa8ec7.mount: Deactivated successfully. 
Jul 2 08:26:32.642645 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 08:26:32.658571 containerd[1450]: time="2024-07-02T08:26:32.658437408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-855cc674f5-jdjkz,Uid:d914a969-86d7-48ab-ad5e-c9413cd5d500,Namespace:calico-system,Attempt:1,} returns sandbox id \"615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970\"" Jul 2 08:26:32.661610 containerd[1450]: time="2024-07-02T08:26:32.661583630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 08:26:33.561523 kubelet[2534]: E0702 08:26:33.561431 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:33.562581 kubelet[2534]: E0702 08:26:33.562438 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:33.641817 systemd-networkd[1384]: calif7c8c3c3156: Gained IPv6LL Jul 2 08:26:33.860814 containerd[1450]: time="2024-07-02T08:26:33.859868086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:33.860814 containerd[1450]: time="2024-07-02T08:26:33.860750227Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jul 2 08:26:33.861251 containerd[1450]: time="2024-07-02T08:26:33.861211139Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:33.863093 containerd[1450]: time="2024-07-02T08:26:33.863055587Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 08:26:33.864367 containerd[1450]: time="2024-07-02T08:26:33.864330875Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 1.202591914s" Jul 2 08:26:33.864468 containerd[1450]: time="2024-07-02T08:26:33.864366278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jul 2 08:26:33.871901 containerd[1450]: time="2024-07-02T08:26:33.871867717Z" level=info msg="CreateContainer within sandbox \"615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 08:26:33.892190 containerd[1450]: time="2024-07-02T08:26:33.892140961Z" level=info msg="CreateContainer within sandbox \"615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"996556b7821bdb0b0f4cfa34f2729d58aa074e44a3b58e118225ea9e142e3c7a\"" Jul 2 08:26:33.892635 containerd[1450]: time="2024-07-02T08:26:33.892610313Z" level=info msg="StartContainer for \"996556b7821bdb0b0f4cfa34f2729d58aa074e44a3b58e118225ea9e142e3c7a\"" Jul 2 08:26:33.926903 systemd[1]: Started cri-containerd-996556b7821bdb0b0f4cfa34f2729d58aa074e44a3b58e118225ea9e142e3c7a.scope - libcontainer container 996556b7821bdb0b0f4cfa34f2729d58aa074e44a3b58e118225ea9e142e3c7a. 
Jul 2 08:26:34.018961 containerd[1450]: time="2024-07-02T08:26:34.018894796Z" level=info msg="StartContainer for \"996556b7821bdb0b0f4cfa34f2729d58aa074e44a3b58e118225ea9e142e3c7a\" returns successfully" Jul 2 08:26:34.564418 kubelet[2534]: E0702 08:26:34.564024 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:34.574865 kubelet[2534]: I0702 08:26:34.574801 2534 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-855cc674f5-jdjkz" podStartSLOduration=24.370063013 podStartE2EDuration="25.574775636s" podCreationTimestamp="2024-07-02 08:26:09 +0000 UTC" firstStartedPulling="2024-07-02 08:26:32.660270137 +0000 UTC m=+45.342500230" lastFinishedPulling="2024-07-02 08:26:33.86498276 +0000 UTC m=+46.547212853" observedRunningTime="2024-07-02 08:26:34.57322441 +0000 UTC m=+47.255454463" watchObservedRunningTime="2024-07-02 08:26:34.574775636 +0000 UTC m=+47.257005729" Jul 2 08:26:34.601815 systemd-networkd[1384]: calic65c659a5c0: Gained IPv6LL Jul 2 08:26:35.564792 kubelet[2534]: I0702 08:26:35.564735 2534 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 08:26:35.565278 kubelet[2534]: E0702 08:26:35.565244 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:37.334849 systemd[1]: Started sshd@12-10.0.0.93:22-10.0.0.1:40138.service - OpenSSH per-connection server daemon (10.0.0.1:40138). Jul 2 08:26:37.374042 sshd[4726]: Accepted publickey for core from 10.0.0.1 port 40138 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:26:37.375453 sshd[4726]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:26:37.379040 systemd-logind[1428]: New session 13 of user core. 
Jul 2 08:26:37.389851 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 08:26:37.506353 sshd[4726]: pam_unix(sshd:session): session closed for user core Jul 2 08:26:37.520265 systemd[1]: sshd@12-10.0.0.93:22-10.0.0.1:40138.service: Deactivated successfully. Jul 2 08:26:37.522014 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 08:26:37.523403 systemd-logind[1428]: Session 13 logged out. Waiting for processes to exit. Jul 2 08:26:37.525023 systemd[1]: Started sshd@13-10.0.0.93:22-10.0.0.1:40148.service - OpenSSH per-connection server daemon (10.0.0.1:40148). Jul 2 08:26:37.526217 systemd-logind[1428]: Removed session 13. Jul 2 08:26:37.557575 sshd[4741]: Accepted publickey for core from 10.0.0.1 port 40148 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:26:37.558670 sshd[4741]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:26:37.562433 systemd-logind[1428]: New session 14 of user core. Jul 2 08:26:37.566836 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 08:26:37.797084 sshd[4741]: pam_unix(sshd:session): session closed for user core Jul 2 08:26:37.811183 systemd[1]: sshd@13-10.0.0.93:22-10.0.0.1:40148.service: Deactivated successfully. Jul 2 08:26:37.813403 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 08:26:37.814647 systemd-logind[1428]: Session 14 logged out. Waiting for processes to exit. Jul 2 08:26:37.822197 systemd[1]: Started sshd@14-10.0.0.93:22-10.0.0.1:40158.service - OpenSSH per-connection server daemon (10.0.0.1:40158). Jul 2 08:26:37.823127 systemd-logind[1428]: Removed session 14. Jul 2 08:26:37.856899 sshd[4757]: Accepted publickey for core from 10.0.0.1 port 40158 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:26:37.858148 sshd[4757]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:26:37.862322 systemd-logind[1428]: New session 15 of user core. 
Jul 2 08:26:37.871920 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 08:26:39.198999 sshd[4757]: pam_unix(sshd:session): session closed for user core Jul 2 08:26:39.214169 systemd[1]: sshd@14-10.0.0.93:22-10.0.0.1:40158.service: Deactivated successfully. Jul 2 08:26:39.218066 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 08:26:39.219971 systemd-logind[1428]: Session 15 logged out. Waiting for processes to exit. Jul 2 08:26:39.227005 systemd[1]: Started sshd@15-10.0.0.93:22-10.0.0.1:40160.service - OpenSSH per-connection server daemon (10.0.0.1:40160). Jul 2 08:26:39.227541 systemd-logind[1428]: Removed session 15. Jul 2 08:26:39.264274 sshd[4788]: Accepted publickey for core from 10.0.0.1 port 40160 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:26:39.265895 sshd[4788]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:26:39.269784 systemd-logind[1428]: New session 16 of user core. Jul 2 08:26:39.280960 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 08:26:39.533859 sshd[4788]: pam_unix(sshd:session): session closed for user core Jul 2 08:26:39.546343 systemd[1]: sshd@15-10.0.0.93:22-10.0.0.1:40160.service: Deactivated successfully. Jul 2 08:26:39.549119 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 08:26:39.550854 systemd-logind[1428]: Session 16 logged out. Waiting for processes to exit. Jul 2 08:26:39.564279 systemd[1]: Started sshd@16-10.0.0.93:22-10.0.0.1:40168.service - OpenSSH per-connection server daemon (10.0.0.1:40168). Jul 2 08:26:39.567100 systemd-logind[1428]: Removed session 16. Jul 2 08:26:39.594406 sshd[4800]: Accepted publickey for core from 10.0.0.1 port 40168 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:26:39.596325 sshd[4800]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:26:39.600870 systemd-logind[1428]: New session 17 of user core. 
Jul 2 08:26:39.610994 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 08:26:39.734130 sshd[4800]: pam_unix(sshd:session): session closed for user core Jul 2 08:26:39.737312 systemd[1]: sshd@16-10.0.0.93:22-10.0.0.1:40168.service: Deactivated successfully. Jul 2 08:26:39.739688 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 08:26:39.740657 systemd-logind[1428]: Session 17 logged out. Waiting for processes to exit. Jul 2 08:26:39.743392 systemd-logind[1428]: Removed session 17. Jul 2 08:26:40.120371 kubelet[2534]: E0702 08:26:40.120291 2534 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 08:26:44.745834 systemd[1]: Started sshd@17-10.0.0.93:22-10.0.0.1:35184.service - OpenSSH per-connection server daemon (10.0.0.1:35184). Jul 2 08:26:44.780285 sshd[4839]: Accepted publickey for core from 10.0.0.1 port 35184 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw Jul 2 08:26:44.782127 sshd[4839]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 08:26:44.789002 systemd-logind[1428]: New session 18 of user core. Jul 2 08:26:44.797873 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 08:26:44.915910 sshd[4839]: pam_unix(sshd:session): session closed for user core Jul 2 08:26:44.919741 systemd[1]: sshd@17-10.0.0.93:22-10.0.0.1:35184.service: Deactivated successfully. Jul 2 08:26:44.923044 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 08:26:44.926143 systemd-logind[1428]: Session 18 logged out. Waiting for processes to exit. Jul 2 08:26:44.927141 systemd-logind[1428]: Removed session 18. 
Jul 2 08:26:47.378065 containerd[1450]: time="2024-07-02T08:26:47.378004559Z" level=info msg="StopPodSandbox for \"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\"" Jul 2 08:26:47.468737 containerd[1450]: 2024-07-02 08:26:47.426 [WARNING][4867] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7gjkg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"84bf0464-f310-4576-b2d3-b41310345c86", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 26, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b", Pod:"csi-node-driver-7gjkg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali71f1e6852c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:26:47.468737 containerd[1450]: 2024-07-02 08:26:47.427 
[INFO][4867] k8s.go 608: Cleaning up netns ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Jul 2 08:26:47.468737 containerd[1450]: 2024-07-02 08:26:47.427 [INFO][4867] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" iface="eth0" netns="" Jul 2 08:26:47.468737 containerd[1450]: 2024-07-02 08:26:47.427 [INFO][4867] k8s.go 615: Releasing IP address(es) ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Jul 2 08:26:47.468737 containerd[1450]: 2024-07-02 08:26:47.427 [INFO][4867] utils.go 188: Calico CNI releasing IP address ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Jul 2 08:26:47.468737 containerd[1450]: 2024-07-02 08:26:47.447 [INFO][4877] ipam_plugin.go 411: Releasing address using handleID ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" HandleID="k8s-pod-network.bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Workload="localhost-k8s-csi--node--driver--7gjkg-eth0" Jul 2 08:26:47.468737 containerd[1450]: 2024-07-02 08:26:47.447 [INFO][4877] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:26:47.468737 containerd[1450]: 2024-07-02 08:26:47.447 [INFO][4877] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:26:47.468737 containerd[1450]: 2024-07-02 08:26:47.458 [WARNING][4877] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" HandleID="k8s-pod-network.bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Workload="localhost-k8s-csi--node--driver--7gjkg-eth0" Jul 2 08:26:47.468737 containerd[1450]: 2024-07-02 08:26:47.458 [INFO][4877] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" HandleID="k8s-pod-network.bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Workload="localhost-k8s-csi--node--driver--7gjkg-eth0" Jul 2 08:26:47.468737 containerd[1450]: 2024-07-02 08:26:47.461 [INFO][4877] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:26:47.468737 containerd[1450]: 2024-07-02 08:26:47.463 [INFO][4867] k8s.go 621: Teardown processing complete. ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Jul 2 08:26:47.469114 containerd[1450]: time="2024-07-02T08:26:47.468723663Z" level=info msg="TearDown network for sandbox \"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\" successfully" Jul 2 08:26:47.469114 containerd[1450]: time="2024-07-02T08:26:47.468757424Z" level=info msg="StopPodSandbox for \"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\" returns successfully" Jul 2 08:26:47.469580 containerd[1450]: time="2024-07-02T08:26:47.469255053Z" level=info msg="RemovePodSandbox for \"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\"" Jul 2 08:26:47.469580 containerd[1450]: time="2024-07-02T08:26:47.469290335Z" level=info msg="Forcibly stopping sandbox \"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\"" Jul 2 08:26:47.534429 containerd[1450]: 2024-07-02 08:26:47.503 [WARNING][4900] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7gjkg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"84bf0464-f310-4576-b2d3-b41310345c86", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 26, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7defa7dfde59d1d6eec16fef6f89e0da6e7dcf07e0d2d58fa801204a3e5fcf9b", Pod:"csi-node-driver-7gjkg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali71f1e6852c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:26:47.534429 containerd[1450]: 2024-07-02 08:26:47.504 [INFO][4900] k8s.go 608: Cleaning up netns ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Jul 2 08:26:47.534429 containerd[1450]: 2024-07-02 08:26:47.504 [INFO][4900] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" iface="eth0" netns="" Jul 2 08:26:47.534429 containerd[1450]: 2024-07-02 08:26:47.504 [INFO][4900] k8s.go 615: Releasing IP address(es) ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Jul 2 08:26:47.534429 containerd[1450]: 2024-07-02 08:26:47.504 [INFO][4900] utils.go 188: Calico CNI releasing IP address ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Jul 2 08:26:47.534429 containerd[1450]: 2024-07-02 08:26:47.521 [INFO][4907] ipam_plugin.go 411: Releasing address using handleID ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" HandleID="k8s-pod-network.bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Workload="localhost-k8s-csi--node--driver--7gjkg-eth0" Jul 2 08:26:47.534429 containerd[1450]: 2024-07-02 08:26:47.521 [INFO][4907] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:26:47.534429 containerd[1450]: 2024-07-02 08:26:47.521 [INFO][4907] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:26:47.534429 containerd[1450]: 2024-07-02 08:26:47.529 [WARNING][4907] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" HandleID="k8s-pod-network.bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Workload="localhost-k8s-csi--node--driver--7gjkg-eth0" Jul 2 08:26:47.534429 containerd[1450]: 2024-07-02 08:26:47.529 [INFO][4907] ipam_plugin.go 439: Releasing address using workloadID ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" HandleID="k8s-pod-network.bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Workload="localhost-k8s-csi--node--driver--7gjkg-eth0" Jul 2 08:26:47.534429 containerd[1450]: 2024-07-02 08:26:47.531 [INFO][4907] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 08:26:47.534429 containerd[1450]: 2024-07-02 08:26:47.532 [INFO][4900] k8s.go 621: Teardown processing complete. ContainerID="bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61" Jul 2 08:26:47.534822 containerd[1450]: time="2024-07-02T08:26:47.534460349Z" level=info msg="TearDown network for sandbox \"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\" successfully" Jul 2 08:26:47.543895 containerd[1450]: time="2024-07-02T08:26:47.543836921Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 08:26:47.543977 containerd[1450]: time="2024-07-02T08:26:47.543928246Z" level=info msg="RemovePodSandbox \"bb4773947b9c71e9b786c7aeda2c48ee6f906865588e71547d6fba0bb8ac6d61\" returns successfully" Jul 2 08:26:47.544472 containerd[1450]: time="2024-07-02T08:26:47.544449316Z" level=info msg="StopPodSandbox for \"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\"" Jul 2 08:26:47.615393 containerd[1450]: 2024-07-02 08:26:47.577 [WARNING][4929] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c910d3f1-3683-45dd-a240-2bad751e8d45", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 26, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9", Pod:"coredns-7db6d8ff4d-tvnp6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7c8c3c3156", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:26:47.615393 containerd[1450]: 2024-07-02 08:26:47.577 [INFO][4929] k8s.go 608: Cleaning up netns 
ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Jul 2 08:26:47.615393 containerd[1450]: 2024-07-02 08:26:47.577 [INFO][4929] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" iface="eth0" netns="" Jul 2 08:26:47.615393 containerd[1450]: 2024-07-02 08:26:47.577 [INFO][4929] k8s.go 615: Releasing IP address(es) ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Jul 2 08:26:47.615393 containerd[1450]: 2024-07-02 08:26:47.577 [INFO][4929] utils.go 188: Calico CNI releasing IP address ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Jul 2 08:26:47.615393 containerd[1450]: 2024-07-02 08:26:47.597 [INFO][4937] ipam_plugin.go 411: Releasing address using handleID ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" HandleID="k8s-pod-network.70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Workload="localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0" Jul 2 08:26:47.615393 containerd[1450]: 2024-07-02 08:26:47.598 [INFO][4937] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:26:47.615393 containerd[1450]: 2024-07-02 08:26:47.598 [INFO][4937] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:26:47.615393 containerd[1450]: 2024-07-02 08:26:47.607 [WARNING][4937] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" HandleID="k8s-pod-network.70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Workload="localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0" Jul 2 08:26:47.615393 containerd[1450]: 2024-07-02 08:26:47.607 [INFO][4937] ipam_plugin.go 439: Releasing address using workloadID ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" HandleID="k8s-pod-network.70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Workload="localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0" Jul 2 08:26:47.615393 containerd[1450]: 2024-07-02 08:26:47.611 [INFO][4937] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:26:47.615393 containerd[1450]: 2024-07-02 08:26:47.613 [INFO][4929] k8s.go 621: Teardown processing complete. ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Jul 2 08:26:47.616331 containerd[1450]: time="2024-07-02T08:26:47.615426900Z" level=info msg="TearDown network for sandbox \"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\" successfully" Jul 2 08:26:47.616331 containerd[1450]: time="2024-07-02T08:26:47.615450861Z" level=info msg="StopPodSandbox for \"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\" returns successfully" Jul 2 08:26:47.616331 containerd[1450]: time="2024-07-02T08:26:47.615780880Z" level=info msg="RemovePodSandbox for \"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\"" Jul 2 08:26:47.616331 containerd[1450]: time="2024-07-02T08:26:47.615809161Z" level=info msg="Forcibly stopping sandbox \"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\"" Jul 2 08:26:47.680753 containerd[1450]: 2024-07-02 08:26:47.648 [WARNING][4959] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c910d3f1-3683-45dd-a240-2bad751e8d45", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 26, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79a449127426a6c28e34d85e72a9498db8037f57f9d71afc22b6667539864ca9", Pod:"coredns-7db6d8ff4d-tvnp6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7c8c3c3156", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:26:47.680753 containerd[1450]: 2024-07-02 08:26:47.648 [INFO][4959] k8s.go 608: Cleaning up netns 
ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Jul 2 08:26:47.680753 containerd[1450]: 2024-07-02 08:26:47.648 [INFO][4959] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" iface="eth0" netns="" Jul 2 08:26:47.680753 containerd[1450]: 2024-07-02 08:26:47.648 [INFO][4959] k8s.go 615: Releasing IP address(es) ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Jul 2 08:26:47.680753 containerd[1450]: 2024-07-02 08:26:47.648 [INFO][4959] utils.go 188: Calico CNI releasing IP address ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Jul 2 08:26:47.680753 containerd[1450]: 2024-07-02 08:26:47.667 [INFO][4967] ipam_plugin.go 411: Releasing address using handleID ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" HandleID="k8s-pod-network.70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Workload="localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0" Jul 2 08:26:47.680753 containerd[1450]: 2024-07-02 08:26:47.667 [INFO][4967] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:26:47.680753 containerd[1450]: 2024-07-02 08:26:47.667 [INFO][4967] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:26:47.680753 containerd[1450]: 2024-07-02 08:26:47.675 [WARNING][4967] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" HandleID="k8s-pod-network.70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Workload="localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0" Jul 2 08:26:47.680753 containerd[1450]: 2024-07-02 08:26:47.675 [INFO][4967] ipam_plugin.go 439: Releasing address using workloadID ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" HandleID="k8s-pod-network.70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Workload="localhost-k8s-coredns--7db6d8ff4d--tvnp6-eth0" Jul 2 08:26:47.680753 containerd[1450]: 2024-07-02 08:26:47.677 [INFO][4967] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:26:47.680753 containerd[1450]: 2024-07-02 08:26:47.678 [INFO][4959] k8s.go 621: Teardown processing complete. ContainerID="70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16" Jul 2 08:26:47.680753 containerd[1450]: time="2024-07-02T08:26:47.680565073Z" level=info msg="TearDown network for sandbox \"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\" successfully" Jul 2 08:26:47.683724 containerd[1450]: time="2024-07-02T08:26:47.683677009Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 08:26:47.683764 containerd[1450]: time="2024-07-02T08:26:47.683752093Z" level=info msg="RemovePodSandbox \"70c1afeba3b1674739a63f943e33497144ccfe5cb4975fd56d231a0648337f16\" returns successfully" Jul 2 08:26:47.684261 containerd[1450]: time="2024-07-02T08:26:47.684226160Z" level=info msg="StopPodSandbox for \"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\"" Jul 2 08:26:47.755543 containerd[1450]: 2024-07-02 08:26:47.721 [WARNING][4989] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0", GenerateName:"calico-kube-controllers-855cc674f5-", Namespace:"calico-system", SelfLink:"", UID:"d914a969-86d7-48ab-ad5e-c9413cd5d500", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 26, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"855cc674f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970", Pod:"calico-kube-controllers-855cc674f5-jdjkz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic65c659a5c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:26:47.755543 containerd[1450]: 2024-07-02 08:26:47.721 [INFO][4989] k8s.go 608: Cleaning up netns ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Jul 2 08:26:47.755543 containerd[1450]: 2024-07-02 08:26:47.721 [INFO][4989] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" iface="eth0" netns="" Jul 2 08:26:47.755543 containerd[1450]: 2024-07-02 08:26:47.721 [INFO][4989] k8s.go 615: Releasing IP address(es) ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Jul 2 08:26:47.755543 containerd[1450]: 2024-07-02 08:26:47.721 [INFO][4989] utils.go 188: Calico CNI releasing IP address ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Jul 2 08:26:47.755543 containerd[1450]: 2024-07-02 08:26:47.741 [INFO][4996] ipam_plugin.go 411: Releasing address using handleID ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" HandleID="k8s-pod-network.4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Workload="localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0" Jul 2 08:26:47.755543 containerd[1450]: 2024-07-02 08:26:47.741 [INFO][4996] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:26:47.755543 containerd[1450]: 2024-07-02 08:26:47.741 [INFO][4996] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:26:47.755543 containerd[1450]: 2024-07-02 08:26:47.750 [WARNING][4996] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" HandleID="k8s-pod-network.4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Workload="localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0" Jul 2 08:26:47.755543 containerd[1450]: 2024-07-02 08:26:47.750 [INFO][4996] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" HandleID="k8s-pod-network.4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Workload="localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0" Jul 2 08:26:47.755543 containerd[1450]: 2024-07-02 08:26:47.752 [INFO][4996] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:26:47.755543 containerd[1450]: 2024-07-02 08:26:47.753 [INFO][4989] k8s.go 621: Teardown processing complete. ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Jul 2 08:26:47.755986 containerd[1450]: time="2024-07-02T08:26:47.755585606Z" level=info msg="TearDown network for sandbox \"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\" successfully" Jul 2 08:26:47.755986 containerd[1450]: time="2024-07-02T08:26:47.755610887Z" level=info msg="StopPodSandbox for \"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\" returns successfully" Jul 2 08:26:47.756127 containerd[1450]: time="2024-07-02T08:26:47.756101075Z" level=info msg="RemovePodSandbox for \"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\"" Jul 2 08:26:47.756196 containerd[1450]: time="2024-07-02T08:26:47.756134517Z" level=info msg="Forcibly stopping sandbox \"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\"" Jul 2 08:26:47.828053 containerd[1450]: 2024-07-02 08:26:47.793 [WARNING][5019] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0", GenerateName:"calico-kube-controllers-855cc674f5-", Namespace:"calico-system", SelfLink:"", UID:"d914a969-86d7-48ab-ad5e-c9413cd5d500", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 26, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"855cc674f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"615c61f6b6d0f034f9c7e0eeabdc941063b59280f727c2b83b904f2c1d531970", Pod:"calico-kube-controllers-855cc674f5-jdjkz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic65c659a5c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:26:47.828053 containerd[1450]: 2024-07-02 08:26:47.794 [INFO][5019] k8s.go 608: Cleaning up netns ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Jul 2 08:26:47.828053 containerd[1450]: 2024-07-02 08:26:47.794 [INFO][5019] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" iface="eth0" netns="" Jul 2 08:26:47.828053 containerd[1450]: 2024-07-02 08:26:47.794 [INFO][5019] k8s.go 615: Releasing IP address(es) ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Jul 2 08:26:47.828053 containerd[1450]: 2024-07-02 08:26:47.794 [INFO][5019] utils.go 188: Calico CNI releasing IP address ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Jul 2 08:26:47.828053 containerd[1450]: 2024-07-02 08:26:47.812 [INFO][5027] ipam_plugin.go 411: Releasing address using handleID ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" HandleID="k8s-pod-network.4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Workload="localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0" Jul 2 08:26:47.828053 containerd[1450]: 2024-07-02 08:26:47.812 [INFO][5027] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:26:47.828053 containerd[1450]: 2024-07-02 08:26:47.812 [INFO][5027] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:26:47.828053 containerd[1450]: 2024-07-02 08:26:47.821 [WARNING][5027] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" HandleID="k8s-pod-network.4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Workload="localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0" Jul 2 08:26:47.828053 containerd[1450]: 2024-07-02 08:26:47.821 [INFO][5027] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" HandleID="k8s-pod-network.4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Workload="localhost-k8s-calico--kube--controllers--855cc674f5--jdjkz-eth0" Jul 2 08:26:47.828053 containerd[1450]: 2024-07-02 08:26:47.824 [INFO][5027] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:26:47.828053 containerd[1450]: 2024-07-02 08:26:47.826 [INFO][5019] k8s.go 621: Teardown processing complete. ContainerID="4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6" Jul 2 08:26:47.828551 containerd[1450]: time="2024-07-02T08:26:47.828088756Z" level=info msg="TearDown network for sandbox \"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\" successfully" Jul 2 08:26:47.855635 containerd[1450]: time="2024-07-02T08:26:47.855534032Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 08:26:47.855635 containerd[1450]: time="2024-07-02T08:26:47.855641439Z" level=info msg="RemovePodSandbox \"4f55a76c7467fe63d61634495dce775bff0e5149aa062871fa1477b1d5605db6\" returns successfully" Jul 2 08:26:47.856731 containerd[1450]: time="2024-07-02T08:26:47.856165308Z" level=info msg="StopPodSandbox for \"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\"" Jul 2 08:26:47.945840 containerd[1450]: 2024-07-02 08:26:47.902 [WARNING][5049] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d162ea36-1704-488d-8dd5-39e7c8e42991", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 26, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783", Pod:"coredns-7db6d8ff4d-rljt6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac157317798", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:26:47.945840 containerd[1450]: 2024-07-02 08:26:47.903 [INFO][5049] k8s.go 608: Cleaning up netns ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Jul 2 08:26:47.945840 containerd[1450]: 2024-07-02 08:26:47.903 [INFO][5049] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" iface="eth0" netns="" Jul 2 08:26:47.945840 containerd[1450]: 2024-07-02 08:26:47.903 [INFO][5049] k8s.go 615: Releasing IP address(es) ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Jul 2 08:26:47.945840 containerd[1450]: 2024-07-02 08:26:47.903 [INFO][5049] utils.go 188: Calico CNI releasing IP address ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Jul 2 08:26:47.945840 containerd[1450]: 2024-07-02 08:26:47.930 [INFO][5074] ipam_plugin.go 411: Releasing address using handleID ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" HandleID="k8s-pod-network.ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Workload="localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0" Jul 2 08:26:47.945840 containerd[1450]: 2024-07-02 08:26:47.930 [INFO][5074] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:26:47.945840 containerd[1450]: 2024-07-02 08:26:47.931 [INFO][5074] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 08:26:47.945840 containerd[1450]: 2024-07-02 08:26:47.939 [WARNING][5074] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" HandleID="k8s-pod-network.ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Workload="localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0" Jul 2 08:26:47.945840 containerd[1450]: 2024-07-02 08:26:47.939 [INFO][5074] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" HandleID="k8s-pod-network.ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Workload="localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0" Jul 2 08:26:47.945840 containerd[1450]: 2024-07-02 08:26:47.941 [INFO][5074] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:26:47.945840 containerd[1450]: 2024-07-02 08:26:47.943 [INFO][5049] k8s.go 621: Teardown processing complete. ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Jul 2 08:26:47.945840 containerd[1450]: time="2024-07-02T08:26:47.945819231Z" level=info msg="TearDown network for sandbox \"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\" successfully" Jul 2 08:26:47.945840 containerd[1450]: time="2024-07-02T08:26:47.945844353Z" level=info msg="StopPodSandbox for \"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\" returns successfully" Jul 2 08:26:47.947498 containerd[1450]: time="2024-07-02T08:26:47.947448123Z" level=info msg="RemovePodSandbox for \"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\"" Jul 2 08:26:47.947571 containerd[1450]: time="2024-07-02T08:26:47.947493686Z" level=info msg="Forcibly stopping sandbox \"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\"" Jul 2 08:26:48.015937 containerd[1450]: 2024-07-02 08:26:47.983 [WARNING][5101] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"d162ea36-1704-488d-8dd5-39e7c8e42991", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 8, 26, 3, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"805c6a1f3b024e28f687211b2c3f8d2336a3efe1bc9cf497c8572e887bee2783", Pod:"coredns-7db6d8ff4d-rljt6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac157317798", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 08:26:48.015937 containerd[1450]: 2024-07-02 08:26:47.983 [INFO][5101] k8s.go 608: 
Cleaning up netns ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Jul 2 08:26:48.015937 containerd[1450]: 2024-07-02 08:26:47.983 [INFO][5101] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" iface="eth0" netns="" Jul 2 08:26:48.015937 containerd[1450]: 2024-07-02 08:26:47.983 [INFO][5101] k8s.go 615: Releasing IP address(es) ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Jul 2 08:26:48.015937 containerd[1450]: 2024-07-02 08:26:47.983 [INFO][5101] utils.go 188: Calico CNI releasing IP address ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Jul 2 08:26:48.015937 containerd[1450]: 2024-07-02 08:26:48.001 [INFO][5109] ipam_plugin.go 411: Releasing address using handleID ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" HandleID="k8s-pod-network.ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Workload="localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0" Jul 2 08:26:48.015937 containerd[1450]: 2024-07-02 08:26:48.001 [INFO][5109] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 08:26:48.015937 containerd[1450]: 2024-07-02 08:26:48.001 [INFO][5109] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 08:26:48.015937 containerd[1450]: 2024-07-02 08:26:48.009 [WARNING][5109] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" HandleID="k8s-pod-network.ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Workload="localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0" Jul 2 08:26:48.015937 containerd[1450]: 2024-07-02 08:26:48.009 [INFO][5109] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" HandleID="k8s-pod-network.ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Workload="localhost-k8s-coredns--7db6d8ff4d--rljt6-eth0" Jul 2 08:26:48.015937 containerd[1450]: 2024-07-02 08:26:48.012 [INFO][5109] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 08:26:48.015937 containerd[1450]: 2024-07-02 08:26:48.014 [INFO][5101] k8s.go 621: Teardown processing complete. ContainerID="ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56" Jul 2 08:26:48.016390 containerd[1450]: time="2024-07-02T08:26:48.015974721Z" level=info msg="TearDown network for sandbox \"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\" successfully" Jul 2 08:26:48.022728 containerd[1450]: time="2024-07-02T08:26:48.022667057Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 08:26:48.022813 containerd[1450]: time="2024-07-02T08:26:48.022750701Z" level=info msg="RemovePodSandbox \"ae7c094e5996d572b7ba74499a4903286b94a850c14d288dc60b36355da3bf56\" returns successfully" Jul 2 08:26:49.926051 systemd[1]: Started sshd@18-10.0.0.93:22-10.0.0.1:35192.service - OpenSSH per-connection server daemon (10.0.0.1:35192). 
Jul 2 08:26:49.961213 sshd[5132]: Accepted publickey for core from 10.0.0.1 port 35192 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:26:49.962574 sshd[5132]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:26:49.966765 systemd-logind[1428]: New session 19 of user core.
Jul 2 08:26:49.976892 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 2 08:26:50.088501 sshd[5132]: pam_unix(sshd:session): session closed for user core
Jul 2 08:26:50.091785 systemd[1]: sshd@18-10.0.0.93:22-10.0.0.1:35192.service: Deactivated successfully.
Jul 2 08:26:50.093889 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 08:26:50.094784 systemd-logind[1428]: Session 19 logged out. Waiting for processes to exit.
Jul 2 08:26:50.095745 systemd-logind[1428]: Removed session 19.
Jul 2 08:26:55.103940 systemd[1]: Started sshd@19-10.0.0.93:22-10.0.0.1:42978.service - OpenSSH per-connection server daemon (10.0.0.1:42978).
Jul 2 08:26:55.143796 sshd[5148]: Accepted publickey for core from 10.0.0.1 port 42978 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:26:55.145433 sshd[5148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:26:55.150694 systemd-logind[1428]: New session 20 of user core.
Jul 2 08:26:55.163870 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 08:26:55.243752 kernel: hrtimer: interrupt took 4001323 ns
Jul 2 08:26:55.294094 sshd[5148]: pam_unix(sshd:session): session closed for user core
Jul 2 08:26:55.296743 systemd[1]: sshd@19-10.0.0.93:22-10.0.0.1:42978.service: Deactivated successfully.
Jul 2 08:26:55.298258 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 08:26:55.300439 systemd-logind[1428]: Session 20 logged out. Waiting for processes to exit.
Jul 2 08:26:55.301222 systemd-logind[1428]: Removed session 20.
Jul 2 08:27:00.305350 systemd[1]: Started sshd@20-10.0.0.93:22-10.0.0.1:43106.service - OpenSSH per-connection server daemon (10.0.0.1:43106).
Jul 2 08:27:00.344142 sshd[5167]: Accepted publickey for core from 10.0.0.1 port 43106 ssh2: RSA SHA256:9HbrhkLxpgnCs3nSG0YvoCEbdW7v1A0MAhhXbTAhXmw
Jul 2 08:27:00.345347 sshd[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 08:27:00.349294 systemd-logind[1428]: New session 21 of user core.
Jul 2 08:27:00.356860 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 08:27:00.471590 sshd[5167]: pam_unix(sshd:session): session closed for user core
Jul 2 08:27:00.477536 systemd[1]: sshd@20-10.0.0.93:22-10.0.0.1:43106.service: Deactivated successfully.
Jul 2 08:27:00.479513 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 08:27:00.480166 systemd-logind[1428]: Session 21 logged out. Waiting for processes to exit.
Jul 2 08:27:00.481259 systemd-logind[1428]: Removed session 21.